I caught this excellent lecture from last year by Shane Legg, who discusses the future of "intelligent" machines. He goes into some good technical discussions, but much of the lecture is accessible to a general audience as well.
I'm going to summarize a few takeaways, and then supplement them with some of my own thoughts and reactions.
First, Dr. Legg attempts to give a general definition for intelligence, which is not an easy thing to do. Intelligence is not the same as consciousness - which is fortunate because consciousness is even more difficult to define. Dr. Legg's definition is that intelligence is the ability to "perform well in a wide range of environments". I think the key example here is that a machine that's good at only chess is less intelligent than a more general machine that can be taught chess.
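One way to make this definition concrete is to imagine scoring an agent by averaging its performance over many different environments. The toy sketch below is my own illustration of that idea, not anything from the lecture - the environments, weights, and agents are all hypothetical:

```python
# Toy sketch: an "intelligence score" as a weighted average of an
# agent's performance across many environments. All names here are
# illustrative inventions, not from Dr. Legg's lecture.

def intelligence_score(agent, environments, weights):
    """Weighted average of an agent's score over a set of environments."""
    total = sum(w * env(agent) for env, w in zip(environments, weights))
    return total / sum(weights)

# Two stand-in "environments": each takes an agent (a function from a
# task name to a move) and returns a score in [0, 1].
def chess_env(agent):
    return 1.0 if agent("chess") == "good-move" else 0.0

def checkers_env(agent):
    return 1.0 if agent("checkers") == "good-move" else 0.0

# A chess-only specialist versus a more general agent.
specialist = lambda task: "good-move" if task == "chess" else "pass"
generalist = lambda task: "good-move"

envs, ws = [chess_env, checkers_env], [0.5, 0.5]
print(intelligence_score(specialist, envs, ws))  # 0.5
print(intelligence_score(generalist, envs, ws))  # 1.0
```

The specialist does perfectly in one environment and fails in the other, so on this measure it comes out less intelligent than the agent that performs well in both - which matches the chess example above.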
The idea of "learning" and "machine learning" is also different from intelligence, but their connection is that in order to build a more intelligent machine, it has to learn from its environment and from its previous decisions. If a machine can't learn, then its intelligence does not increase beyond its original programming. You can in theory build an intelligent machine that doesn't learn, but it's utterly impractical to pre-program it for a wide range of environments.
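To illustrate the contrast, here's a small sketch (my own, not from the lecture) comparing a fixed agent to one that learns from feedback. The setup is a hypothetical two-armed bandit where one arm pays off far more often than the other:

```python
import random

# Hypothetical two-armed bandit: arm 1 pays off 80% of the time,
# arm 0 only 20%. The numbers are illustrative, not from the lecture.
random.seed(0)
PAYOFF = [0.2, 0.8]

def pull(arm):
    return 1 if random.random() < PAYOFF[arm] else 0

# Fixed agent: never moves beyond its original programming --
# it always pulls arm 0, no matter what rewards it observes.
def fixed_agent(rounds):
    return sum(pull(0) for _ in range(rounds))

# Learning agent: tracks the average reward per arm and mostly
# pulls the best one so far (a simple epsilon-greedy strategy).
def learning_agent(rounds, eps=0.1):
    totals, counts = [0.0, 0.0], [0, 0]
    reward = 0
    for _ in range(rounds):
        if random.random() < eps or 0 in counts:
            arm = random.randrange(2)  # explore
        else:
            arm = 0 if totals[0] / counts[0] > totals[1] / counts[1] else 1
        r = pull(arm)
        totals[arm] += r
        counts[arm] += 1
        reward += r
    return reward

print(fixed_agent(1000))     # stays around 200 of 1000
print(learning_agent(1000))  # climbs well above that
```

The fixed agent's performance is capped by its initial design; the learning agent discovers the better arm from its own decisions, which is the point of the paragraph above.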
There's an interesting discussion at the end of the lecture about how our understanding of neuroscience (and the expected breakthroughs in that field over the next decade) will enhance our ability to create AIs. Interestingly, some of the algorithms that scientists are finding in the brain have already been studied by mathematicians and computer scientists.
Dr. Legg also gives a theoretical basis for how an intelligent machine might work. I want to discuss that, so please wait for my next post!
[This lecture is broken into 10 minute segments, so go to YouTube to find the rest]