I recently found an excellent article by Bobby Azarian on why a strong AI based on digital computers will never happen (much of it is a concise summary of the arguments of the philosopher John Searle). Here are a few more points:
From a philosophical point of view, we know that strong AI is impossible because induction is not deduction. Computers (including neural networks) are, at best, deductive engines: every output is a mathematical function of the inputs. Human beings, by contrast, are not merely deductive machines; the whole project of human knowledge moves forward only because of inductive leaps.
Now, induction is a massively controversial subject, both regarding what it actually is and regarding whether it is legitimate. Suffice it to say that if you already agree that 1) induction is how we form original principles that embody knowledge about the world; 2) therefore it must be central to what we mean by “intelligence”; 3) induction is not only legitimate but necessary to any claim of knowledge whatsoever (including claims about whether computer-based strong AI is possible); 4) the process of induction is fundamentally different from deduction; then it will be obvious to you why computer-based strong AI is impossible.
If you do not agree that induction is legitimate and fundamentally different from deduction, then we are in a discussion about induction itself, a vast topic that is obviously beyond the scope here. You can find some of my arguments for it in REASON and LIBERTY.
The fundamental units of computing are things like bits, logic gates, and memory cells. You can do a lot with these, but one thing to note about them is that they are extremely simple. Any normal child can understand them, even though they can be combined into a dizzying array of results (video games, compilers, word processors, the Internet). A bit is either on or off and that's it; a logic gate takes certain binary inputs and produces certain outputs and that's it; and so on.
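The simplicity of these units can be made concrete. Here is a sketch in Python (my own illustration, not from the article): a NAND gate is one line of code, every other gate can be built from it, and from those you get the beginnings of arithmetic.

```python
def nand(a: int, b: int) -> int:
    """A NAND gate: outputs 0 only when both inputs are 1."""
    return 0 if (a == 1 and b == 1) else 1

# Every other logic gate can be composed from NAND alone.
def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

def xor(a: int, b: int) -> int:
    return and_(or_(a, b), nand(a, b))

def half_adder(a: int, b: int) -> tuple:
    """Adds two bits, returning (sum, carry) -- the seed of all computer arithmetic."""
    return (xor(a, b), and_(a, b))
```

A child can verify each of these functions by exhaustively checking all four input combinations; that is the whole point. The dizzying results come entirely from composition, not from any depth in the parts themselves.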
On the other hand, atoms are our building blocks (if we talk of the parts of atoms, or the parts of their parts, ad infinitum, the discussion here doesn't change), and while we understand much about them, at present they seem inscrutable to us overall. To name just a few issues: 1) LHC experiments give us a zoo of subatomic particles we don't understand. 2) We can't model even a simple water molecule accurately enough to predict the qualities of water, let alone more complex combinations. 3) Quantum mechanics and general relativity remain unreconciled. 4) Quantum entanglement. Etc.
Will we understand atoms overall (and not just in certain aspects) eventually? We should strive to, certainly, but nothing guarantees that there's a final end of the road here. As Richard Feynman suggested, we may find that we go on forever, always learning new things about them, never finding a tidy "theory of everything."
Bits are incommensurate with atoms. Our own intelligence is one of the most fascinating things in the universe, and it is somehow produced by atoms so complex that our brightest geniuses can't (yet) understand them. There is no reason to suppose that building blocks so trivial that any child can completely understand them could produce similar results, except in the way that a puppet resembles the animal it represents. Computers can make sophisticated puppets, and just as a fish can be tricked into taking a fisherman's fly for the real thing, many humans might take the bait.
Neither Searle nor I claims that strong AI is impossible per se: we humans exemplify intelligence, and if naturally created intelligence is possible, then we have reason to believe an artificial intelligence might be too. The point here is just that it can't be made out of programs running on a computer. Whether it is possible by some other means would depend on the means, of course.