When Will Computers Match the Human Brain? Kurzweil vs. Myers

Kurzweil and Myers weigh in on building artificial minds.

10 years? 20 years? 100 years? Never? There are about as many predictions for when artificial intelligence will match human intellect as there are AI researchers. More, really, when you count all the people who simply read about AI research and form their own opinions (I’m in that boat myself). Processing power has been increasing exponentially for years, and few doubt that it will continue to do so for at least a few years longer. Plans are already underway to develop, within the next three years, supercomputers that perform at least as many flops as the human brain is estimated to require. But calculations are not thoughts, and 10^16 calculations per second is not a recipe for cognition. So the question remains: will we be able to use this processing power to accurately model the brain and create an artificial intelligence based on that model? Futurists like Ray Kurzweil say yes, but detractors like PZ Myers have a litany of reasons why the task is out of reach in the near future. Recently this debate has gotten a little ugly.
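Where does a figure like 10^16 come from? It’s a crude back-of-the-envelope estimate: neurons times synapses times firing rate. Here is a minimal sketch of that arithmetic in Python; the per-neuron numbers are commonly cited rough assumptions, not measurements, and reasonable people pick figures an order of magnitude higher or lower.

    # Rough back-of-the-envelope estimate of the brain's raw "processing rate".
    # All figures are order-of-magnitude assumptions, not measured values.
    neurons = 1e11             # ~100 billion neurons
    synapses_per_neuron = 1e3  # assumed average synapses per neuron
    updates_per_second = 100   # assumed average firing/update rate (Hz)

    ops_per_second = neurons * synapses_per_neuron * updates_per_second
    print(f"~{ops_per_second:.0e} operations per second")  # ~1e+16

A 10-petaflop supercomputer would match that raw throughput, which is what the three-year projections refer to; whether that throughput can be organized into cognition is exactly what’s in dispute.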

I attended this year’s Singularity Summit, which among other things is a place where those optimistic about the potential of AI come to discuss the topic. The big draw this year, as with most years, was a talk by Ray Kurzweil. His presentation, titled “The Mind and How to Build One,” explored the complex issues surrounding reverse engineering the brain. Gizmodo wrote up a review of that presentation, and then PZ Myers, a biologist and noted skeptic, used that review as the basis for critiquing Kurzweil’s work on his blog. This attracted the attention of many commenters, both for and against Myers’ critiques, and Gizmodo and Slashdot syndicated Myers’ post. Separating the rational arguments from the ranting accusations isn’t easy, but let me try.

You know things are serious when webcomics start to comment on a debate. Click the image to go to the full strip at Scenes from a Multiverse.

Kurzweil was misquoted in the original Gizmodo article. He stated that different people in the field of AI have different ideas about if and when we’ll be able to reverse engineer the brain (meaning simulate or replicate the brain’s processing techniques). He mentioned that Henry Markram (of the Blue Brain Project) thinks this could be accomplished in the next decade. Kurzweil repeated his own estimate (which he’s stated many times in his books and lectures) that this will likely not occur until the end of the 2020s. What did Gizmodo report? That Kurzweil thinks the brain could be reverse engineered in the next decade. Myers took this as fact when he wrote his critique.

In his Singularity Summit talk, Kurzweil also mentioned that the human brain arises out of the information contained in the genome. He estimates that the genome holds roughly 50 megabytes of data, of which about 25 megabytes is actually needed for the brain. Kurzweil thinks that such data could be described by about one million lines of code. Gizmodo took this to mean that Kurzweil believes the brain (in all its complexity) can be engineered from a million lines of code. Myers was not happy with this line of reasoning. He pointed out the complexities of protein folding, protein-protein interactions, cell-to-cell interactions, and all the other molecular biology systems that are likely necessary to the development of the human mind/brain. Scientists are currently struggling to understand each of these systems, and modeling any of them is likely to require huge amounts of programming and processing power. Myers used this perceived belief as evidence that the futurist has no idea what he’s talking about.
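For what it’s worth, the million-lines figure is a simple information estimate, not a claim that such code exists or would be easy to write. A minimal sketch of the arithmetic, assuming an average source line carries on the order of 25 bytes of information (that bytes-per-line figure is my illustrative assumption, not Kurzweil’s):

    # Rough arithmetic behind the "about one million lines of code" figure.
    brain_related_genome_bytes = 25 * 1024 * 1024  # ~25 MB, per Kurzweil's estimate
    assumed_bytes_per_line = 25                    # illustrative assumption

    equivalent_lines = brain_related_genome_bytes / assumed_bytes_per_line
    print(f"~{equivalent_lines:,.0f} lines of code")  # on the order of one million

Whether 25 megabytes of compactly encoded developmental rules can unfold into a mind is, of course, the very point Myers contests.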

Well, interestingly enough, Kurzweil seemed to agree with many of Myers’ critiques in the parts of his talk Gizmodo didn’t fully explain. First, his mention of the link between the genome and the mind was merely to comment on how complex systems can arise out of relatively little data. He pointed to fractal engineering, the importance of environmental interactions, and other external factors as the processes by which limited data becomes an enormously complex thinking machine. Furthermore, during the question and answer portion of his talk, Kurzweil went on to highlight the importance of education and learning experiences in mind/brain development. This is, in part, why Kurzweil believes Markram’s estimates are too optimistic, or rather why he believes Markram’s simulated brain won’t be an artificial intelligence (at first). If you attended all of Kurzweil’s Singularity Summit lecture and read all of Myers’ blog post, you start to see that both men find reverse engineering the human brain to be a daunting and complex task that we do not yet fully understand.

This is not to say that I think the two would easily hammer out their differences over a glass of wine. Kurzweil’s estimate of 2029 (or so) for the emergence of human-level AI is still very optimistic, and Myers seems to find some of the brain’s molecular systems, and the interactions among them, unlikely to be understood in the near future (perhaps ever). These two have very different ideas of what AI may be able to accomplish in the years ahead.

It’s unfortunate, then, to see these ideas meet at such unseemly angles. Myers should not have based his critiques on a secondhand summary of Kurzweil’s speech. The Singularity Institute will eventually release videos of the Summit presentations, and Myers could have waited to hear Kurzweil’s words for himself. Undoubtedly he would still find things to object to, but they would be things that Kurzweil actually said, and in context. Kurzweil, for his part, might want to make all his talks and slides openly accessible so that critics can reference them directly.

The sad thing is that all of this ad hominem sniping and frenzied internet commentary draws us away from meaningful debate. Here are some questions I have that I would love to see get the same attention as these recent misquotes:

  • Can the principles of operation for the brain be divorced from its architecture? That is, can we build a program that thinks like a human brain but does not need to mimic the cell biology that the brain uses?
  • Is it possible to build an objective measure of intelligence, human or nonhuman? Can we say that program X or person Y is some amount Z more intelligent than another? (Shane Legg has already come up with an equation he thinks would work; see the sketch after this list.)
  • Can we test for consciousness? (Kurzweil has stated that he believes the answer is probably not – Turing Tests may be able to measure the believability of an alleged consciousness but not the consciousness itself).
  • How much processing power will we really need to simulate the human brain at the neuron level? At the molecular level? As a mind?
  • Will we develop artificial intelligence by creating an artificial brain and teaching it to be intelligent?
  • Will we develop artificial intelligence by creating simple learning machines and teaching them to be smarter?
  • Will we develop artificial intelligence at all?
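For reference on that second question, the measure Legg has proposed (with Marcus Hutter) scores an agent by its expected performance across all computable environments, weighting simpler environments more heavily. As I understand it, the formula looks roughly like this:

    % Legg & Hutter's universal intelligence measure (my paraphrase):
    % the agent pi's expected value V in each computable environment mu,
    % weighted by 2^{-K(mu)}, where K is Kolmogorov complexity,
    % so that simpler environments count for more.
    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

Because Kolmogorov complexity is uncomputable, this is a definition to reason with rather than a test you can administer, which is part of why the question above remains open.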

Whether you’re a supporter of Kurzweil, Myers, or the Flying Spaghetti Monster, it would be nice to hear what you have to say about the development of artificial intelligence. Leave a comment and let me know.

[image credit Wikicommons (modified), Scenes from a Multiverse/Jonathan Rosenberg]
[source: Pharyngula, Gizmodo]
