When Will Computers Match the Human Brain? Kurzweil vs. Myers

Kurzweil and Myers weigh in on building artificial minds.

10 years? 20 years? 100 years? Never? There are about as many predictions for when artificial intelligence will match human intellect as there are AI researchers. More, really, when you count everyone who simply reads about AI research and forms their own opinion (I’m in that boat myself). Processing power has been increasing exponentially for years, and few doubt that it will continue to do so for at least a few years longer. Plans are already underway to build supercomputers within the next three years that perform at least as many operations per second as the human brain. But calculations are not thoughts, and 10^16 calculations per second is not a recipe for cognition. So the question remains: will we be able to use this processing power to accurately model the brain and create an artificial intelligence based on that model? Futurists like Ray Kurzweil say yes, but detractors like PZ Myers have a litany of reasons why the task is out of reach in the near future. Recently this debate has gotten a little ugly.
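
Where does that 10^16 figure come from? Here is a back-of-envelope sketch in Python using the kind of order-of-magnitude inputs Kurzweil cites in The Singularity Is Near; none of these numbers is settled neuroscience, so treat the result as a rough estimate, not a fact.

    # Rough sketch of the ~10^16 calculations-per-second estimate for the
    # human brain, using order-of-magnitude figures Kurzweil cites.
    neurons = 1e11                # ~100 billion neurons
    connections_per_neuron = 1e3  # ~1,000 synaptic connections per neuron
    calcs_per_second = 200        # ~200 calculations per connection per second

    brain_cps = neurons * connections_per_neuron * calcs_per_second
    print(f"Estimated brain capacity: {brain_cps:.0e} calculations/second")
    # prints: Estimated brain capacity: 2e+16 calculations/second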

I attended this year’s Singularity Summit, which among other things is a place where those optimistic about the potential of AI come to discuss the topic. The big draw this year, as with most years, was a talk by Ray Kurzweil. His presentation, titled The Mind and How to Build One, explored the complex issues surrounding reverse engineering the brain. Gizmodo wrote up a review of that presentation, and PZ Myers, a biologist and noted skeptic, used that review as the basis for critiquing Kurzweil’s work on his blog. This attracted the attention of many commenters, both for and against Myers’ critiques, and Gizmodo and Slashdot syndicated Myers’ post. Separating the rational arguments from the ranting accusations isn’t easy, but let me try.

You know things are serious when webcomics start to comment on a debate. Click the image to go to the full strip at Scenes from a Multiverse.

Kurzweil was misquoted in the original Gizmodo article. He stated that different people in the field of AI have different ideas about if and when we’ll be able to reverse engineer the brain (meaning simulate or replicate the brain’s processing techniques). He mentioned that Henry Markram (of the Blue Brain Project) thinks this could be accomplished in the next decade. Kurzweil repeated his own estimate (which he’s stated many times in his books and lectures) that this will likely not occur until the end of the 2020s. What did Gizmodo report? That Kurzweil thinks the brain could be reverse engineered in the next decade. Myers took this as fact when he wrote his critique.

In his Singularity Summit talk, Kurzweil also mentioned that the human brain arises out of the information contained in the genome. He estimates that the genome’s data, once compressed, amounts to roughly 50 megabytes, of which about 25 megabytes is actually needed for the brain. Kurzweil thinks that such data could be described by about one million lines of code. Gizmodo took this to mean that Kurzweil believes the brain (in all its complexity) can be engineered from a million lines of code. Myers was not happy with this line of reasoning. He pointed out the complexities of protein folding, protein-protein interactions, cell-to-cell interactions, and all the other molecular biology systems that are likely necessary to the development of the human mind/brain. Scientists are currently struggling to understand each of these systems, and modeling any of them is likely to require huge amounts of programming and processing power. Myers used this perceived belief of Kurzweil’s as evidence that the futurist has no idea what he’s talking about.
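
The arithmetic behind “a million lines of code” is easy to reproduce. In the sketch below the megabyte figures are Kurzweil’s estimates as reported above; the bytes-per-line divisor is a hypothetical average chosen purely for illustration.

    # Sketch of the compression argument. The byte counts are Kurzweil's
    # estimates; bytes_per_line is a hypothetical average for illustration.
    genome_compressed = 50e6  # ~50 MB of genome data after compression
    brain_share = 0.5         # ~25 MB of that attributed to the brain
    bytes_per_line = 25       # assumed average length of a line of code

    lines_of_code = genome_compressed * brain_share / bytes_per_line
    print(f"roughly {lines_of_code:,.0f} lines of code")
    # prints: roughly 1,000,000 lines of code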

Well, interestingly enough, Kurzweil seems to have agreed with many of Myers’ critiques in the parts of his talk that Gizmodo didn’t fully explain. First, he mentioned the link between the genome and the mind merely to comment on how complex systems can arise out of relatively little data. He pointed to fractal engineering, the importance of environmental interactions, and other external factors as the processes by which limited data becomes an enormously complex thinking machine. Furthermore, during the question and answer portion of his talk, Kurzweil went on to highlight the importance of education and learning experiences in mind/brain development. This is why, in part, Kurzweil believes Markram’s estimates are too optimistic, or rather why he believes Markram’s simulated brain won’t be an artificial intelligence (at first). If you attended all of Kurzweil’s Singularity Summit lecture and read all of Myers’ blog post, you start to see that both men find reverse engineering the human brain to be a daunting and complex task that we do not yet fully understand.

This is not to say that I think the two would easily hammer out their differences over a glass of wine. Kurzweil’s estimate of 2029 (or so) for the emergence of human-level AI is still very optimistic, and Myers seems to find some of the molecular systems in the brain, and the interactions among them, beyond our comprehension in the near future (perhaps forever?). These two have very different ideas of what AI may be able to accomplish in the years ahead.

It’s unfortunate, then, to see these ideas meet at such unseemly angles. Myers should not have based his critiques on a secondhand summary of Kurzweil’s speech. The Singularity Institute will eventually release videos of the Summit presentations, and Myers could then have heard Kurzweil’s words for himself. Undoubtedly he would still find things to object to, but they would be things that Kurzweil actually said, and in context. Kurzweil, for his part, might want to make all his talks and slides openly accessible so that critics can reference them directly.

The sad thing is that all of this ad hominem and frenzied internet commentary really draws us away from meaningful debate. Here are some questions I have that I would love to see get the same attention as these recent misquotes:

  • Can the principles of operation for the brain be divorced from its architecture? That is, can we build a program that thinks like a human brain but does not need to mimic the cell biology that the brain uses?
  • Is it possible to build an objective measure for level of intelligence, either human or nonhuman? Can we say that X program or Y person is Z more intelligent than another? (Shane Legg has already come up with an equation he thinks would work; see the sketch after this list.)
  • Can we test for consciousness? (Kurzweil has stated that he believes the answer is probably not – Turing Tests may be able to measure the believability of an alleged consciousness but not the consciousness itself).
  • How much processing power will we really need to simulate the human brain at the neuron level? At the molecular level? As a mind?
  • Will we develop artificial intelligence by creating an artificial brain and teaching it to be intelligent?
  • Will we develop artificial intelligence by creating simple learning machines and teaching them to be smarter?
  • Will we develop artificial intelligence at all?
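
On the second question above, the equation Shane Legg has proposed (with Marcus Hutter, in their paper “Universal Intelligence: A Definition of Machine Intelligence”) scores an agent by the reward it can earn across all computable environments, weighted toward simpler ones. A sketch of the measure, in their notation:

    % Legg & Hutter's universal intelligence measure (sketch).
    % \pi is an agent, E the set of computable reward-bearing environments,
    % K(\mu) the Kolmogorov complexity of environment \mu, and V^\pi_\mu the
    % expected total reward that \pi earns in environment \mu.
    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

Since Kolmogorov complexity is uncomputable, any practical test would have to approximate this sum, which is part of why an objective measure of intelligence remains an open question.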

Whether you’re a supporter of Kurzweil, Myers, or the Flying Spaghetti Monster, it would be nice to hear what you have to say about the development of artificial intelligence. Leave a comment and let me know.

[image credit Wikicommons (modified), Scenes from a Multiverse/Jonathan Rosenberg]
[source: Pharyngula, Gizmodo]

58 comments

  • Brad Arnold says:

    Hopefully software engineers don’t make the fundamental mistake of AI: trying to model the software on the human brain. Do the best chess programs model their analysis on human thought patterns? Furthermore, I suggest you AI fans come to the realization that private AI research may be light years ahead of what is visible from a public perspective. It is in the best interests of corporations to create and use an AI without public scrutiny (or the knowledge of their competitors).

    Frankly, most people have no clue about the secret government, or about the vast difference in available technology between it and the rest of humanity.

    • Ciantic says:

      I think the best example of this in history is cryptography research: there is a great need for good algorithms that are closed and privately owned, but still the best and most reliable algorithms are the open ones. Secondly, even if there were a huge push from private companies into closed AI research, their money, time, and (most importantly) collaboration capabilities are in no way comparable to universities or other open institutes.

      Imagining private companies creating a web of “underground” researchers is rather ridiculous, since a company’s main purpose is to make profit, usually by competing with the others. There is no real incentive to share knowledge even behind closed doors. (There probably is some sharing, but nothing comparable to universities.) It’s widely known that universities, on the other hand, are in a perfect position to do research on subjects that are not likely to yield profit in the near future, or perhaps ever, to the researchers themselves or the heads of universities.

      I suspect privately owned closed research is not as glorious as you may think. It is very scattered, and not very competitive outside the few small areas of research where a small group of people can make a difference.

  • Emanuel says:

    Regarding the test for consciousness, I think consciousness is simply far too vague a concept. To have a measurable quantity, I think we should focus on self-awareness – the development of the ego. At the most basic level this is characterized by things like recognizing yourself, but it extends to things like self-reflection and the ability to understand other minds. The Turing test does not test this directly, but it may turn out that to always answer believably, a computer may need to understand what the person on the other end is thinking rather than being trained merely on a loose collection of concepts. And to do that, it would need a model of itself (well, the software) as a baseline.

  • Mark Bruce says:

    I found it quite disappointing to read PZ Myers’ post, which amounted to little more than a prolonged – possibly gratuitous – intellectually lazy straw-man argument. He took one quote from RK – completely out of context – and proceeded to tear it down. Possibly more disappointing was watching rampant confirmation bias kick in, with most of the commenters not recognising this and more than happy to jump on the bandwagon heading for the RK witch hunt.

    When RK actually talks about reverse engineering the brain it is never from the perspective of a computer simulation starting from first principles and the compressed genome; rather it is brain scanning – destructive and otherwise – neural network modelling, etc., along the lines of techniques that are being used in the Blue Brain Project and the Human Connectome Project, for example.

    And when you see the progress to date, a 15–25 year timeframe seems eminently feasible. Especially when you consider that teasing out the design principles for intelligent processing systems allows you to build intelligent problem-solving machines that will accelerate progress (modelling / simulation / testing) in this field and assist us toward full-brain (conscious?) simulations on non-organic machine substrates not too much after that.

    And later, in future, once the computational resources at our disposal reach astronomical proportions (from a primitive 2010 perspective) we’ll probably run brain simulations from first principles starting at the genome scale just for fun anyway :)

  • Frii says:

    I’m a novice. Wouldn’t you want to use algorithms as a reflex-type action for things we do without thinking about them? I don’t think about walking; that’s just part of what has to happen to get from A to B.

    Then there are the things you know. If I see a rake, I know everything I’ve learned about it and can easily decide from that menu what needs to be done, based on past experiences.

    Now, learning and self-direction are the hard parts, and that’s probably what you’re trying to solve. Drivers: other people telling you to do something, self-pride, wanting to be helpful, survival/pain/hunger. Maybe it’s the wanting that the AI needs.

  • Michal says:

    Since the term “consciousness” is undefined, it is _impossible_ to build a machine that is conscious. Whatever you build, some people will say that it is not what they expected. The only real questions are: which tasks currently doable only by humans will machines be able to perform, and when? There are several very interesting tasks that already are being addressed, or probably will be in the near future: education, surveillance, warfare, etc. But the crucial one, and the one that _will_ change the whole civilization, is science. The only real question about AI is: when will we be able to create a machine that becomes a better scientist than a human being?

  • John Newbury says:

    Q. Can the principles of operation for the brain be divorced from its architecture? …

    A. Provably possible in principle if brains are machines, which they surely are – a brain can then be simulated by a Turing machine. Add a genuine/quantum random number generator if you wish, but no detectable difference.

    Probably possible in practice too – that is the big question – but I think everyone is guessing just now. Like controlled fusion and manned Mars missions, it’s been 30 years away for many decades, and will probably remain so for some decades.

    Partly depends when we agree that we have done it. To date, any aspect of intelligence that has been achieved has typically been dismissed as not part of “true” intelligence after all! Passing a decent Turing Test (not Loebner rubbish) would be sufficient but not necessary. Probably only long after becoming intelligent and conscious (by any useful definitions) would one adequately model the nuances of human trivia to pass such a test! (C3PO would be fine: very intelligent and surely very conscious, but would quickly fail a Turing test!)

    Q. Is it possible to build an objective measure for level of intelligence, either human or nonhuman? …

    A. Probably no single measure. It would at least depend on the environment that it had to be optimal for. E.g., Deep Blue is better than any human in a chess environment. The best of humans would also probably fare badly if they had to operate in a chimp or whale society, even if somehow brought up as a chimp or whale from birth. Bats and electric eels would surely be better at interpreting (and making sensible decisions about) their sensory world than we would be, even if we had their senses. Also, some intellectual competitions are intransitive, e.g., A beats B, B beats C and C beats A, even in a given environment, due to their different approaches – no scalar measure works here; see the toy sketch below. (I have demonstrated this for the game of Diplomacy – see http://johnnewbury.co.cc/diplomacy/tournaments/tournament9.htm).

    Even a brain that could, in principle, cope well in any environment would not actually do so until it had gained enough knowledge of a specific environment – until then it would not act intelligently there (e.g., failing a Turing test because it did not know the language being used, or the trivia known by any human).
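
    To make the intransitivity concrete, here is a minimal toy sketch in Python – a hypothetical three-strategy payoff relation, not data from the Diplomacy tournaments linked above:

        # Toy intransitive competition: A beats B, B beats C, C beats A.
        # No scalar rating can order these three strategies consistently.
        beats = {("A", "B"), ("B", "C"), ("C", "A")}

        def winner(x, y):
            return x if (x, y) in beats else y

        print(winner("A", "B"), winner("B", "C"), winner("C", "A"))  # A B C
        # Any ranking with A > B > C is contradicted by C beating A.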

    Q. Can we test for consciousness? …

    A. With an ego-centric Cogito Ergo Sum philosophy, consciousness can never be demonstrated even in other humans; with the equivalent human-centric philosophy (a la Searle), consciousness can never be demonstrated in machines. But with a more useful and (therefore) sensible quacks-like-a-duck philosophy, it surely can be demonstrated (by a decent Turing test, or something fairer, but obviously more than effectively only saying “I am conscious!”).

    Q. How much processing power will we really need to simulate the human brain at the neuron level?

    A: Surely the neuron (cell) level and below need not be simulated (even if neural nets are used) – especially not Penrose’s quantum level. We need not copy brains directly: planes use aerodynamics like birds, but not flapping wings, feathers, etc.

    Q. Will we develop artificial intelligence by creating an artificial brain and teaching it to be intelligent? Will we develop artificial intelligence by creating simple learning machines and teaching them to be smarter?

    A: Surely impractical to hand-code everything. Needs some initial hand-coded basic intelligence, to acquire more intelligence (heuristics and facts that are useful for its environment). However, the old AI dream of a simple bootstrap to intelligence has failed – it obviously needs a lot more initial human intervention – unless we are prepared to devote geological amounts of space-time, say as much as was used by the evolution of the human brain.

    Q. Will we develop artificial intelligence at all?

    A. Surely yes – unless we destroy ourselves first.

  • Frii says:

    I am a novice and would like to add to my above comment. Could an internal clock that acts as a motor to drive an AI’s thought process be part of the solution? The AI would have a film-like program constantly running through it, like a projector: the pictures, sounds, smells, tastes, and touches it takes in are compared to similar past memories, and from these it formulates things to do, like play, or understanding and thinking.

  • Reese Jones says:

    There is less drama or controversy in views on AGI & bio-brains than may appear.

    Most of the (non-biologist) AGI (artificial general intelligence) workers would acknowledge that the “intelligence” they are attempting to model by “artificial… implementations” (computer based) is merely for linguistic tasks, and by no means a full biological neurophysiologic intelligence that performs all the skills of a human or animal. These Turing-test-level GAI attempts answer linguistic questions via dialog and solve problems – with an input/output interface of abstract language/text to pass the Turing AI test (relatively, a very narrow tiny fraction of the functionality of a whole brain).

    Speculation of a computer passing a Turing test within 20 years is not a pretense that this equates to a fully “reverse engineered” then fully simulated biological brain. Such simulation would be focused on the human linguistic & conceptual capabilities as different from a somewhat less intelligent primate or compromised functional human. Likely different in less than 1% of the brain’s functions.

    The AGI (GAI) being designed for passing a Turing test would not “know” basic anatomical bio-functions: how to breathe, feel hunger, eat a banana, hunt for food or water, or physically interact within relationships, mate/breed, or survive ecosystem changes. A primate’s brain is anatomically, neurobiologically “wired” and neurochemically very similar to a human brain and can do all these intelligent biological tasks.

    But this very similar primate’s neurophysiology can’t pass an AI Turing test (and even some compromised humans cannot pass this test). A Turing test passing GAI simulator implemented in a computer (or any substrate) would only be modeling the linguistic & memory dynamics unique to the subtle distinction between the human and ape’s cognitive capability for example (<1%). Very far from a "full brain" simulation.

    Such GAI systems may have a human-like conversation/dialog via text – but would not "know" how to feel or scratch a flea off using a hind leg (a task trivial for a rodent brain's "intelligence").

    Computer AI has been demonstrated to function better than humans in very narrow knowledge/decision "verticals", e.g. chess, search, or directory services. By linking multiple of these verticals, the combined AIs become more general (GAI) – but still narrowly linguistic – very far from any neurophysiological function of a complex organism, and very, very far from simulating a full brain or human.

    The polarized views’ sensationalist hype is exaggerated by the media to make a more attractive drama… but from personal experience: most people working in these areas appreciate the many unknown complexities and myriad subtleties, including Kurzweil, Sandberg and PZ Myers. But each has personal domain-expertise biases (and limits) in approaching and addressing this complex problem, which is intrinsically multidisciplinary.

  • David Wood says:

    Though I tend to believe Kurzweil’s timeline is the correct one for AGI, I don’t think I have ever seen a breakdown of the calculations he uses to reach his various target dates for landmarks in AGI. Can anyone point me to such a breakdown, or has he not made those calculations available?

    • John Newbury says:

      On the contrary, I find Kurzweil’s extrapolations dubious in general, e.g., in his fascinating but flawed book, “The Singularity Is Near”. (This may be the best reference for his evidence and timescales, albeit probably updated since.) Too many are based on extrapolating apparent linearity in log-log graphs (apparently scale-free phenomena), which are well known to be vastly more prone to extrapolation error than even extrapolations of lines on linear scales. All known apparently scale-free real-world phenomena tail off eventually, in each direction. Many of his examples are far from straight lines anyway, often erratic or tailing off at the present, since they cover vastly greater ranges of physics, technology, sociology, or whatever. Furthermore, some of the data is blatantly subjective, such as the rate of “paradigm shifts”. (As each new technology is explored, even if growth is still exponential, the rates may be orders of magnitude different, e.g., when logic chips hit atomic limits and have to explore sub-atomic or more overtly quantum domains, or maybe can only exploit parallelism.) Who knows how many inconvenient contrary indicators were conveniently omitted.
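
      To see why extrapolating a fitted straight line on log axes is so fragile, consider a toy calculation; the growth rates below are hypothetical, chosen only to show how a small slope error compounds multiplicatively over time:

          # A small error in an exponential trend's fitted slope compounds
          # into a large multiplicative error at the extrapolated date.
          years_ahead = 20
          doublings_per_year_fit = 1.0   # fitted: capacity doubles yearly
          doublings_per_year_true = 0.9  # true growth just 10% slower

          overshoot = 2 ** ((doublings_per_year_fit - doublings_per_year_true) * years_ahead)
          print(f"Prediction overshoots reality by {overshoot:.0f}x after {years_ahead} years")
          # prints: Prediction overshoots reality by 4x after 20 years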

      Is there no danger in global war or global warming, say, because someone has always found the necessary technical and political fixes before, allowing the rest of us to ignore the problems as usual? Dangerous assumption! Is there always growth (let alone exponential and especially double exponential growth) of an economy, population or product? No!

      Even if computation power forever expands at its current rate (perhaps even with a double exponent), sensors and manipulators may have a vastly lower improvement rate (e.g., if their development likewise continues at its present rate, appearing almost linear), thereby becoming the bottleneck to exploring the laws and space of the universe. If not them, then some other weak link will appear (if past experience continues) – one way or another, always postponing even a close approach to Kurzweil’s singularity. Or we may bomb ourselves back to the stone age, then without even the prospect of easy-to-obtain oil and minerals when ready to use them once more. (If we approach the singularity too quickly, say as fast as Kurzweil believes and hopes, rather than immortality, we may even dramatically speed up our journey into oblivion!)

      • David Wood says:

        John, are you aware of Kurzweil’s specific process for reaching his target dates? I’m not talking about a general process or things like the log-log graphs, I’m talking about the specific mathematical process used to reach specific date ranges for AGI. I’m looking for a comprehensive mathematical formula he is using for that, if any such thing exists. I pretty much know how he derives projections for things like the increase of solar power electrical generation, but not about AGI. Guess I should just email him and ask straight out.

        As to our species’ oblivion, I think we can look at the trajectory of the rise in complexity of biological life as a comforting analogy to be confident of our species’ continued existence. Did bacteria go extinct when eukaryotes came into being, or when multicellular life began? Did all less intelligent species start going extinct when the first creature with a cerebral cortex emerged? No. Some specific species did get out-competed, but in the current world there are niches for the whole range of life’s complexity, from archaebacteria to human beings.

        I believe that similarly, baseline human intelligence will still exist in the future and there will be a continuum between it and the most advanced non-human or augmented-human intelligences.

        • Brent says:

          In “The Singularity Is Near” RK goes into the number of neurons in the human brain, the number of interconnections between them, the speed of signaling, … and comes up with the calculations per second the human brain is capable of. His book is in libraries, …
          I’m sure this all gets updated in his newest book.

          • David Wood says:

            Thanks Brent. I’ve read “The Singularity is Near” a couple of times and am aware of his calculations for arriving at how many calculations per second the human brain is capable of and plotting when computers will achieve that benchmark, but I think Kurzweil would be the first to point out that human level calc/sec does not equal human level ‘intelligence’, which is what I am referring to in my question.

        • John Newbury says:

          David,

          Sorry – I have no more info than in “The Singularity Is Near”, which you have evidently read twice as many times as me. :-) In any case, I doubt that any formula could sensibly be used, except perhaps to encapsulate our current, highly unreliable and subjective beliefs. Or perhaps, like Drake’s equation for the probable number of civilisations in the universe, it is only useful to help define where our ignorance lies, not for any prediction! I have not seen the formula mentioned in the article.

          Regarding confidence that we will avoid oblivion by analogy with the history of life: a dangerous analogy! First, there is evidence (from an apparent genetic bottleneck, indicating a small population) that mankind was already nearly wiped out in the early days. Neanderthals did, indeed, die out, albeit probably only due to competition from H. sapiens. The MAD cold-war philosophy, and the Cuban missile crisis in particular, brought us extremely close to extinction.

          Similarly for the analogy with life: It has made many experiments over the aeons, most of them failures. Life hung on, it is true, but many experiments died out without leaving derivative forms. On a count of species, or even larger groups, almost all have died out! Early life produced oxygen, which eventually poisoned all but a very few of the lineages that use that mode of life! Dinosaurs, as a group, had a very good run, but eventually proved to have made the wrong long-term evolutionary choice.

          Maybe intelligence will prove to be, in the long-term (yet vastly shorter than for evolution of the past, e.g., given Kurzweil’s evidence) a bad experiment. Never before did a species have the ability, and hair trigger mechanisms, to destroy all members of its species (and most others besides, and their ecosystems – probably all the more complex ones) much faster than evolution has ever found a way out before. Furthermore, unlike most species, we tend to put all our eggs in one basket, in the name of efficiency, but at the expense of resilience. Homo sapiens is very intelligent, and can do marvels, but seems yet to live up to the sapiens bit.

          Philosophical aside: By the weak anthropic principle, of course “our” quantum world line survived, but maybe more by luck than our good management. My guess is that it was mainly luck. Anyway, we certainly cannot use our survival as evidence that those policies were good ones and should therefore be continued! The quantum probability amplitude of humanity, and hence the classical probability, may well have been reduced to a sliver in that period! However, on that basis, in reply to your last point, I believe that some continuity with our current civilization _must_ continue forever, however slender and of little import in the scheme of things! Our goal should surely be to thicken the link to the future :-)

  • Cru says:

    Every critic of Kurzweil seems to be so wrapped up in emotion that they lose objectivity. It’s as if they’ve got something at stake to lose if artificial intelligence or indefinite lifespans come about… perhaps credibility.

    • David Wood says:

      I agree, but I wouldn’t say every critic. I don’t think Kurzweil would say that either. I do think that a lot of the critique against Kurzweil and his predictions is born out of fear for one’s own future and humanity’s; selective biases sneak in to prove why Kurzweil isn’t right, because if he were, the critics think the future would be awful.

    • John Newbury says:

      Cru,

      I, for one, do not match your description of Kurzweil’s critics (one counterexample being sufficient to refute your assertion :-)). As illustrated by my posts above, I am trying to insert some objectivity here, mostly indicating when something is only my opinion, if not obvious. My stance, in summary, is that too many people, especially those who should know better, assume that we can predict better than we can. (There is plenty of objective evidence that strongly supports my view, especially for the open-ended systems involved.)

      As far as emotional preference is concerned, I would love RK to be right, and possibly have immortality (and more rationality) before too late for me. To paraphrase a smarmy comment by the newsreader in the Simpsons: I, for one, welcome our new robotic overlords. :-)

    • Afterthought says:

      The emotion is that which is invested in his guru status.

      The critics are simply tired of his repetitive and unchallenged statements.

  • neurotruth says:

    Kurzweil is a celebrity, not a neuroscientist, and in fact, in his book and elsewhere, he evinces a clear ignorance of neuroscience and neuroanatomy. So anything he says relating to neuroscience, including predictions concerning reverse engineering the brain, loses all credibility with me. The only people who actually buy into his nonsense are those who don’t know any better or who are not very rigorous in their thinking.

    Btw, I’m not a skeptic. We will reverse engineer the brain in the not so distant future. But cheerleaders like Kurzweil, who lack depth of understanding in neuroscience, make a mockery of the whole effort. His whole function is seemingly to exploit the gullible and stroke his ego. He’s pretty pathetic if you ask me, and in the long term, irrelevant.

    Just a neuroscientist’s 2 cents.

    • David Wood says:

      Have you ever considered that people like Kurzweil help to generate interest in such things as neuroscience, information technology, genetics, nanotechnology and other relevant fields?

      Furthermore, that this interest has the effect of increasing both private and public research in these fields?

      Who employs you? I would imagine that we can trace at least some of your research funds to public money. You may scoff at Kurzweil, but what he’s doing with the Singularity Summit, his books, his foundations, and kurzweilai.net will result in more funds for crucial research in the fields you say he lacks depth in.

      I will add that you seem to be determining his depth of knowledge from what he presents in non-technical books and lectures to general audiences, which would be a poor means of determination and hardly what I would call objective or scientific.

      Why don’t you just email him with specific objections and questions concerning his conclusions and see how he responds.

  • Tom says:

    Reverse engineering the brain and its function, the mind, is what cognitive scientists do.

    I find Kurzweil, Myers and, more excusably, the author of this article, sadly under-informed about the advances of cognitive science.

    To answer the seven bulleted questions:

    1. Yes.
    2. Yes, for both human and machine. Though a single number like IQ isn’t a complete description, it is still possible to measure it objectively. More complex descriptions of different types of intelligence can also be measured objectively.
    3. An ill-posed question: the word consciousness has too many different meanings. There is a meaning that practically has the impossibility of a test built into its definition, but even then we can reason about whether we expect it to exist. The impossibility of a test means it can’t be relevant to our survival, so it must have arisen by accident, albeit a happy one. I am not convinced that all humans have it, but I am convinced, on probabilistic grounds, that I’m not the only one. It is not as easy to reason in this way about a mind that doesn’t share a common origin with ours. I am not sure why the author has raised this issue, as this type of consciousness is neither necessary nor sufficient for intelligence.
    4. Part 1: Don’t know. Who cares? Part 2: Don’t know. Who cares? Part 3: For an emulation, about 1.5 times the processing power of the original mind. For a simulation, a fraction of the processing power of my desktop will do, but a better understanding of cognitive science is required.
    5. & 6. No. The human mind is not a general learning machine.
    7. Yes.
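
    To make (2) concrete: a deviation IQ is just a raw test score normalized against a reference population to a mean of 100 and a standard deviation of 15, and nothing in that arithmetic cares whether the test-taker is a person or a program. A minimal sketch, with hypothetical population statistics:

        def iq_score(raw, population_mean, population_sd):
            """Convert a raw test score to a deviation IQ (mean 100, SD 15)."""
            z = (raw - population_mean) / population_sd  # standard score
            return 100 + 15 * z

        # A raw score one standard deviation above the (assumed) population
        # mean maps to IQ 115, for a human or a machine alike.
        print(iq_score(raw=130, population_mean=100, population_sd=30))  # 115.0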

  • quoter says:

    “We will succeed in reverse engineering the human brain by the 2020s.”

    – Ray Kurzweil, TED Talks, uploaded January 2007.

    http://www.youtube.com/watch?v=IfbOyw3CT6A
    (18:40 to 18:55)

  • carl vilbrandt says:

    Missing from the discussion and approach to AI is that the brain is only a very small part of a complex system, embedded in an even more complex natural environment. What is needed is a basic general modeling system for dynamic, complex volumetric objects equal in complexity to natural objects: a virtual space as complex as, or an extension of, the natural world. It is the internal map of the natural world we all create. When we can model at that level of complexity, AI will emerge. Both a brain and a virtual map are needed for AI.
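
    One toy way to picture a “general modeling system for dynamic complex volumetric objects” is an implicit model such as a signed distance function sampled onto a voxel grid. The sketch below is purely illustrative, not anything proposed by the commenter or by Kurzweil:

        import math

        def sphere_sdf(x, y, z, radius=1.0):
            """Signed distance to a sphere at the origin; negative inside."""
            return math.sqrt(x * x + y * y + z * z) - radius

        def voxelize(sdf, n=16, extent=1.5):
            """Sample the SDF on an n^3 grid; True marks voxels inside."""
            step = 2 * extent / (n - 1)
            coords = [-extent + i * step for i in range(n)]
            return [[[sdf(x, y, z) <= 0 for z in coords] for y in coords]
                    for x in coords]

        grid = voxelize(sphere_sdf)
        inside = sum(v for plane in grid for row in plane for v in row)
        print(inside, "of", 16 ** 3, "voxels are inside the sphere")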

  • Tarwin Stroh-Spijer says:

    If you’re looking for a way to judge consciousness, why not do what we do every day? How willing are you to pull the plug?

  • Cracked LCD says:

    I somehow doubt that PZ will ever admit that his critique was based on secondhand sources quoting Kurzweil out of context.

  • MK says:

    It is important to precisely define what kind of AI we need and what for. All these discussions include vague terms such as consciousness, intelligence, etc. First, we need AI that is able to make lives better and free the enslaved world.
    I refuse to dream about friend-like robots for spoiled rich kids while two-thirds of the world lives like underfed pack mules. Any AI that would lift that pressure from our shoulders is the AI we need most at the moment. And that IS achievable. The more concerning problems are our own civilizational inertia, finance and politics, and the slavish (robot-like) mentality of most of the world’s people.
    The phrase “there is no free lunch” has to be forgotten if we want to be ready for this next phase. Afterwards, all kinds of AI will emerge without problems.

  • Richard says:

    I understand, but I am not saying anything, because Kurzweil would say that in any mood. In fact, I am in the computer business, but I do not agree with you!

  • Tess Malone says:

    I’d like a more comprehensive explanation as to how “intelligence” would be defined. For example, if it means the same sort of intelligence that has people in 2010 believing in gods, angels, aliens, demons, spirits, guides, ghosts, miracles, preordained meaning and purpose, ancient entities with mysterious plans, and some sort of self-appointed moral dictate to outlaw human sexuality as the highest sin of man, then I think the proper model to simulate that sort of intelligence should be pretty easy to come by in your nearest box of rocks.

  • bob348 says:

    “Can the principles of operation for the brain be divorced from its architecture?” Of course; why couldn’t they be?

    “That is, can we build a program that thinks like a human brain but does not need to mimic the cell biology that the brain uses?” Yes; why couldn’t we, once we understand the brain?

    “Is it possible to build an objective measure for level of intelligence, either human or nonhuman? Can we say that X program or Y person is Z more intelligent than another?” Yes, many of them, and different computers, people, or animals would score differently on each. The question is how to define “intelligence”; you can’t measure it until you’ve defined what “it” is. All this liberal nonsense about IQ vs. EQ and the like is a good example.

    “Can we test for consciousness? (Kurzweil has stated that he believes the answer is probably not: Turing Tests may be able to measure the believability of an alleged consciousness, but not the consciousness itself.)” No. How could you? Can you prove to me that you are conscious? How?
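
    The distinction becomes obvious once the protocol is written down: all a Turing Test can return is the fraction of judges who fail to identify the machine, a believability score rather than a consciousness detector. A minimal sketch, with placeholder lambdas standing in for real human judges and interlocutors:

        import random

        def run_turing_test(machine_reply, human_reply, judge, trials=100):
            """Return the fraction of trials in which the judge was fooled."""
            fooled = 0
            for _ in range(trials):
                machine_is_a = random.random() < 0.5  # randomize the sides
                a, b = ((machine_reply, human_reply) if machine_is_a
                        else (human_reply, machine_reply))
                question = "How do you feel today?"
                guessed_a_is_machine = judge(a(question), b(question))
                if guessed_a_is_machine != machine_is_a:
                    fooled += 1
            return fooled / trials  # believability in [0, 1], nothing more

        score = run_turing_test(
            machine_reply=lambda q: "Fine, thanks.",
            human_reply=lambda q: "Pretty good!",
            judge=lambda a, b: random.random() < 0.5,  # a coin-flip judge
        )
        print("judges fooled:", score)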

    “How much processing power will we really need to simulate the human brain at the neuron level? At the molecular level? As a mind?” The jury is still out on that, but as Kurzweil gets into in his book “How to Create a Mind,” the answer may very well be less than you think once we understand the human brain better. There may be no point in modeling it down to the neuron or molecular level in order to reach the end goal. If a time traveler came back from the future and told us exactly what those “million lines of code” are that you reference above, we might even be able to do it on today’s supercomputers. I feel like software lags a long way behind hardware. Your desktop of today is probably 500 to 1000 times more powerful than the one you had 10 years ago, but Windows seems to me like it’s only 5 to 10 times faster/better/more functional.
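
    Both figures in that paragraph are easy to sanity-check with back-of-envelope arithmetic (the bytes-per-line constant below is an assumption, not data): the article’s ~25 MB brain-relevant genome estimate at a few dozen bytes per line lands near a million lines, and 500 to 1000 times in 10 years implies a doubling period of roughly a year:

        import math

        BRAIN_GENOME_BYTES = 25 * 10 ** 6  # the ~25 MB estimate quoted above
        BYTES_PER_LINE = 25                # assumed average bytes per line

        print("implied lines of code:", BRAIN_GENOME_BYTES // BYTES_PER_LINE)

        # What doubling period does "500 to 1000 times in 10 years" imply?
        for growth in (500, 1000):
            months = 10 * 12 / math.log2(growth)
            print(growth, "x in 10 years -> doubling every",
                  round(months, 1), "months")
        # ~13.4 and ~12.0 months, faster than the classic 18-24 month
        # Moore's-law cadence, so the 500-1000x claim is on the optimistic side.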

    “Will we develop artificial intelligence by creating an artificial brain and teaching it to be intelligent?” If you had an AI that truly achieved “human-level intelligence,” wouldn’t it be able to learn and teach itself from a starting point of almost no knowledge? Isn’t that a requirement for developing human-level AI in the first place?

    “Will we develop artificial intelligence by creating simple learning machines and teaching them to be smarter?” I think they will teach themselves to be smarter.

    “Will we develop artificial intelligence at all?” I think Kurzweil is right that it’s a foregone conclusion; it’s just a matter of when. As a guy who has been proven right over and over again when making long-term predictions about the state of technology, I think he will probably be right on the money with his prediction of the late 2020s or early 2030s for this.

    Go back and read “The Age of Intelligent Machines” and then “The Age of Spiritual Machines” and see how shocking it is just how much he got right while many other experts were saying “that’s impossible, it will never happen.”

    One thing is for sure – the future is a scary place with all these intelligent machines.
