Video of Kurzweil’s Latest Talk at Google

It’s appropriate that a man who often talks about the democratizing effects of technology can be seen for free on the Internet. Ray Kurzweil visited the Google campus in Mountain View in July as part of the Authors at Google series. His hour-long presentation is available to watch after the break. For those of you who have never had a chance to see Kurzweil speak about the Singularity and what it will mean for humanity, this is a great opportunity to see what all the fuss is about.

How do I summarize Kurzweil’s thesis? Predictable exponential growth. While humans tend to think of their world on a linear scale (where’s that baseball going to be in two seconds? …better duck!), information technology develops at an exponential pace. This leads us to misunderstand how large trends develop: the Cold War seemed like it would go on forever, the popularity of the Internet seemed to come out of nowhere, etc. Kurzweil predicted these developments, and many others, and wants to talk to you about the future. It’s coming a lot sooner than you think: we could have machines with human-level intelligence by 2029.
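To make the linear-versus-exponential intuition concrete, here’s a quick sketch (the numbers are purely illustrative, not Kurzweil’s data) of how far apart the two forecasts drift after just ten doubling periods:

```python
# Compare a linear and an exponential projection of the same quantity.
# Under a doubling trend, a linear forecast calibrated to the first
# period's gain falls hopelessly behind within a handful of periods.

def project_exponential(start, periods, growth=2.0):
    """Multiply by `growth` each period (doubling by default)."""
    return start * growth ** periods

def project_linear(start, periods, step):
    """Add a fixed `step` each period."""
    return start + step * periods

start = 1.0
step = project_exponential(start, 1) - start   # gain over the first period
linear = project_linear(start, 10, step)       # 1 + 1 * 10 = 11
exponential = project_exponential(start, 10)   # 1 * 2**10 = 1024

print(linear, exponential)
```

After ten periods the linear projection predicts 11 units while the exponential trend has reached 1024, which is roughly the gap between our intuition and the trend lines Kurzweil plots.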

For those of you who have watched Transcendent Man or read The Singularity Is Near, you may want to skip the first half of the Google presentation; there’s not a lot there that is new. Kurzweil’s latest book, The Web Within Us: When Minds and Machines Become One, is mentioned, but only in passing.

If I could offer a highlight reel:

  • Skip ahead to around 29:22, where Kurzweil demonstrates his handheld mobile reader, part of knfb Reading Technology. With just a single picture, the tiny device can read a page of text in any of 16 languages. Very cool.
  • 33:03 – Kurzweil addresses complaints that his exponential graphs omit data points that would offset his proposed exponential growth curve. Here we see a wide collection of points added to his graph without a huge variance. He goes on to explain how individual paradigms may end, but the larger trends continue steadily and support his claims.
  • Around 35:00 – Many different fields are becoming information technologies. Medicine (with DNA), art, literature, and even physical objects may one day be transferred through email.
  • 44:00 – A discussion on nanotechnology and respirocytes.
  • Artificial intelligence is a big theme when discussing the Singularity. Kurzweil explores the strengths and weaknesses of the human brain starting around 46:50. Humans, he says, are very good at pattern recognition and hierarchical structure. In the next 20 years, scans of the neocortex and cerebellum will enhance our understanding of how we accomplish these feats.
  • There’s a great translation demonstration at 54:00 that leads directly into the presentation summary at 54:35.

All of the presentation slides are available as a download by following this link.

While many of the concepts Ray Kurzweil discussed at Google have been repeated in other settings, this presentation does a great job of summarizing them. As always, I’m impressed by his track record, but not completely willing to accept his future predictions. That’s a sentiment shared in some of the recent and upcoming movies about the Singularity. Maybe it’s just our linear brains rejecting the concept of exponential growth? I’d like to hear more about how you view Kurzweil’s futurist thoughts, so make sure to add a comment below.

Discussion — 12 Responses

  • Simon Dufour September 10, 2009 on 7:10 pm

    I recently started to think about that. I always felt like technology was accelerating. While some of Kurzweil’s graphs are pretty far-fetched, some actually reflect reality.

    Personally, I really adhere to Kurzweil’s line of thinking. We should not place too many bounds on our thinking right now, because they’ll act just like blinders on a horse.

    The basic idea is that at some point, we’ll be able to eliminate suffering completely. People could be living in simulated reality while their bodies are maintained all the time to make them live almost forever. Suffering would end and everybody would live happily ever after. Isn’t it the absolute desire of mankind to live happily eternally? Having only good days, having exactly what you want ALL the time, freely.

    Sure, there is probably some evil that could come out of it, but I have trouble seeing who would perpetuate evil if they could do anything freely and without any consequences in a virtual reality.

  • Elliot Temple September 11, 2009 on 12:10 am

    Isn’t his basic thesis historicist?

    Has he ever addressed Karl Popper’s arguments in _The Poverty of Historicism_?

    • than Elliot Temple September 11, 2009 on 4:15 am

      I think most technology people don’t care about philosophy. Why should we? Want to know the nature of something? Go look; don’t guess.

      To me, it’s a rather far-reaching and optimistic engineering road map. Organizations use road maps all the time, and no one necessarily sees them as morally bankrupt, since they work when used correctly.

  • popay September 11, 2009 on 11:02 am

    I think Kurzweil is missing an important factor. All growth is based on a driving force; in the case of science and technology, the driving force is the capacity of our brains to invent new concepts and principles. And that has a limit.

    When Kurzweil makes his argument about the exponential development of science and technology, he essentially plots two types of graphs: long-term graphs spanning centuries or millennia, and short-term graphs spanning the last 5-6 decades. His long-term graphs depict the development of new principles in science which allow technologies that radically change the world; e.g. quantum mechanics paved the way for today’s silicon industry and ultimately allowed the coming of the information age. His short-term graphs, on the other hand, simply plot the improvement of a technology, e.g. Moore’s law about the number of transistors packed on a unit of chip area.

    If we look at the last 50-60 years, there haven’t been many real breakthroughs in science; progress has mostly been just improvement of technology. Most of our current technology is actually based on scientific principles developed during the 18th, 19th, and first half of the 20th century, and without scientific breakthroughs, technology improvement always hits a wall; e.g. human space flight is virtually at a standstill due to a lack of new propulsion principles, even though the rest of space technology has improved dramatically. The same applies to AI: since the 1970s not much has happened in terms of new principles. Modern AI achievements such as speech recognition and OCR are actually just technology improvements based on information theory developed much earlier. The reason for this lack of development might simply be the inherent limit of our brains to grasp such advanced concepts; e.g. our brains have been very well suited by evolution to deal with 3D space, but thinking in higher-dimensional spaces is much more problematic.

    The bottom line is that the human brain may simply not be smart enough to develop AI. We may need to enhance our brains first, but to do that we would first need to understand how the brain works (that is, how AI works), which presents a loop. The only way to break the loop would then be through evolution, and that is known to take thousands if not millions of years. Without AI there will be no Singularity.

    Kurzweil always points out that a computer with the processing power of the human brain will be developed in the next couple of decades, but will the intelligent software to run on that computer be developed any time soon? Even today’s desktop computers pack significant computational power, and still we cannot robustly solve problems such as stereo vision and speech recognition, which occupy only small parts of the brain, even though scientists have been working on them for decades. So it seems we may be heading for a technological and scientific stagnation.

    Finally, I would like to pose a question to Ray Kurzweil, if he happens to read this comment: Is he himself working on the development of strong AI? Because if I believed in the possibility, and actually predicted its emergence in just a couple of decades, and had his resources, I would be working day and night on making it happen.

  • Chris Riley September 11, 2009 on 3:07 pm

    The term AI is extremely overused and means nothing. If you are a technologist, you should steer clear of this term, as it’s now only a marketing term.

    For example, in a space that Ray helped create, OCR, people love to throw around the claim that OCR uses AI. But it’s not intelligent; it’s actually a pretty basic analysis of images. Techniques such as simulated annealing and genetic algorithms possess far more intelligence.

    It’s really hard to talk about the future when your point of reference is a term as vague as “AI”.

    What SIAI, the Singularity Institute, is dealing with is a far more advanced set of potential technologies that truly approximate intelligence, versus a really good way to solve a basic problem.
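For readers unfamiliar with the techniques the comment above mentions, here is a minimal simulated annealing sketch in Python. The toy cost function, cooling schedule, and parameters are illustrative choices, not from any particular library: the algorithm escapes local bumps by occasionally accepting worse moves while the "temperature" is high.

```python
import math
import random

def simulated_annealing(cost, start, neighbor, steps=20000, t0=2.0):
    """Minimize `cost`, accepting uphill moves with probability
    exp(-delta / T); temperature T cools linearly toward zero."""
    current = best = start
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9       # cooling schedule
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; sometimes accept worse moves.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
        if cost(current) < cost(best):
            best = current                        # remember the best seen
    return best

random.seed(0)
# Toy objective: a parabola with a sinusoidal ripple, minimum near x = 2.
bumpy = lambda x: (x - 2) ** 2 + 0.3 * math.sin(8 * x)
x_min = simulated_annealing(bumpy, start=10.0,
                            neighbor=lambda x: x + random.uniform(-0.5, 0.5))
print(x_min)
```

Starting far from the minimum, the random walk plus probabilistic acceptance settles into the basin near x = 2 despite the local ripples that would trap a pure greedy descent.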

  • gideon September 11, 2009 on 4:50 pm

    AI will come about even if we can’t understand how to code it directly ourselves, because if there is some combination of software code that can become an AI, it will be found randomly, just like in any other evolutionary process. We all want it right now, and that may not happen, which frustrates people. But if you look at the big picture, given hundreds or thousands of years, I don’t see why that magical combination of code wouldn’t be generated eventually, given all the processors that continue to accumulate on the planet’s surface (brute-force methods where every combination that could be run will be tried; similar experiments have been done with smaller pieces of code, and when the technology gets even more dense, whole programs can be tried this way, and it’ll just be a matter of weeding out the failed trials). Maybe we just have to try everything wrong, and the monkeys will bang out Shakespeare’s brain eventually. In the meantime we might do well enough with automation, even if it won’t do our thinking for us.
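The mutation-plus-selection search described above can be sketched in miniature with the classic “weasel” experiment: random mutation guided by selection converges on a target string vastly faster than the pure monkeys-at-typewriters version. The target string, alphabet, and mutation rate below are arbitrary illustrations:

```python
import random

TARGET = "methinks it is like a weasel"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    """Number of characters already matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    """Copy the parent, replacing each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

random.seed(42)
current = "".join(random.choice(ALPHABET) for _ in TARGET)  # random start
generations = 0
while current != TARGET:
    # Selection: keep the fittest of 100 mutated offspring (or the parent).
    current = max([mutate(current) for _ in range(100)] + [current],
                  key=fitness)
    generations += 1
print(generations)
```

Selection is doing the heavy lifting here: without the `max(..., key=fitness)` step, a purely random search over 28 characters would take astronomically many trials, which is the gap between “eventually, given enough processors” and “any time soon.”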
