Exclusive: Ray Kurzweil Interview – The Future of Man And Machine


Singularity Hub's Keith Kleiner Interviews Ray Kurzweil

I am pleased to announce the release of Singularity Hub's recent interview with Ray Kurzweil below.  The interview focused at first on Kurzweil's new book, "How to Create a Mind," to be released November 13, but in the second half we moved on to broader topics, such as his more general thoughts on the future of man and machine and his personal goals for his work.  The interview features a closeup, raw style: driving in a car at night with the lights of San Francisco behind us, en route to a presentation Kurzweil was to give at Singularity University.  I hope you feel as I do that the interview succeeds in moving away from the boring old studio setting.  Singularity Hub members are given a special treat: we have released a members-only version of the interview that features extra footage.  If you are not a Singularity Hub member already, please consider joining now.

Computers are getting smarter and more capable every day, while at the same time researchers are making progress in unraveling the mysteries of the human brain.  Where is all of this taking us? Will we soon create artificial intelligences that meet or even exceed human intelligence?  In the interview Kurzweil not only reiterates his belief that machines will match human ability by 2029, but reveals that he increasingly feels this prediction is conservative!

Personally, I felt the second half of the interview was more interesting.  This is when we focused less on the book and more on general thoughts and discussion. Are we the same person we were six months ago, given that most of the matter inside our bodies has been replaced during that period?  Who are we when parts of our brains are in our heads, but other parts are in the cloud?  As discussed in the interview, questions such as these used to be philosophical exercises pursued for curiosity's sake, but soon they may become real-life dilemmas.  As technology continues to advance, these questions will become more pressing, more relevant to us all.  Stay tuned for future interviews with Kurzweil and other great minds as Singularity Hub continues its quest to explore the future of humankind.

And now, without further delay...the video:


Discussion — 11 Responses

  • dobermanmacleod October 30, 2012 on 2:35 pm

    Emotional integration into AI is nonsense, but mission orientation (will) is deadly serious. To better understand AI/human integration, consider the Buddhist concept of mind. What happens to your fist when you open your hand? Who are the characters inside your dreams? When I look into your eyes I see myself. Those who are most mentally nimble will be the pioneers of AI/human integration. I am a transhumanist. “Deep stuff.”

    • Michael Davidson dobermanmacleod October 31, 2012 on 12:06 am

      I disagree with you, as does Kurzweil. Emotions will be integral to AGI for the same reason they are integral to general intelligence in animals. I’m not saying the emotions of an AGI will be the same as a human’s, however, any more than its intelligence will be.

      • dobermanmacleod Michael Davidson October 31, 2012 on 2:57 am

        You are right, Kurzweil and you agree that “emotions will be integral to AGI” in terms of “general intelligence.” In other words, you are both using anthropocentric evaluations of intelligence similar to the way a human is evaluated. A human can be an idiot savant, or have a low EQ (i.e. Emotional Quotient), and be considered less than a genius. What I am saying (which is different) is that emotions are not required functionally, in terms of IQ (i.e. Intellectual Quotient), or for the ability to fulfill a mission (i.e. not to pass the Turing test, but instead to accomplish an intellectual task like passing a standardized IQ test or plotting the best strategy to win a game).

        In fact, Kurzweil does briefly touch on this dichotomy (i.e. IQ vs. “general intelligence”) in that interview. By the way, there is a new program called “Mind’s Eye” ( http://www.forbes.com/sites/reuvencohen/2012/10/29/u-s-army-sponsored-arti%EF%AC%81cial-intelligence-surveillance-system-attempts-to-predict-the-future/ ) that is exactly what I am talking about. It can observe, and then “predict what a person is likely to do in the future.” It is fair to say that the program has not been programmed with emotions, but nevertheless, it is “smart.”

        I will repeat: emotion integration in AGI is nonsense, is simply anthropocentric baggage, and is not necessary for mission orientation in strong AI (unless that mission is to chummy up to humans who want a buddy they can relate to and pretend is their “friend”). I have three Dobermans, and it is easy to attribute human traits to them – but you ought never to make that mistake, because it can be dangerous and counterproductive, like any schizophrenic notion.

  • jmacdonald October 31, 2012 on 12:19 am

    I’m more with the thinking of ‘dobermanmacleod’, the first commentator on this thread. My thoughts on the emotional AI part are here: http://thejonathanmacdonald.blogspot.co.uk/2011/07/putting-emotion-into-artificial_18.html

    Sadly I fear left-brain hard science is unsuited in every way to tackle right-brain soft-science realities. Oh the chariots.

  • starnois October 31, 2012 on 9:48 am

    Good interview, but why are you guys in a car?

  • Singularity man October 31, 2012 on 1:29 pm

    Am I imagining it, or are they both sitting in a moving car without wearing seat belts?

    How can someone who wants to live forever (and makes so much effort to achieve it) be so irresponsible with his life on the road? Are there no car accidents in the USA?

    What does the law in the USA say about sitting in a car without a seat belt? Is it legal?

    I’m really surprised.

  • Andrew Atkin October 31, 2012 on 9:59 pm

    Okay, so we all directly hook up our brains to cyberspace, so as to increase our mental space and become super brainy smart…and solve all the tough problems?

    The problem is, that’s not how intelligence works. You have to go from information to *workable* information. You have to break down what you learn into a format that empowers your intellectual functioning.

    Hooking people’s brains up to the internet may do the opposite of what Ray Kurzweil has suggested. It may induce people to spend all their time swallowing information rather than processing it; making them, maybe, general-knowledge wizards who can’t solve problems to save themselves.

    Real education happens when you both work with and think about the knowledge you obtain.

    • dobermanmacleod Andrew Atkin October 31, 2012 on 10:43 pm

      Hooking our brains into cyberspace can help by providing a repository, a social networking platform, and a social collaboration platform. In other words, education defined as “swallowing and processing information” is an individualistic perspective (referring to your post), whereas if we “hook our brains to cyberspace” then we are joining a community. BTW, in that interview Kurzweil talks about the hierarchy of information processing (the neural arrangement) of the frontal lobe. That “single algorithm” can be expressed individually, in a community setting, and in a software program. Information and conceptual integration is the goal, and “hooking our brains into cyberspace” is simply a tool to accomplish that better. The process of finding that needle in the haystack is both analytical and synthetic.

      • Andrew Atkin dobermanmacleod November 1, 2012 on 3:05 am

        I am not saying it does not have value – I’m making a point about its limits and possible [developmental] costs. It all depends on what you are trying to achieve.

  • Stefano Vaj November 2, 2012 on 10:56 am

    In the meantime, those interested in the state of the art of the “theoretical” angle, and what it allows us to say *as of now* about AI, may want to read the short essay “Artificious Intelligences”, which is online at:

  • vovietanh September 24, 2013 on 8:15 pm

    I don’t like the way of thinking that only influential people are correct about answers to philosophical questions like “Are we the same person we were 6 months ago, given […] that period? Who are we when […] in the cloud?” – or that we should ask them at all. They are meant to be answered yourself. Just use your own brain and everything will be clear. Use common sense. Of course we are changing second by second and are not the same even one second later. It depends on one’s subjective choice of a bottom line to decide whether a human is still the “same” any more. For example, 60% is my bottom line for John to remain “the same” after 6 months. If 60% of the matter inside his body remains the same, he passes. If not, I don’t call him John anymore. The same goes for when you are wired into a cloud: you become predictable; your degree of randomness/freedom drastically reduces. Now your thinking is influenced by others’, and a hacker can access or even manipulate your brain. Before that, randomness was part of “you”; now it’s not; you’re not fully the past you anymore. Crystal clear.
    The trickier question would be what if you are duplicated (you know, 3D-printed after getting a scan). You and that copy will start out the same way twin children start out inside the womb – “you two” are the same for now but will eventually diverge. (This could make a Hollywood movie.)