Leading Neuroscientist Says Kurzweil Singularity Prediction A “Bunch Of Hot Air”


Neuroscientist Miguel Nicolelis, seen here on the Daily Show, rejects predictions of a technological Singularity. [Source: Comedy Central]

Duke neuroscientist Miguel Nicolelis made it clear at this year’s American Association for the Advancement of Science meeting: he is not a Singularitarian. Addressing fellow scientists, he dismissed the Singularity as “a bunch of hot air” and went on to declare that “the brain is not computable and no engineering can reproduce it.”

Ray Kurzweil, no doubt, couldn’t disagree more. You know, the guy whose last book was entitled “How To Create a Mind”?

But Nicolelis isn’t backing down from critics. A very lively Twitter discussion took place in the days after he made the comments. “How in heavens do you simulate something you have no algorithm for?” went one tweet. “…we would not be talking about consciousness. Our brain is ‘copy-write’ protected by its own evolutionary history!” went another. And the most damning jab in the direction of Singularitarians: “Fallacy is what people are selling: that human nature can be reduced to [something] that [a] computer algorithm can run! This is a new church!”

The comments were in Portuguese, so I had to run them through Google Translate, but the translations seem accurate given his argument at the meeting.

Describing his new Pattern Recognition Theory of Mind (PRTM) during a Singularity Hub drive-along interview last October, Ray Kurzweil voiced an opinion that couldn’t be more different from Nicolelis’. “We now have enough evidence to support a particular theory, …a uniform theory about how the neocortex works. And it’s basically comprised of 300 million pattern recognizers. Most important they can wire themselves in hierarchies to other pattern recognizers. The world is inherently hierarchical and the neocortex allows us to understand it in that hierarchical fashion.”

Kurzweil’s had his turn on the Comedy Central news network, seen here on the Colbert Report. [Source: Comedy Central]

That is to say, Kurzweil thinks there is a certain simplicity to the structure of the neocortex, the part of the brain where the most complex human mental activities take place, that lends itself to being reproduced – by 2029, he famously predicts.
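
To make the hierarchy idea concrete, here is a minimal sketch of stacked pattern recognizers, where each layer’s detections become the next layer’s inputs. It is purely illustrative (the class, names, and patterns are invented for this article), not Kurzweil’s actual model:

    # Minimal sketch of hierarchical pattern recognition (illustrative only;
    # not Kurzweil's actual PRTM implementation -- names and patterns are made up).

    class PatternRecognizer:
        def __init__(self, name, pattern):
            self.name = name
            self.pattern = pattern  # the lower-level features this unit expects

        def fires(self, inputs):
            # Fire when every expected feature is present in the input set.
            return all(p in inputs for p in self.pattern)

    def recognize(layer, inputs):
        # A layer's output is the set of names of its recognizers that fired.
        return {r.name for r in layer if r.fires(inputs)}

    # Layer 0 detects strokes; layer 1 wires those detections into a letter.
    strokes = [PatternRecognizer("vertical", {"|"}), PatternRecognizer("crossbar", {"-"})]
    letters = [PatternRecognizer("A", {"vertical", "crossbar"})]

    level0 = recognize(strokes, {"|", "-"})  # {'vertical', 'crossbar'}
    level1 = recognize(letters, level0)      # {'A'}
    print(level0, level1)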

But Nicolelis isn’t buying it. He thinks the brain/neocortex is much more than a hierarchy of pattern recognizers, and it’s that complexity that futurists like Kurzweil underestimate. “You can’t predict whether the stock market will go up or down because you can’t compute it,” he said at the AAAS conference, MIT Technology Review reports. “You could have all the computer chips ever in the world and you won’t create a consciousness.”

Even if he thinks human thought won’t one day be recreated with silicon, Nicolelis certainly believes it will be augmented by it. A leading researcher in brain-computer interface technologies, he presented such an apparatus at the AAAS meeting: a device that enabled rats to detect infrared light. Infrared sensors mounted on the rats’ heads were connected to stimulating electrodes implanted in the somatosensory cortex, the area of the brain that processes touch sensations, so that infrared signals were translated into patterns of touch-like stimulation. In this way the researchers created a “sensory neuroprosthesis.” He hopes that these sorts of devices may one day “serve to expand natural perceptual capabilities in mammals.”

Of course, Nicolelis is not the first to suggest Kurzweil and his Singularitarian followers could use more hard science and less wishful thinking in their predictions. In fact, New York University psychology professor Gary Marcus wrote a scathing condemnation of Kurzweil’s “How To Create A Mind” in the New Yorker: “Kurzweil’s pointers to neuroanatomy serve more as razzle-dazzle than real evidence for his theory” is the take-home message.

As he always does, Kurzweil is sure to fire back at his critics. And as the newly appointed Director of Engineering at Google, where his explicit mission is to create an artificial intelligence that will “make all of us smarter,” he certainly has the resources to put his money where his mouth is: the most sophisticated AI assistant the world has ever seen. It might not be the brain that many of his fans are waiting for, but it could very well help the cause. Because it’s already 2013, and 2029 will be here before you know it.

Discussion — 165 Responses

  • rudyilis March 10, 2013 on 12:33 pm

    “You can’t predict whether the stock market will go up or down because you can’t compute it.”

    But we can simulate the stock market with virtual buyers and sellers. The simulation won’t be able to predict what will happen on the NYSE, but it will exhibit the same behavior as a stock market. The point of building an artificial consciousness isn’t to predict what a given mind will do, but to create a new mind.
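
    (To make that concrete, here is a minimal, purely illustrative agent-based market in Python; every parameter is made up. It behaves like a market, producing a noisy price series, without predicting any real exchange:)

      import random

      # Toy agent-based market: each trader randomly buys (+1) or sells (-1),
      # and net demand nudges the price. All parameters are made up.
      random.seed(42)
      price, traders = 100.0, 1000

      for day in range(10):
          net_demand = sum(random.choice([1, -1]) for _ in range(traders))
          price *= 1 + 0.0005 * net_demand  # price drifts with net demand
          print(f"day {day}: price = {price:.2f}")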

    “Fallacy is what people are selling: that human nature can be reduced to [something] that [a] computer algorithm can run! This is a new church!”

What Kurzweil is selling is complete materialism. If the brain is simply atoms interacting, and mind emerges from those interactions as an emergent property, then why can’t we create a mind by simulating those interactions? The brain may be so complicated that Kurzweil’s timeline is too optimistic, but theoretically if we recreate a brain it will be conscious. Unless consciousness is produced by something we don’t understand, like unknown laws of physics or a dualistic soul, then human nature can be reduced to atoms interacting, and computer algorithms can simulate atoms.

I haven’t read all of Nicolelis’s comments, but this article makes it sound like he’s saying biology is special and mind is magical. If he’s religious or a new mysterian, then I understand his viewpoint. But within the philosophy of materialism, everything Kurzweil says is doable once the engineering obstacles are overcome.

    • Elemee rudyilis March 10, 2013 on 2:11 pm

      Well said, rudyilis. This argument reduces to the ‘specialness’ of The Biology, which asserts that Mind has some dimension (‘Spirit’?) that has some essential relationship with biology that cannot exist with non-biological substrates. Dismissing Kurzweil’s point of view in this way does seem to equate with being a ‘new mysterian’. But the argument is effectively moot, because in reality biology is merging with the technology it is creating for itself. What results will still contain that which has been evolving all along, and the human mind will be found within it the same way we find earlier stages of our development recapitulated in our own neuroanatomic and behavioral makeup.

    • Cyantific rudyilis March 12, 2013 on 2:24 am

      rudyilis pointed out exactly what I thought when I read the quote about the stock market. The stock market can’t be predicted, because it is a very specific evolution of events. If you want to compare stock markets to life, then predicting the events happening next week in the stock market would be comparable to predicting a child’s future job. Creating artificial life would be comparable to creating a completely new stock market. Nicolelis’ logic is completely invalid.

    • Denis Krasnov rudyilis March 12, 2013 on 10:29 pm

In your analogy consciousness is money – you can create a simulation of the stock market, but you won’t be able to buy any bread with it!

      • rudyilis Denis Krasnov March 13, 2013 on 6:11 am

        I could buy bread with the simulated money if people decided to accept it, like bitcoins. The money traded on the NYSE is as imaginary as rupees in Zelda or galactic credits in Star Wars. The only difference is people decide to believe the abstract concept being traded on Wall Street can be traded for physical matter.

    • John Finnan rudyilis March 18, 2013 on 7:11 am

      Nicolelis reminds me of Kasparov, who was absolutely sure that no computer could ever beat him at chess. Until it did. All claims that it played like a god, or the engineers cheated, or that the machine showed true creativity or some such can be thrown out the window. Chess is a searchable game, and with enough computing power and the right deep search algorithms, even a grand master can be beaten thoroughly.
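
      (As an aside, the “deep search” behind chess engines is conceptually simple. Here is a minimal negamax sketch, a generic game-tree search rather than Deep Blue’s actual code, shown on a toy take-1-to-3-stones game:)

        # Minimal negamax game-tree search (a generic sketch, not Deep Blue's code).
        def negamax(state, depth, moves, apply_move, evaluate):
            if depth == 0 or not moves(state):
                return evaluate(state), None
            best_score, best_move = float("-inf"), None
            for m in moves(state):
                score, _ = negamax(apply_move(state, m), depth - 1,
                                   moves, apply_move, evaluate)
                score = -score  # the opponent's best outcome is our worst
                if score > best_score:
                    best_score, best_move = score, m
            return best_score, best_move

        # Toy game: take 1-3 stones from a pile; taking the last stone wins.
        moves = lambda n: [m for m in (1, 2, 3) if m <= n]
        apply_move = lambda n, m: n - m
        evaluate = lambda n: -1 if n == 0 else 0  # no stones left: player to move lost
        print(negamax(7, 8, moves, apply_move, evaluate))  # (1, 3): take 3, leave 4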

In years to come, the idea that the human brain is not capable of being modelled by computer will be as outmoded an idea as the human soul. A by-product of an era when humans thought there was something rather special about being human.

      • DigitalGalaxy John Finnan March 18, 2013 on 4:26 pm

        Oh, I don’t think so. I’m not saying it is impossible for a human grand master to lose to a computer, but the Kasparov game had so much controversy surrounding it that I’m not sure it can be counted as a decisive win for computing.

There were real questions whether or not the engineers cheated. For one, they refused to give Kasparov Deep Blue’s computer logs, even after the fact, so his programmer could examine them after he accused them of cheating. For two, the computer apparently played very differently in game 3 than it did in games 1 and 2, when Kasparov was winning. For three, Deep Blue was hastily disassembled and removed from the premises after the game, which is a confusing move to say the least. Deep Blue had supposedly won the greatest match of all time, and instead of being taken on tour or placed in a museum, it was scrapped for parts. Kasparov maintains that was so it could not be examined.

        Kasparov claims that a chess master was behind the controls, and manipulated the computer at key moments in the game. None of Kasparov’s team was allowed to actually see Deep Blue in operation, which adds fuel to the fire.

        I don’t know if the Deep Blue team cheated or not. What I do know is that refusing to hand over the logs to Kasparov’s programmer was a grave mistake, in that it allowed Kasparov’s claims of cheating to gain merit. If they had nothing to hide, why not simply turn over the logs?

        But, back to the important point, I do not believe the idea of a human soul is outmoded. Feelings, qualia, emotions, and consciousness do not have any place in a computational system such as the brain.

        I certainly believe the brain can be digitized. I do not believe the physical brain is all that accounts for our mind.

        • Kyle McHattie DigitalGalaxy March 20, 2013 on 12:38 pm

I couldn’t agree more. The Deep Blue team obviously cheated. Kasparov asked for a rematch but would only do it if both sides could be audited while the matches were played. IBM refused. That alone is enough for me to be convinced that they cheated. If Deep Blue “learned” how to beat Kasparov, why not put an end to any doubt? Because they know it wasn’t Deep Blue that beat him. It was probably Bobby Fischer.

        • dobermanmacleod DigitalGalaxy March 20, 2013 on 9:53 pm

Interesting background on the Deep Blue win – I always thought K was a sore loser. Although, I am still unclear on how Deep Blue (i.e., the programming team) could have cheated, since the only way they could get better moves would be to have a world-class human chess player kibitz.

Given your (by all accounts) brilliant mind, I am surprised that you elevate feelings, qualia, and emotions to the level of a soul, since that is just the lower brain interfacing with the neocortex. Furthermore, consciousness is just the tip of the iceberg, with the bulk (the subconscious) under the waterline. The neocortex is just a biological, self-programming, highly recursive pattern-recognition hierarchy (per the PRTM, which has already borne fruit in AI development). In my opinion, the history of the psychology of AI shows clearly that mankind thinks romantically about its mind, minimizing any task AI masters – like in chess, where after Deep Blue’s win the mainstream view was that chess was “just” an exercise in math, whereas before the win everyone (at my chess club at least) thought that chess was a brilliant line separating human and AI. Even Watson’s win in Jeopardy against two human champions is minimized as them being “steamrolled” by a computer. Already, chat bots have won a Turing Test competition ( http://mashable.com/2012/06/29/turing100-winner/ ), although Kurzweil concedes they aren’t ready for prime time, and he still hasn’t revised his date for that event to occur.

          • Blair Schirmer dobermanmacleod December 3, 2013 on 3:54 pm

No offense, but what the guys at your chess club thought doesn’t mean much. What computer chess showed us is that well-written software can beat human beings who play the game rather differently from the software, and who play the game in a way that may or may not involve a special branch of consciousness the computer cannot (yet) emulate.

          • Blair Schirmer dobermanmacleod February 28, 2014 on 2:17 pm

            “Already, chat bots have won a Turing Test competition ( http://mashable.com/2012/06/29/turing100-winner/ ),…”

            Sorry, but that’s just nonsense.

            The best chatbots can’t even hold up their end of any slightly unorthodox exchange lasting three whole turns (six sentences, total).

            We’re still that very, very big breakthrough away from anything other than our current and very, very soft AI.

      • Blair Schirmer John Finnan December 3, 2013 on 3:48 pm

        While generally correct, the one thing your post misses is that while computer software can now trounce the best Grandmasters at chess, I have yet to hear a sound argument that the software is doing any kind of real ‘thinking’. It’s merely executing a number of well-written algorithms towards a very specific end.

I’m not persuaded yet that those sorts of algorithms, in sum, and even vastly expanded, will ever constitute thinking in the way we consider humans to think (and feel, and dream…). I do think it’s all but inevitable that the behavior that emerges from complex algorithms will appear as a kind of consciousness, and I also believe we won’t be able to easily draw a line that distinguishes ‘our’ consciousness from ‘theirs’.

        The one thing I do know is, it’ll be absolutely fascinating. It’s odd to me, though, that as late as 2013 the best AI can’t even begin to simulate an intelligent conversation.

        • Facebook - prodromos.regalides Blair Schirmer December 27, 2013 on 3:49 pm

          The brain is completely computable, and anyone who thinks otherwise is hindered from discerning the obvious by “scientific” ego and millennia of established beliefs.
          Whether there’s a soul or a spirit is another topic entirely. It may be, or may not be; but so is the unicorn, Bigfoot, the Chinese dragon, and a teapot on the dark side of the moon.
          Someone would say: yes, but the computer will never have consciousness or feelings. The problem is we don’t know what consciousness or a feeling is for ourselves, yet we light-heartedly mention it when it seems fit.
          Someone would say: but the computer cannot carry out a simple conversation. Neither can the human brain, at least not before a huge input of data during the first 5-10 years of life and dynamic interaction with that data and the surrounding environment.

          • Facebook - richard.r.tryon Facebook - prodromos.regalides December 27, 2013 on 5:06 pm

            You confirm my point! Since the AI computer can’t prove God (neither could my father in 40 years of trying, or me in forty more), it’s obvious that when the world is ruled by AI with some human-brain-attached awareness from someone, there will be AI robots of faith and those without. This makes no difference if the AI is controlling.
            But if the human associate can still veto anything, the war between AI robots with or without faith will follow. Hopefully, the ones whose AI can recognize that the conscious part from man has a point about origin will refute the ones that accepted a chaos theory making all that we know and more in a mere 15 billion years without any external and eternal influence.

        • alphasun Blair Schirmer March 15, 2014 on 5:09 pm

          See the novel “Whispering Crates”.

          • Facebook - richard.r.tryon alphasun March 16, 2014 on 7:42 am

            Yes, rocks and crates have experiences that may be well known to them! How nice!

            Outer space has experiences too, but even with no risk in going, Bruce Thomson doesn’t seem to want a copy of his brain content to go along, even if it were updated every hour for the rest of his physical body’s life supporting his aging brain.

            Does it mean that his consciousness is private?

    • ega rudyilis December 26, 2013 on 5:00 pm

      Why would a machine that simulates a brain be conscious? Look at the 3D games we have today, where a computer can simulate the motion of a human being. Nobody in their right mind would say it IS a human body because it looks like one.

      Same with a brain: we may be able to simulate how a brain thinks using advanced algorithms and a lot of computing power, but there will be no brain waves, no chemical reactions, no electrical signals between neurons…

      In the end, it will only be a simulation, a very complicated calculation.

      • palmytomo ega December 26, 2013 on 5:40 pm

        ega You’re miles, miles, miles behind the times… = )

        You’re only a heap of mechanical atoms and molecules and electricity too. You were mechanically manufactured by your mother and father using special meaty lab tools. So you also can’t possibly be ‘conscious’. You are just a physical meaty ‘thing’. = )

        Seriously though, this issue is already well resolved by people who’ve investigated it (e.g. watch ‘The Singularity is Near’ movie by Kurzweil and cronies).

        Here’s how we accept another thing’s ‘consciousness’ or not:
        (a) We’d all agree that consciousness isn’t ‘yes or no’, but varies with context, and is on a continuum from almost catatonic or asleep to fully alert genius.
        (b) It’s false to insist that the ‘commodity’ consciousness be religiously locked exclusively to humans and animals. If anything at all behaves so like a conscious animal or human, we find – to our surprise – that we *emotionally* accept its consciousness regardless of whether it’s biological. We ‘value’ it as somewhat equal to us for vital purposes. Note: This includes both ‘good’ behaviour of the thing, such as robots sacrificing themselves to rescue your drowning child, and ‘bad’ behaviour, such as nasty enemy army robots.
        (c) We’re not just talking about simulations in video games. We’re talking about AI that has significant sensor, cognitive, analysis and output abilities AND, these days, the ability to autonomously ‘evolve’ – it does that by neural net processing that makes it learn from experience. It is ‘dynamic’ and ‘unpredictable’ because of the immensity of the storm of data of its experience and processing – just like an animal or human.

        If it’s any consolation, I was mind-boggled for months recently coming to terms with this stuff, even despite some years of working in AI in Montreal in the late 1980s. And note well: It does NOT affect whether you enjoy eating an ice cream or not.

        Regards, Bruce Thomson in New Zealand.

  • Chris F March 10, 2013 on 1:51 pm

    How arrogant to state flatly that “the brain is not computable and no engineering can reproduce it”. That may turn out to be true, but we simply don’t yet have enough info to decide. But I think the evidence so far is weighted strongly on the side of the Singularitarians: if a brain is made of regular matter (as it appears to be), then it can in principle be computed. It’s a question of when, not if.

    • Kyle McHattie Chris F March 20, 2013 on 12:39 pm

      Yes. Arrogance is an understatement.

    • Steven Kaufman Chris F March 31, 2013 on 2:47 pm

      I don’t think that a blonde’s brain can be logically computable.

    • Steven Kaufman Chris F March 31, 2013 on 2:56 pm

      I speculate that speculated speculation is speculatively speculate.

    • Facebook - richard.r.tryon Chris F March 21, 2014 on 8:36 am

      A human brain’s functionality in terms of processing input logically is certainly easily duplicated in our modern high-speed computers, which have far more storage and memory capacity than even Einstein could command. That is not an issue!

      What is an issue is how to make a machine that can respond illogically, as a result of a chaotic range of mental, physical, and unrelated surroundings. The subtle meanings of touch, eye contact, and facial expression in human-to-human, body-to-body contact are not easily absorbed by a shiny plastic robotic body that doesn’t eat, drink, or become emotionally aroused by chemicals, smells, or other factors in combination with events, settings, and momentary urges that are irrelevant to a robotic body-and-mind combination.

      Granted, if all humanity is eliminated, it might set the stage for AI and a robotic means of enabling it to move its brain from one locale to another. Sending such into space in every direction may be an important way for the unemployed to be involved. If some want to abandon their human body and put their mind’s memory into some part of AI for the ride, it seems to me to be a form of suicide that is immoral to those who are essentially saying they do not believe that they have any obligation to a Creator that built the means of their coming into existence.

      God gave us free will and the capacity to self-destruct, although the soul is His forever and it always leaves the human body that enabled its activation upon its birth. Those that do not believe can and will ignore this. That is OK. May they enjoy, and even plan to send information back to Earth, as they witness the AI guide their light-powered spaceship dodging all matter that lies in their constant or changing trajectory. Their minds will be perpetually powered by the AI-controlled “life-like” support system. Who knows, two or more of them in the same vessel may even come to experience some sort of joy together, short of reproduction of or sharing of their DNA codes.

  • Justin Rens March 10, 2013 on 2:29 pm

    I’m guessing he believes machines can’t do language translations, drive cars, run large-scale logistics, etc. either. I think the biggest thing people like Nicolelis fail to understand is that we don’t have to copy nature to mimic it. Our planes don’t flap and our cars don’t gallop, yet we benefit by being able to fly like birds and outpace any other animal on this planet.

    Also, I’d be interested to know what he believes are the reasons preventing us from EVER being able to replicate a brain. Those trying, like the European Human Brain Project, at least enumerate the known limitations and the computational complexity involved. Nicolelis only seems to throw out church-like proclamations (ironic given his statement “This is a new church!”).

    • DigitalGalaxy Justin Rens March 10, 2013 on 2:47 pm

      It depends. If you are trying to create an AI, you don’t have to mimic nature. You can make a computer that doesn’t act like a brain. But, that’s not exactly what he is talking about; you can’t copy a human mind into a non-brain-like AI.

      He is talking about mind-downloading, which would take your brain waves and copy them directly into a computer simulation. In that case, you would have to mimic nature by mimicking neurons. You can’t take human brain waves out of a neural structure, put them in a computer-logic structure, and expect them to still work. It would be like taking an old Mac program and trying to run it on your PC; the hardware is incompatible. What he is talking about is mimicking neural structure with hardware, so once you put the brainwaves in the simulation, they don’t ever “see” the computer logic, they “see” the simulated neurons. The computer would have to act like a brain, mimicking nature in this case.
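
      (For a flavor of what “simulated neurons” means, here is a minimal leaky integrate-and-fire neuron, a standard textbook model rather than any real brain-emulation project’s code; whatever runs “on top” of such units only ever interacts with neuron-like behavior, never the underlying computer logic:)

        # Minimal leaky integrate-and-fire neuron (standard textbook model, not any
        # real brain-emulation project's code). Parameters are illustrative.
        def simulate_neuron(input_current, steps=50, dt=1.0, tau=10.0, threshold=1.0):
            v, spike_times = 0.0, []
            for t in range(steps):
                v += dt * (-v / tau + input_current)  # potential leaks and integrates
                if v >= threshold:                    # fire and reset, as a neuron does
                    spike_times.append(t)
                    v = 0.0
            return spike_times

        print(simulate_neuron(0.15))  # time steps at which the simulated neuron fires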

      He’s not trying to build a plane with flapping wings. He’s trying to build an artificial bird. Or, more accurately, the Matrix. :)

      • Matthew DigitalGalaxy March 11, 2013 on 4:35 pm

        Hmmm, emulation? We actually take old hardware and replicate it on completely new, alien substrates with 100% accuracy. I have 8-bit Nintendo, Super Nintendo, PlayStation 1 and 2, GameCube, and Wii on my Windows computer. At first there were glitches and lag, but at a certain point they run the old games flawlessly, with enhanced resolutions/textures/framerates/etc. even being possible.
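
        (The core of any such emulator is just a fetch-decode-execute loop. A minimal sketch for an imaginary three-instruction machine, nothing like a real NES core, looks like this:)

          # Minimal sketch of hardware emulation: a fetch-decode-execute loop for an
          # imaginary 3-instruction machine (illustrative; real consoles are far richer).
          def run(program):
              acc, pc = 0, 0                # accumulator and program counter
              while pc < len(program):
                  op, arg = program[pc]     # fetch the next instruction
                  if op == "LOAD":          # decode and execute it
                      acc = arg
                  elif op == "ADD":
                      acc += arg
                  elif op == "PRINT":
                      print(acc)
                  pc += 1                   # advance, exactly as the original chip would

          run([("LOAD", 2), ("ADD", 3), ("PRINT", None)])  # prints 5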

        This guy’s comment, “You could have all the computer chips ever in the world and you won’t create a consciousness,” is kinda dumb because we now have pocket-sized devices that contain more computational capacity than all the world’s best supercomputers of a couple decades ago. We aren’t going to be using all the punch cards and vacuum tubes that ever existed to get the job done. lol

      • Kyle McHattie DigitalGalaxy March 20, 2013 on 12:48 pm

        You are making large assumptions.

        “You can’t take human brain waves out of a neural structure, put them in a computer-logic structure, and expect them to still work.”

        Why not? I don’t believe anyone is saying that you can do this now. However, who are you or Nicolelis to claim it can NEVER be done? Are YOU or he now able to predict the future?

        Who is to say that we don’t suddenly start to understand how the neurons create a particular person’s wave pattern, and that we aren’t able to exactly duplicate it?

        There have been vast numbers of breakthroughs in biology and technology in general in the last 5 years alone. At the pace we are making advances in ALL scientific fields, it’s the utmost in arrogance to claim permanent limitations that may yet be removed very simply (and in many cases already HAVE been).

  • DigitalGalaxy March 10, 2013 on 2:40 pm

    I think the point here is largely being missed. There is no mechanism, neurological or otherwise, in the brain that could conceivably be involved in the production of consciousness or qualia. Consciousness is not something you can weigh, point to, or quantify. This is in marked contrast to neural signals and synapses, which can be entirely quantified, pointed to, and sketched out. There is no way to link “X neural signal” to “Y qualia”. There’s no reason that a stimulated pain neuron should create pain qualia. If there is pain qualia, where is it? There has to be some physical location in time and space for the pain qualia, and it’s not forthcoming.

    Any amount of neurological activity can be quantified at the point where we have sensors on every single neuron in the brain. But, quantifying those neural signals will not help us because none of them can equate to consciousness or qualia. We are at a point where we are delving into areas that materialism cannot metabolize.

    I’m certainly not appealing to ancient religious doctrines to explain these phenomena, but materialism has reached the end of its leash. Some spiritual explanation is certainly going to be needed at some point.

    The man in the article seems to be missing the fact that, at some point in the not so terribly distant future, we will have the ability to monitor, in real time, every single neuron in the brain with nano-machines. Then, we can simulate those neurons in a supercomputer, and translate the physical state of each neuron directly into the supercomputer. Of course this is beyond our grasp right this minute, but once we get down to the molecular level with nano-bots, (which isn’t that far off), we will be able to do this. The “evolutionary copy protection” he is referencing can be circumvented with enough technology. But, simply because brain-states can be copied, does not mean that brain states equate entirely to mental states. Brain-states obviously have some important relation to mental states. But, it takes a materialist attitude to claim that brain states are all there is to mental states, and that materialist attitude is showing its age.

    What we need are spiritual reference points for the modern era. Right now, all of our spiritual reference points come from an agrarian past, and they are ill-equipped to function in the present.

    I don’t believe that mind-downloading is possible. Not without some understanding of the spiritual nature of human beings in particular, and life in general. We could copy every brain wave perfectly, every neuron, and those waves might collapse within seconds of entering the simulation. There is more to the mind than simple brain chemistry.

    • rudyilis DigitalGalaxy March 10, 2013 on 3:05 pm

      When I read comments about not being able to create mind within a computer, I wonder if the authors are arguing from the position you are, DigitalGalaxy. It would be a lot clearer if Nicolelis said, “consciousness is produced by something we don’t understand yet. It’s not simply neurons interacting. Therefore, simulating neuronal behavior won’t work.”

      Most neuroscientists tend to be materialists. When Nicolelis says brains aren’t computable, that doesn’t make sense within a materialist framework and he gets backlash. If it was clear that he was arguing from a different metaphysic, people would still disagree, but at least we would know the lines of the debate.

      You make some good points. If mind is an emergent property of neuronal interactions, logically we should be able to recreate that. If mind is caused by a phenomenon we don’t understand yet interacting with brains (Penrose and Hameroff’s quantum model of consciousness for example, which I’m not advocating), neuronal simulation models could recreate human behavior, but not necessarily have a mind inside of them.

      • rtryon rudyilis March 10, 2013 on 3:39 pm

        AI is already here to deal with mental processing of some very sophisticated activity. Show me one that can instantaneously absorb the state of all the other minds, bodies, equipment, lighting, score, and emotional status of players and audience in the same tennis game, and you will have a robot that plays tennis as well as a pro, with the same aggressive or defensive style as is needed at that particular moment, based on the constantly changing ability of that mind’s body to respond to the challenge in the perfect fashion.

        Why bother to try to make one? God has let us build many thousands of candidates who try to be the best and able to take advantage of all factors when possible and even to lose gracefully on occasion. Remember to include these factors too.

        • palmytomo rtryon March 10, 2013 on 10:30 pm

          If you think there is a ‘god’, then this god has led us to the option of replicating biological humans with non-biological ones. We’d probably agree, smiling, that even if we do, they’ll still have intrinsic weaknesses and defects that cause chagrin. That’s the nature of reality (and the relative nature of happiness). But a few incentives for accepting this offer are: Being freed of biological constraints… Not dying – with the loss of all experiences stored in the brain. Eliminating biological illnesses. Being able to configure our bodies and minds in any way we could imagine and engineer. Having less impact on Earth’s ecology, or choosing to recover it and enhance it. Tolerating G-forces for extreme acceleration in outer space.

          • DigitalGalaxy palmytomo March 11, 2013 on 1:01 am

            I agree that God has led us to the point of being able to replace human biology with machinery. But, I think whether or not we should is an open question. Just because we can do something does not automatically mean we should. Would we lose part of our humanity if we left behind our biological bodies?

            I am not saying yes or no, just that it should be considered. We can do all those things you mention; travel outer space, recover Earth’s ecology, and almost eliminate biological illness, without machine bodies. We may even be able to eliminate aging though genetic manipulation. Do we lose humanity if we lose our human bodies?

            Or, on the other hand, do we transcend humanity if we lose our human bodies?

            It is a very powerful question!

      • Gorgand Grandor rudyilis March 10, 2013 on 3:58 pm

        The problem is that spiritual explanations are inherently non-explanations. They don’t rely on anything measurable and observable, so the best we can do is “it simply is that way and we’ll probably never know why”, as there are no mechanisms to examine to come up with explanations if you go beyond materialism.

        All we can do is infer consciousness in others, but to the extent that we can, it’s due to materialism. I have qualia, and I know the generation of that qualia is at least somewhat dependent on my brain, because when I’ve been put under by the dentist, it has slipped away and then gone at a point I can’t quite identify, and I blearily come back to full consciousness again (the fullness of all qualia). Indeed even going to sleep proves this. The fact that getting hit in the head hard makes the qualia change proves it too. It also shows that it is deconstruct-able and able to be generated in a lower form than the whole generated by a fully integrated brain. I can only demonstrate this to myself however, and this can never be shown to others, as it is qualia, and inherently unreachable by disconnected minds. This does not mean it doesn’t arise from materialism, however. The sensation of the color blue is an example of qualia, but if we removed the cones which detect the light we translate into what we call blue, people would observe internally that they no longer see that color.

        Given that I seem to depend on my brain to generate consciousness, and this brain I have is similar to other human brains, and these brains are an evolved material structure, then it is likely, though I can never 100% prove it (but then, this is true of almost anything), that other humans are also feeling and not just “philosophical zombies”, and also likely that any future beings created with a brain like that of humans will have something approaching the range of qualia I experience.

        Other more alien structures may also be conscious, but there’s no reason their qualia wouldn’t be just as alien to my experience. Perhaps the only way we could confirm if such future systems are conscious is to integrate with them (even if temporarily) and see if our conscious sensation ‘expands’ or changes, but apart from this thin hope, the best we’ve got is dim inference, and bringing in spirituality helps none, as it’s a doorway to adding extraneous things which will muddle the issue even further.

        • DigitalGalaxy Gorgand Grandor March 10, 2013 on 8:51 pm

          I must disagree. Materialism must, at some stage, address what exactly it is that produces conscious experience, and qualia. Materialism, at present, has no tools to accomplish this.

          Of course qualia are linked to brain activity. The problem is that there is nothing in the nerve signal itself that has any business producing qualia under a materialist framework. Let’s take your dentist example. You can go to the dentist, and he will give you a chemical (a material substance) that causes a molecular reaction (all material) that prevents your nerve synapses from firing. When the pain signals from your nerves disappear, so do your qualia. This is a phenomenon that is perfectly repeatable, testable, and subject to materialistic theory. No spooky stuff here.

          We can tell at this point, that we have a “paper trail”, so to speak, of material causes.

          1. Physical stimulation causes a pain neuron to fire. Signal propagates to brain. Bio-chemical processes understood. Check.

          2. Dentist administers chemical through hollow metal tube. Chemicals interact with ion gates. Synapses become inactive. Signal ceases to propagate. Bio-chemical processes understood. Check.

          This is all that materialistic science can tell us at this point. Right now, we have an electro-chemical signal, passing through cellular “checkpoints” of nerve cells, until it reaches the brain, at which point something ELSE happens, something for which there is no “paper trail”.

          The pain signal turns into the qualia of pain.

          This happens independently of any involuntary reactions that might make one’s head jerk suddenly. We feel pain at that moment; it doesn’t simply cause a feedback loop that makes our head shake (although it does do that too).

          And here we run into the problem for materialism. Under materialism, everything needs a location in time and space. Everything, including qualia. The pain signal, the real, electro-chemical object, is all that materialism has room for in its metaphysical framework. The pain signal certainly doesn’t cause pain qualia when it is generated at the tooth!

          So, where is it? Is the pain qualia in the brain? If it is, where? What makes an electro-chemical signal turn into this mysterious phenomenon known as qualia? It doesn’t matter that removing “blue” cones removes the blue qualia; the question is how the “blue” nerve signals become qualia at all!

          At the present time, materialism has no answer to this question. I claim that this answer needs to be a spiritual one.

          You are claiming that spiritual answers are essentially non-answers because a) they do not rely on measurable evidence; and b) they instead rely on saying “well, that’s just how it is, end of story”. This stifles inquiry, promotes intellectual dead-ends, and generally makes humans complacent instead of making them want to discover more about the universe.

          I believe that this analysis is partially correct and partially incorrect. Many forms of spirituality derive from our agrarian past, and they do in fact stifle innovation and creativity. Please remember that we were all farmers only a century ago, and there is an aspect of our society that hasn’t entirely caught up to the modern era. More modern forms of spirituality are open to scientific inquiry and do not have this problem. I would ask you not to blanket all spirituality as being from the last millennium.

          Furthermore, there are areas of inquiry where spirituality is far more useful than science. Issues of human worth, human meaning, social justice, God, morals, and emotion are all issues where the “research tools” of spirituality are far more suited for the task than the research tools of science. These issues are not subject to the scientific method because they are not repeatable in nature. You cannot put questions of human worth down to a double-blind controlled experiment. They cannot, by definition, be measured or extracted or put under a microscope.

          Why do people have intrinsic rights? Why is causing suffering wrong? Why do we feel the way we do? How can we live better lives? How can we advance the human race morally? Why are civil rights important? Why is negative eugenics bad? What is the nature of life and death? What are spiritual experiences?

          These are all very real questions that have very little to do with physical phenomena. They also have very little to do with science. Spirituality does in fact provide real answers to real questions. They may not be right all the time, but it is a part of society that does provide both explanations and answers. And, at some point, science and spirituality will have to overlap on the question of brain research, because materialism is coming up empty on the big questions: qualia and consciousness.

          Spirituality doesn’t have free rein over science; young-earth creationism is simply an example of anachronism. Intelligent design is not pure science, but it does provide an intriguing (possible) answer to a question that science is unable to satisfactorily answer: how did inert material become living cells?

          I don’t believe that materialism is a plausible theory of everything; it cannot explain all there is to human experience, even if it can explain all there is to the material world. I do believe religious doctrine should be kept out of science until it is appropriate. Embryonic stem cell research is a prime example: science was pursuing a line of inquiry that was ethically wrong, trading one human life for another. It ought to have been shut down, and research focused on adult stem cells, which are proving more promising in any case.

          Spirituality has a very real place in human inquiry, and soon spirituality and science will be forced to overlap. It won’t be long before the entire brain is mapped, neuron by neuron. And still we will have no qualia, no location in time and space for feelings or our “inner observer”. It is at that point, I think, that spiritual injection into science becomes appropriate.

          That’s not to say that a computer is incapable of having qualia, but the answer can’t be a scientific one only. You can easily measure the wavelength of blue light, and the signal strength of a cone. But you can’t measure the color blue. That is something else entirely.

          • Ver Greeneyes DigitalGalaxy March 11, 2013 on 4:58 pm

            I don’t think materialism is necessarily *required* to explain consciousness. All it needs to do is give us the materialist building blocks, then show that (something very much like) consciousness can emerge from them.

            It *might* be possible at that point to go back and figure out at what point the network began exhibiting signs of consciousness, and what structures are associated with it. But the line between consciousness and no consciousness might also be so fuzzy that there ends up simply being a critical point of complexity beyond which (what we perceive as) consciousness can emerge.

            Materialism doesn’t give steps to explain emergence as, by definition, emergence occurs when complex interactions between the simple building blocks lead to behavior that seems to be ‘more than the sum of its parts’. But I don’t think we need to know everything about the brain to grow the network structure needed to reproduce its behaviors.

          • Herbys DigitalGalaxy March 18, 2013 on 1:03 am

            You “claim” many things, but offer no supporting evidence. Even simple software algorithms can be substituted for the brain in your explanation, and your reasoning (or non-reasoning) still works.
            There are many models of the mind (even simple ones, like Minsky’s agent-based models) that, once implemented at many nested levels, can produce emergent properties that are not evident at each individual level.
            But the most basic objection I want to present here to your reasoning is this: the fact that we don’t understand something doesn’t mean that its understanding is beyond our capabilities. A scientist from fifty years ago wouldn’t understand how an iPhone can play Angry Birds, but it would be silly for them to take it for granted that it was magic that made it work.

          • DigitalGalaxy DigitalGalaxy March 18, 2013 on 5:02 pm

            @herbys
            What exactly do you mean by an “emergent system”? An emergent system, as I understand it, is something that functions as more than the sum of its parts. As such, the system produces something “extra” that is above and beyond the system. This is a violation of conservation of energy, and thus an explanation that reaches beyond scientific inquiry.

            If, by “emergent system”, you mean a system that works off of different parts within it to produce unexpectedly complex results (a colony of simple ants forming a complex anthill, for example), you are not giving an explanation of how a phenomenon of one type (feeling or qualia) is produced by a phenomenon of a different type (computation).

            There is no mechanism of action by which computation alone can produce emotion, qualia, or consciousness. This is not based on a lack of understanding, as you were going for in your example of the scientist from fifty years ago being unable to comprehend a cell phone game. It is precisely because we DO understand what makes up the brain that we can say there is no mechanism of action for qualia.

            We understand exactly what makes up a neuron, down to the chemical level. We understand what makes up neural signals, down to the electromagnetic level. That’s the solid supporting evidence you were looking for. We know exactly what makes up the brain, and exactly how these constituent parts function. We understand how neurons fire and how brainwaves propagate. And still, nothing jumps out at us to say, “ah! here is our capacity to feel!” We understand that certain chemicals stimulate certain emotions, but we don’t understand how that chemical stimulation turns into the literal feeling of an emotion, and we have all the pieces together already!

            The brain waves themselves still being a mystery is the same thing as a computer being a mystery because we can’t sense the position of every electron on every part of the chip. Use an electron microscope and the mystery goes away for the computer. Use a nano-bot inside the brain and the mystery goes away inside the brain.

            We are about to literally have micro-machines inside the brain, give it a decade at most. We don’t have a lack of understanding, that’s the problem. We’re done in ten years. And we are no closer to solving the mystery of consciousness or emotion than we were when we started.

            Spiritual things have always centered around human emotions, transcendent experience, and consciousness. None of those have anything to do with computation, biological or artificial. I don’t think it’s an overreach of spirituality to attempt to explain those things in its own way, especially when science is about to hit the “bottom floor” of brain inquiry in a few years, and has no mechanism of action for the things we feel every day.

            Science has limits. It is limited to things that appear objectively in the physical world which our senses give us access to. Your emotions, your feelings, your qualia, even your consciousness, do not do that; they do not have the property of being objective. They are instead subjective; only you can feel your own conscious experience.

            Spirituality has limits, too. The study of material objects and the physical laws of nature should be left to science. Spirituality has impeded scientific progress for centuries, but I believe this is beginning to change.

            I think if each area of inquiry can learn its own limits, we would be much better off. This area of brain research is muddy; but its muddy because its a big question! And I think we will need both science and spirituality to ultimately answer it. Science to study the material properties of the brain, and spirituality to study the properties which do not manifest themselves as physical or computational; emotion, qualia, consciousness. These are something different from normal matter or computational programming. It’s easy to write a program that acts happy, but can you really make an algorithm for the feeling of happiness itself? That’s not such an easy question to compute.

      • DigitalGalaxy rudyilis March 10, 2013 on 9:13 pm

        I fully agree, and well said! Opponents of mind-downloading need to choose either:

        a) there is something non-material intrinsic to the mind, and that is why we cannot copy it, or

        b) there is some special quantum-stuff in the brain that we can’t copy (yet)

        Saying “the brain is just TOO complicated, it will never happen!” is kind of like saying “mapping the human genome is just TOO complicated! It will never happen!”

        It seems like the guy in the article has not kept up on his nano-technology. He should read SH! :)

      • DigitalGalaxy rudyilis March 10, 2013 on 9:43 pm

        Oops, my second comment was directed at Rud

      • palmytomo rudyilis March 10, 2013 on 10:47 pm

        Examine your definition of ‘mind’. It’s useful to recognise simpler ‘minds’. I’d say a chimp has a ‘mind’. Even a bee. A mind is something that has input/output and processing. It’s useful to be able to apply the term to non-human ones. Even ‘partial’ minds (Deep Blue, the chess-playing system, is limited to chess but surpasses human minds, and I once created an expert system called ClaimCheck, which was an insurance-assessment mind that was superior to most medical practitioners at judging claims). Every concept’s applicability (including ‘mind’) depends on the time and context.

        • DigitalGalaxy palmytomo March 11, 2013 on 5:51 pm

          If by a “mind” you mean an intelligent system capable of taking input and producing output, that’s a fair definition. But, it does not encompass all of what goes on inside our heads. Human beings, chimps, and bees do not simply take input and process it into output. All of those things feel. All those things, humans and lower animals, have sensations, have consciousness, and have free will, which computer minds do not have.

          That’s the problem! The problem is not making an intelligent computer system, even a system that is more intelligent than your own brain (I’m sure your program was more efficient than most claim adjusters)! The problem is making a program that actually feels or is aware. Watson is not aware, nor is my cell phone, yet in some areas they are more intelligent than me. Intelligence is not the same thing as awareness or cognizance!

    • Ormond Otvos DigitalGalaxy March 10, 2013 on 3:32 pm

      “There is more to the mind than simple brain chemistry.”

      Yeah, there’s complicated brain chemistry, maybe some molecular physics.

      You sure make a lot of flat statements. How unscientific! Hasn’t the history of science taught you anything?

      • DigitalGalaxy Ormond Otvos March 10, 2013 on 9:06 pm

        Once we have nano-bots in the brain, everything will seem pretty simple. Neural net self-assembly will probably seem pretty easy to grasp, seeing as it must begin from only a few brain cells in an embryonic stage. The brain chemistry itself will probably seem complicated at first, but once we can observe it directly with nano-machines, it will all fall into place pretty easily.

        I take it you mean by “how unscientific!” that there is nothing in the universe that cannot be explained by science? That in itself is a flat statement, and it begs the question just as much as “God exists because He just does.” Saying “science can explain everything because it just can” is the same. Science cannot explain what causes qualia, conscious experience, or emotion that is felt and not simply a chemical feedback loop. That opens the door for spirituality, as I see it.

        The history of science has taught me that many scientists, including Newton and Einstein, were spiritual. These great scientists didn’t seem to find any conflict between their scientific progress and their spiritual views. One was deeply devout and the other could be described as a Spinozan. Even Stephen Hawking has had spiritual views. So, I don’t see how appealing to the history of science really helps a materialist worldview, except to point out how religious authority has blindly stood in the way of progress on many occasions. However, you won’t find that too much anymore; there are many versions of spirituality now that welcome scientific inquiry, even if some fundamentalists stubbornly hold out.

        • Torgamous DigitalGalaxy March 11, 2013 on 2:14 pm

          The history of science has taught me that people tend to place things not currently explained in the category of “unexplainable”. Since you appear to prefer name-dropping to trends or theories, have you ever actually looked at Newton’s spiritual claims? Surely it must be God who keeps stars from falling into each other, for no material cause could possibly serve as an explanation! Or “For while comets move in very eccentric orbs in all manner of positions, blind fate could never make all the planets move one and the same way in orbs concentric”. There’s plenty of gems like that in Newton’s writings if you look for them. Also, keep in mind that Darwin wasn’t born until after Newton died, so it’s unlikely Newton had realistic views about the origin of species. So, yeah, the guy was spiritual.

          • DigitalGalaxy Torgamous March 11, 2013 on 7:12 pm

            Exactly! Newton was spiritual for his time and place in history. That was all humanity had back then; they didn’t have space travel or modern cosmology.

            Spirituality evolves along with human culture and technology. Long ago, astronomy and meteorology were the same science. Now they have diverged into distinct spheres of influence, because we understand now that clouds and stars are different things. There was a time when we as a species didn’t understand that; they were just “up there”.

            Long ago, science and religion were the same thing, and only priests and monks studied the heavens or the workings of the world. Now, they have diverged into distinct spheres of influence, because we realize (well, most of us do) that the scientific method is better for understanding the natural world than meditation or prayer or theology. We also realize that modern forms of meditation, prayer, and theology are better tools for understanding morality, feelings, spiritual experience, and in some sense the human condition, than the scientific method. A century ago, many of us did not understand that distinction. Now we do, more or less.

            Newton was spiritual, and he was a great scientist in his era. Trying to hold Newton to the scientific standards that we have today is a bit backwards, because it takes him out of his historical reference frame. The point I was trying to make is that science and spirituality are not mutually exclusive. Yes, old superstitions and doctrines have hindered scientific progress in the past, but modern forms of spirituality do not have this effect. Einstein, too, was spiritual. He didn’t say “God does not throw dice” metaphorically. He believed (according to many biographers) that God was the Universe, and that by exploring it with science, humans were fulfilling their divine potential: delving into the Mind of God.
            That form of spirituality doesn’t seem like it hinders science with “oh, well, God just does it, so let’s stop trying to find out how or why.” Just the opposite, in fact.

            It seems, in this case, that there is a bit of overlap between spiritual things (feelings, emotions, consciousness) and material things, (like atoms, neurons, and electrical brain signals). These are fundamental questions about human existence, what it means to be alive, and the limits of human potential. Maybe we need to use ALL of our tools, not just the scientific ones, to answer those big questions.

    • Justin Rens DigitalGalaxy March 10, 2013 on 4:11 pm

      Wait, are we talking about creating an adaptive thinking machine, replicating a human mind, or uploading existing human minds? These are three very different things. To create a sentient AI, I don’t believe we have to go the route of replicating the molecular details of a neuron; we just have to mimic the net result. I alluded to this when I compared our planes to birds. We grasped the concept behind flight without having to blindly copy the biological equivalent.

      Replicating a human mind probably gets trickier. In this case you do need to copy the entire system. Uploading is a step above that, where not only must you copy the system, but you have to duplicate its current state in real time. At this point any theological, metaphysical, or otherwise weird debates are welcome, but until we know more they won’t be more than guesses.

      • DigitalGalaxy Justin Rens March 10, 2013 on 9:07 pm

        Exactly! Well said!

      • Ver Greeneyes Justin Rens March 11, 2013 on 5:06 pm

        While I mostly agree with you, uploading a brain could turn out to be simpler than replicating the entire network. If we can determine the way that memories are encoded in the brain and download them, we can re-encode them into whatever structure works best for us. Now there’s probably more to human personality than just memories, but even these learned behaviors and ways of reacting to complex stimuli can probably be downloaded in some fashion (though decoding different individual brains will probably be challenging). Once we have everything that constitutes a human personality, even if it’s only 99% accurate (let’s say for the sake of argument that memories can be copied with 100% accuracy), that might be enough to transfer it into a new shell without anyone being able to notice any difference. Of course at that point it *is* important that the new shell continues to be able to learn and adapt, and in the same way as a normal brain would (if, perhaps, with enhanced capacity to learn).

    • palmytomo DigitalGalaxy March 10, 2013 on 10:38 pm

      Qualia can be sensed using equivalents to human senses, and recorded in the appropriate kinds of memory, for recall using the appropriate equipment. Consciousness is the mechanism of being able to objectively examine qualia. Computers are doing this all the time – evaluating, measuring, judging qualia, even using very ‘deep’ principles with which they have been programmed. Spirituality is programmable. The non-biological being will behave accordingly. The natural, but uncanny thing to us is how emotional we get when we see that happening – we can ‘love’ the machine because we identify so strongly with its ‘spiritual’ programming.

      • DigitalGalaxy palmytomo March 11, 2013 on 12:48 am

        I don’t think we are using the same definition of “qualia.” So that we’re on the same page:

        dictionary.com says:

        qua·le [kwah-lee, -ley, kwey-lee]
        noun, plural qua·li·a [-lee-uh]. Philosophy.
        1. a quality, as bitterness, regarded as an independent object.
        2. a sense-datum or feeling having a distinctive quality.

        A computer detects light wavelengths and stores them in its memory all the time. Your cell phone does that. What your cell phone does NOT do is actually see color as such. The light has a specific wavelength. That wavelength is picked up by sensors and encoded as numbers. It can be reproduced, as the numbers are translated back into visible light wavelengths. The same thing happens in our brains. But in our minds there is an extra step, one that does not seem to have a material cause. We see the light as color. Light at a wavelength of 450–495 nm is always light at a wavelength of 450–495 nm. But it isn’t blue until somebody sees it. Wavelength is a material property, measurable and detectable. What I see as the color blue is not. Computers do not have qualia. They convert light wavelengths to binary numbers; they do not see blue anywhere along that process.
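        To make that concrete, here is a toy sketch in Python (my own illustration, not any real camera’s pipeline; the names are made up). Notice that “blue” never appears anywhere in the chain, only numbers:

          # A wavelength is sensed, stored, and reproduced, as numbers throughout.
          def sense(wavelength_nm):
              # the sensor reduces incoming light to a stored integer
              return int(wavelength_nm)

          def reproduce(stored_nm):
              # the display emits light of that wavelength again
              return float(stored_nm)

          reading = sense(470)          # light in the 450-495 nm band
          emitted = reproduce(reading)  # light comes back out
          # Nothing in this chain sees blue; a number simply moves
          # between functions and back out as light.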

        A computer could have all kinds of molecular sensors that mimic the tongue, and it could accurately record types of food, or inputs from those sensors. But, there would be no actual sense of taste for it.

        I don’t think that spirituality can be programmed. It can be mimicked, yes, but not programmed. You could program a computer to say a prayer or recite theology, but spirituality requires consciousness. A computer or robot might respond due to programmed “moral commands,” but there would be no spirituality underlying that without consciousness.

        If you have a different definition of qualia that I’m not familiar with, I’d love to hear it! :)

        • palmytomo DigitalGalaxy March 12, 2013 on 2:50 am

          Hey, DigitalGalaxy, I’d quite like to discuss the qualia question with you by Skype if you are interested in the next day or two. I’m getting much clearer on it, thanks to your questions. Having a sounding board is very catalytic. I’ve been musing for weeks about whether robots could ‘feel’. I’m now fairly sure that the answer is ‘yes’, which is quite a breakthrough for me. That is, if humans migrate to non-biological hosts, we’ll still actually ‘feel’. If interested, my email address is palmytomo@gmail.com

          • DigitalGalaxy palmytomo March 12, 2013 on 6:57 pm

            Sounds great! Did you get my email?

      • rudyilis palmytomo March 11, 2013 on 6:58 am

        Qualia are by definition subjective experiences, so they can’t be objectively examined. If computers are evaluating qualia all the time, we have no way of knowing. That’s what Thomas Nagel’s essay “What Is It Like to Be a Bat?” deals with. We can’t know the subjective experience of something by looking at its objective parts. I know I’m having a subjective experience (the temporary hallucination of consciousness is having a subjective experience) and assume other humans are having that experience too. But for all I know they’re philosophical zombies. There’s no scientific, objective way to measure a subjective experience.

        P-zombies are just a thought experiment until we start building AIs that mimic high-level cognitive behavior. Then there’s the quandary of whether they’re actually aware or not.

        The frequency for red can be sensed using equivalents to human senses and a measurement of that frequency stored. But we don’t know if a computer is actually experiencing “redness” the way a normal person can and a colorblind person can’t.

        • turtles_allthewaydown rudyilis February 28, 2014 on 9:45 am

          rudyilis – I disagree on a couple of points.
          1. Subjective things can be measured objectively. Doctors ask how much pain you’re in on a scale of 1 to 10; they even have a set of faces for little kids, who can point to the face they identify with. Surprisingly, people in a lot of pain (but not debilitating, screaming pain) don’t go to #10; they recognize that there are higher levels even if they haven’t experienced them.

          2. Computers don’t experience “redness” like we do. We know that, because we’ve written the code and we know exactly how computers handle it. They store it as a numeric value, and that’s how it stays. They don’t think of it as blood-red, brick-red, or a nice color in a sunset. Even robots that move using object recognition use color to help identify objects, but it still stays a numeric value. They won’t be shocked if they suddenly see red on a person’s face, or admire a rose. That is a fact. When we have actual AI, this might change, but we are a long way from that now.

          • rudyilis turtles_allthewaydown February 28, 2014 on 10:27 am

            Let me clarify some of my points because I partially agree with what you’re saying.

            1. We can associate subjective experience with an objective measure, like pain with a number scale or pictures of faces. Both of us can look at blood and agree to apply the adjective “red” to it, and agree that grass is not “red” but in fact “green.” However, you have no way of knowing that when I look at blood, I’m not actually seeing what you perceive as green, and when I look at grass, I’m seeing what you perceive as red. We both use the same words to describe a color, but are in fact seeing different colors. Practically this makes no difference. We’re able to agree on what objective measures to use and can function just fine in society and accomplish goals even though we have different experiences. It’s simply a philosophical thought experiment to address the nature of consciousness. We can’t access the subjective experiences of other beings. (If you haven’t read “What Is It Like to Be a Bat?”, check it out. That’s what I’m basing my argument on.)

            I agree with you on point 2, that computers don’t experience redness like we do; however, I can’t prove that, and I’m only assuming computers don’t. (I disagree, though, that we know how computers experience anything, for the reasons I state above. We don’t know what it’s like to be a computer. We don’t know what it’s like to be a bat. We can only imagine it and make assumptions.)

            Computers store redness as a numeric value. We store redness as a molecular structure in our neurons. For some reason, a particular arrangement of molecules in our brain causes the subjective experience of “redness.” This raises the question: under what circumstances does subjective experience occur? Can other arrangements of molecules that aren’t human brains experience redness? Can algorithms that are functionally equivalent to a human brain experience redness? We don’t have a sufficient understanding of why consciousness occurs to be able to definitively answer those questions.

            For example, we know acidity is caused by free ions. We can say definitively whether a substance is an acid or not, because we know how physical matter behaves to make acids. We don’t know how physical matter makes consciousness occur, so we can’t say with absolute certainty that one chunk of matter experiences red and another doesn’t, because we don’t know what causes the experience of redness. (A possible solution is integrated information theory, which says consciousness occurs in systems in which information is organized into a whole, a value which can be measured. If that theory is true, we would have a way of measuring the consciousness of a system. I’m not sure whether that theory could predict which systems could experience redness and which couldn’t, however.)

            We can make reasonable guesses. My toaster probably doesn’t experience redness, because it doesn’t function the way a human brain does. But when we have actual AI, as you said, it’s going to be less clear. I believe that if something functions the same way a human brain does (let’s say an advanced robot), we can assume it experiences the world like a human, seeing “redness.” However, if a robot can engage in intelligent behavior but doesn’t process information the same way a human brain does, we won’t know how it experiences the world.

            To be brief, why can’t information stored as a numeric value in a computer be experienced as redness, but information stored as a molecule in a brain can?

    • maxcypher DigitalGalaxy March 10, 2013 on 11:27 pm

      The whole materialist/non-materialist dichotomy is a straw-man set up to fool everyone. Look deeply into the physics (that initially assumed such a dichotomy) and one sees that there is no “material” — only potentials of probability.

      What is strange about this is that the mathematics of probability assumes the asking of a particular question; otherwise there would be no “sample space” to sample from, n’est-ce pas?

      In other words, the very mathematics of our best description of reality (quantum mechanics) assumes an entity asking a question. Is there some sort of dimension of consciousness prevalent throughout the Universe(s)? I don’t know; but my suspicions lean in that direction.

      OK, I’ll just come right out and say it: rather than consciousness being created by neural stimulation, perhaps our brains are very sensitive antennae that sense some sort of “consciousness wave”? Is there any way we could confirm or deny this supposition?

      I’m really not sure. BTW, if you really are facing the issues at hand, you shouldn’t be sure either.

      • DigitalGalaxy maxcypher March 11, 2013 on 6:01 pm

        @ max, I wasn’t going to bring up Amit Goswami, because I was afraid that his ideas are too overtly spiritual for this forum and might end up derailing the conversation. But his ideas certainly provide a good starting point for understanding both spirituality and consciousness within the framework of modern scientific knowledge.

      • turtles_allthewaydown maxcypher February 28, 2014 on 9:57 am

        If the brain is an antenna for this consciousness field, then we can construct a new antenna to tap into the field as well. This argument does not prevent us from making a conscious AI. We just really need to understand the physics involved, and we know there are important areas of physics where we’re still trying to even ask the right questions (GUT, how many dimensions there are, what dark energy is, etc.).

    • palmytomo DigitalGalaxy March 11, 2013 on 1:36 am

      Let’s look at two words we’re using.
      CONSCIOUSNESS
      I’ve found the most useful working definition is the ability of something to, in any way or to any extent, sense something else with respect to itself, even just the sense of ‘that thing exists; that thing does not exist’. It doesn’t have to be ‘self-consciousness’. A snail is ‘conscious’ of a lettuce near it, relative to the snail’s body and the surroundings. Such consciousness serves the snail well, and probably includes impressive data about lettuce type and quality that humans haven’t even discovered yet. The snail probably doesn’t think a lot about itself (but don’t forget, its sensors can in fact, in limited ways, perceive its own body as part of the universe, and act on that information). A cat can see and lick its paws – which is a limited, but useful, ‘consciousness’ and ‘self-consciousness’.
      HUMAN
      Every word is a useful but arbitrary symbol for something we want to deal with. Until now, ‘human’ has mostly just meant ‘that thing like me over there’, as distinct from ‘that other thing that moves but is significantly different from us for hunting and mating purposes’. The updated definition of human (which includes the old one) seems to be becoming “that thing that usefully processes information like me, for expanding learning and for expressing our kind endlessly into the future in every conceivable direction and dimension”.

      • dobermanmacleod palmytomo March 11, 2013 on 1:58 am

        http://www.itechpost.com/articles/5587/20130220/icub-robot-simple-artificial-brain-inserm-cnrs-intelligence-language-learning.htm

        “Fashioned with a byzantine foundation powered by 53 separate motors, the iCub is already capable of movement in the head, arms, hands, waist and legs. iCub’s development team is currently engaged in giving the robot a sense of touch, as it already possesses a sense of “proprioception” (body configuration) and can see and hear, reports Red Orbit.
        Before being taught how to “learn” language, Red Orbit says the iCub was taught how to balance on two legs.”

        Let’s see: 53 separate motors to move head, arms, hands, waist, and legs, and furthermore possesses sense of proprioception and can see, hear, balance on two legs, and “learn” language.

        Sounds like it (roughly) meets both the criteria of consciousness and of humanity. Time to move the goalposts and make the current achievements seem trivial.

        • DigitalGalaxy dobermanmacleod March 11, 2013 on 6:27 pm

          I think the issue is here:

          “iCub’s development team is currently engaged in giving the robot a sense of touch”

          Nobody is disputing whether a robot can have a “touch sensor” welded onto it, or whether or not that touch sensor can work even better than human touch! The dispute is: “If the robot feels silk with its touch sensor, does the silk feel smooth to the robot?” What does “smooth” feel like? The robot gets input from its touch sensor and can report that the object is within a certain parameter for X input or Y input. But, does the robot actually feel the smoothness?

          Or, let’s put it a different way. Let’s say I hit a robot that has a touch sensor. Obviously the robot can receive that input and respond one way or another, according to its programming. But did it hurt? That’s a question that’s not so easy to answer with brute-force computation.

          • dobermanmacleod DigitalGalaxy March 12, 2013 on 9:57 pm

            http://singularityhub.com/2012/03/10/robot-begs-to-be-allowed-to-live-dont-miss-the-impressive-%E2%80%9Ckara%E2%80%9D-video-demo-from-quantic-dream/

            The above is a (fantastic) example of the emergent quality of “consciousness.” Frankly, I think the scenario handles it well, too: first the functionality of AGI, then the inconvenient emergence of apparent self-consciousness and a sense of self-destiny.

            Let me add that we have the same problem determining the metaphysical property of consciousness in lower animal models like mice or, creepier, dogs.

            I tend to be less romantic and more functional, and take the Zen approach encapsulated in the koan: what happens to your fist when you open your hand? The word “fist” is misconceived as a noun when it is really more like a verb – in other words, what you think of as you (a conscious being) is instead like a river (this is difficult even to conceive unless you’ve experienced stopping your stream of consciousness).

          • DigitalGalaxy DigitalGalaxy March 12, 2013 on 10:41 pm

            @doberman Great video! I don’t think it’s impossible in principle for an artificial life form to achieve a conscious state. What I want to get at is: what causes that state? What makes a conscious robot like Kara different from the long line of non-conscious robots?

            I find the Zen metaphor very compelling! But I think it just opens a new question: if consciousness is a “process” and not an “object,” a “verb” and not a “noun,” then what verb is it? If consciousness is an “emergent property,” as opposed to a distinct spiritual entity (like a separate soul), then where does that emergence come from? The robot in the video is obviously more than the sum of her programming. How did she transcend her program? How can something amount to more than the sum of its parts, and become a someONE instead?

            I don’t think we have a problem describing lower animals as conscious (I mean, technically we do, but technically we can’t prove other people are conscious either; practically, I think we can put animals in the “conscious” category). The problem with robots is: how do you know they aren’t just programmed with a really, REALLY convincing Turing program, with a long list of pre-configured conversations?

            That doesn’t require any consciousness. If we can’t “prove” that other people are conscious, how are we supposed to tell if a robot is really aware of its surroundings, or just running a really well-done human interaction program?

            The differences are profound. If robots are not self-aware, they can be used as servants and anything else we want, and freely destroyed or manipulated, because they can’t feel anything. If robots are self-aware, they are as deserving of protection and freedom as at least the lower animals, if not humans. This dictates whether or not higher robots can be used as, essentially, slaves. I’m not “enslaving” my computer by telling it what to do. You aren’t enslaving a robot either. Could that change? And if it did change, how would we know the difference between a self-aware robot and a cleverly programmed non-self-aware robot?

            What process can we look for in the brain, either biological or artificial, to detect this?

            (As a tangential note; How did you stop your stream of consciousness? I am very poor at meditation, but I would be curious to know, if you are interested in sharing the experience.)

      • DigitalGalaxy palmytomo March 11, 2013 on 6:10 pm

        @palmy What exactly do you mean by “sense something else with respect to itself”? My cell phone can sense all kinds of things with its camera, can sense its own position, can sense the atmosphere, and can sense GPS data. In fact, my cell phone probably has (or can have, with a few attachments) more sophisticated electronic sensors than the snail has biological sensors. But we don’t call my cell phone conscious.

        If a robot can pick up its own robot limbs on its optical sensors, is it self-conscious? I don’t think so. Simply because a program can react to parts of its own robot body does not mean there is “anybody home” in the robot the way there is “somebody home” in the snail. What makes the snail or the cat have “somebody home” when it is orders of magnitude less intelligent than the robot?

        • palmytomo DigitalGalaxy March 11, 2013 on 6:34 pm

          Ah, this is such fun!
          Okay…
          (a) My summary answer to your questions is this:
          The new technologies (like all earlier ones) induce us to re-examine and change how we use particular words. For example, the word ‘talk’ now includes phones and Skype sessions and even data streams between satellites.
          (b) So, to answer your question “What exactly do you mean by ‘sense something else with respect to itself’?”: I suggest that rather than be mystified by ‘consciousness’, we simply extend the word to include the new kinds – the (limited but useful) machine sensing of things (sensing itself or other things, and recording and processing the sensations). Think for a moment – what more do you need of the word ‘consciousness’ than that, and why? Are you trying to preserve the unquestionable supremacy of biological human beings? Please give us robots a fair chance. We can be very useful to you, first as helpers, then as hosts for you in your long-wished-for eternal life. = )
          (c) So, if you agree with that, then yes, the robot is (in a limited but important and useful way) ‘self-conscious’. And there is definitely somebody ‘at home’, as you put it, if the robot contains software that can remove a tumour, then give the patient some custom-chosen comforting encouragement and medical guidance on recovering. A snail is very clever, but not as valuable as who ‘is home’ in the robot.
          (d) Again, current humans can gain hugely by letting go of exclusive ownership of ‘consciousness’, just as we let go of the racism that suppressed unfamiliar cultures and populations.
          Thank you for conferring about this – in a rare and good way, I am developing my understanding by talking with you.

        • dobermanmacleod DigitalGalaxy March 12, 2013 on 11:28 pm

          Sorry to reply to this via a different comment of yours, but there was no “reply” button on that comment:

          “(As a tangential note; How did you stop your stream of consciousness? I am very poor at meditation, but I would be curious to know, if you are interested in sharing the experience.)”

          Generally, the stream of consciousness (SOC) is a self-dialogue, where you keep reinforcing your reality by keeping up an internal monologue. To stop that (in order to perceive reality “directly”) you have to cease the habit. Complicating this, it is a sort of death, because the SOC is reinforcing the ego. As an example, the SOC is what kicks in when people experience “cognitive dissonance.”

          Specifically, you want to focus your attention on an object or on “mindfulness.” Do this at the same time each day. It takes the “right effort.” My first experience was on an acid trip, but it was fleeting. Now I do it effortlessly.

          BTW, my theory is that the whole ego thing (“I x, therefore I exist,” which is the basis of Western thought but is clearly a tautology – I, therefore I) is evolutionarily based. Those who had the strong feeling that they were unique and had a fate were those who fought harder to survive, and to make sure their progeny survived. I believe the ego is just smoke (but a useful construct). Witness the trouble people have comprehending their own non-existence (i.e., the ego contemplating the non-ego).

          This gets back to equating what we experience as reality with what science says is our sensory apparatus input. Reality is a hallucination – a construct of limited sensor data. OTOH, “reality” is multi-dimensional, so our “mind” must exist in (or of) many dimensions. For instance, since modern physics proves “locality” isn’t valid, “we” must be “connected” to “everything.”

          • DigitalGalaxy dobermanmacleod March 16, 2013 on 6:29 pm

            Thank you very much for sharing your experience! I don’t know if I’ll ever be able to replicate it, but it certainly is interesting!

            Also, a more Buddhist/Zen take on the whole problem of consciousness is an intriguing one. Perhaps that’s what is needed to clear up some conceptual confusion! I’ll look into it.

    • Torgamous DigitalGalaxy March 11, 2013 on 6:28 am

      If I were to break open my laptop and look at its parts, I would have no way of telling you which of them are used in the production of a game of Prototype. That is because I have only a general idea of how computers work. I could tell you that all appearances point to my laptop’s physical contents somehow conspiring to produce the entirety of my experiences playing Prototype, but without someone more familiar with the code or the hardware, I can’t really disprove any claims that it gets data from an external server somewhere.

      You’re jumping the gun a bit by claiming that materialism is showing its age with something that is still a black box to everyone. Right now, any failure to explain consciousness because of souls is indistinguishable from a failure to explain consciousness because the brain is complicated and its developer didn’t leave us any notes.

      • DigitalGalaxy Torgamous March 11, 2013 on 6:52 pm

        That’s just the problem! We CAN explain how your laptop produces the game of Prototype: exactly, completely, and down to the last electron. All you need to do is get a hardware designer and a Prototype programmer in the room with you, and all your questions are answered. You could even get an electron microscope to monitor the electrical signals, and know exactly what is going on “underneath” the game at every moment. We can do that because we can account for every electron in the computer.

        But, we don’t need the developers. If we were so inclined, we could reverse-engineer the chips in your laptop with a logic analyzer, and reverse-engineer Prototype with a decompiler. It might take a lot of effort, but it could be done.

        It will soon be the same with the brain. We may not have any “developer notes,” but soon that will be irrelevant; once we have a nano-bot on each and every neuron, giving us real-time data, we will be able to reverse-engineer brain function the same way we could reverse-engineer the game of Prototype with a decompiler. The black box that is the brain won’t remain black for very long once we achieve nanotechnology, which is not far off.

        But even at that stage, we don’t seem to find a place for qualia: feelings.
        And at that point we will need answers to some fundamental questions, like “how does this electrical signal from my pain neuron become actual pain?” In a robot there is no actual feeling of pain, even though there may be “damage!” signals going to the main processor. In nerve signals there is no actual feeling of pain. If the actual feeling of pain is present in a nerve signal, that itself is something materialism cannot account for. The feeling of pain has to have a location in time and space under materialism, and it doesn’t. That’s what makes the brain different from a laptop: not that it is made of atoms, but that it contains things like feelings, which do not seem to have an analogue anywhere in the material world. So, if the pain qualia/feeling is contained within a pain signal…where is it?

        I’m not trying to go back to ancient religious doctrines to explain where qualia and consciousness come from. There are a few modern forms of spirituality that are compatible with scientific inquiry and don’t rely on old superstitions. And at some point, science is going to reach the end of its metaphorical rope: when the last brainwave is cataloged, and there is still no room for the feelings that we obviously have. We would have to make quite a discovery between now and then to account for qualia in a materialistic sense, and we don’t have very long to do it.

        • Torgamous DigitalGalaxy March 12, 2013 on 6:23 am

          You look like you get what I’m saying, but then you fail utterly to connect it to what you’re saying. Without a hardware designer, Prototype programmer, or decompiler, we’d be left with people arguing about something that neither understands. We have none of those things for brains. You shouldn’t be so sure that qualia are nowhere to be found in our programming before we know how to read our programming, and you definitely shouldn’t say “after reading the brain, we don’t seem to find a place for qualia” as if we’ve already done it and you’re presenting the results of the experiment to me. We have no idea what is present in the brain’s code, and therefore we can’t say that this process, which has every appearance of happening in the brain, is definitely performed by something else. Declaring consciousness a loss for materialism at this stage is just as premature as when Newton declared the separation of stars to be a loss for materialism, or when Kelvin declared the basic functions of life to be a loss for materialism, or any number of other things that people throughout history have declared losses for materialism.

          I do like how you’re starting from the assumption that feelings have a mystical component in your argument for feelings not being electrical signals, though. Materialism wouldn’t be able to account for feelings even if they’re “present in a nerve signal”? With the implication that feelings must be some extra property tacked on to the signal rather than the signals themselves, anyone in the habit of accepting the underlying premise of the other person’s argument would probably be stumped by that. Very nicely done.

          • DigitalGalaxy Torgamous March 12, 2013 on 8:00 pm

            @ torg I don’t think I was saying anything about feeling not being present in a pain signal that any electrical engineering student wouldn’t tell you. An electric signal is an electric signal; there isn’t any free space in there for something else.

            The reason I’m so sure that qualia are nowhere to be found in the brain is that we do, in fact, know exactly what composes the brain. We have an exhaustive list of the chemicals and the biological tissues that compose it. We know exactly what sorts of electromagnetic signals the brain gives off. We know how synapses work. We even know which areas of the brain do what. The only thing we don’t know is how these synapses and chemicals are arranged to produce the resulting intelligence.

            It’s like knowing exactly what a computer chip is made of, but still being in the dark as to the chip’s organization: the arrangement of the internal logic, why it functions the way it does.

            I do start from the assumption that feelings have a mystical component. I use that assumption because of a fact: that (if I may quote Dennett) “computers have nothing up their sleeves. No telepathic connection between CPUs, no morphic resonance between disk drives.” And, as far as I can tell, computers do not exhibit feeling.

            As far as I can tell, there is nothing within our brains that cannot be digitized. Even chemical stimulation of neurons can be simulated within a virtual construct.
            So, if we have an artificial machine with no feelings…can it simulate a biological machine with feelings?

            Where do the feelings come from?

            You seemed skeptical that materialism could not account for feeling if it were within a nerve signal. All right. Let’s account for it! Let’s start our paper trail:

            You poke your finger on a pair of scissors. Ouch! What happened?

            First, nerves in your finger were stimulated by an intrusion past your skin. They activated and sent a signal to other nerves, which relayed it to your spinal cord.

            All material processes understood so far.

            Second, the signal enters your brain. This is where it gets fuzzy for the time being, but it won’t take long after we achieve nanotechnology for this all to be mapped out. What we do know at present is that some neural processing node in the brain receives the message. Appropriate intelligent action is taken, such as removing one’s finger from the scissors and finding a band-aid.

            Third: it hurts. Not only does your mind, an intelligent grouping of neurons with the ability to solve problems such as “what do I do if I hurt my finger on a pair of scissors,” solve the problem, something else happens. You experience the sensation of pain. Just exactly how does this occur? Where along our chain of processes does this pain sensation originate?

            At the point your finger touched the scissors? No, that’s just an electro-chemical signal being passed by nerves. Electro-chemical signals don’t have feelings.

            Ok, at the point where the signal is routed to your brain? No, it’s still being routed by electro-chemical signals. At the point where it is received by the processing center? Well, maybe…but what is the processing center? More electro-chemical signals? Those still don’t have feelings, or else my computer would have feelings. There’s nothing “up my computer’s sleeve,” so to speak; why is there something up my brain’s sleeve?

            Maybe it’s the chemicals and endorphins released as the pain signal is delivered. The nerve signal can stimulate chemical reactions in both the brain and the damaged tissue.

            Ok, it’s in the chemicals. So…is the endorphin in pain? I don’t think so…molecules don’t have feelings. Maybe the chemical reaction between the endorphin and the molecular receptor on the neuron is in pain. But…chemical reactions don’t have feelings, do they?

            Maybe the cell itself is in pain. But the cell is just a complex set of molecules bound together by the instructions of a strand of DNA…and we just said that molecules don’t have feelings (unless you subscribe to a particularly heavy version of animism).

            So…where is the pain? The intelligent system received a signal, and that signal caused a reaction in the system so that the problem (a hurt finger) was dealt with accordingly. The problem is not the signal getting to the brain and influencing an intelligent system. The problem is that we have something unexpected, a phenomenon that seems to have no place: pain.

            Did I miss a step? We may not understand the brain’s organization (yet), but we do understand exactly what it is composed of. Exhaustively. And, nothing in there seems to be able to do anything like produce feeling.

            Can the brain produce signals that relay important information to an intelligent system? Absolutely.

            Can those same signals produce feelings? That’s a question that has no answer. If there is a “pain feeling” within the nerve signal itself, then a materialist account has to explain EXACTLY where that “pain feeling” resides. There isn’t much room in the laws of electrical propagation for these ephemeral “feelings” joyriding along with an electrical signal.

            I think I’ve explained how I’m not just blindly inserting my spiritual views; there is a legitimate hole in our understanding at this point. It is my opinion (just an opinion!) that a spiritual explanation is a reasonable one to fill that hole. Emotions are spiritual things, and they have a different status from “stars not falling into each other.” Yes, many “losses for materialism” have been premature. But in the past, materialism never stepped into subjective experience. Subjective experience is something the scientific method is ill-equipped to handle, because it is not repeatable or measurable.

            Spiritual systems have stood in the way of progress in the past, both scientific and otherwise. I’m not arguing from that antiquated standpoint. I’m not arguing that science has no business treading on the brain; quite the opposite, I welcome the research! I’m arguing that there are things in the universe which are, in fact, transcendent, and which supersede brute cause and effect. I’m not even claiming that these transcendent things do not partially emerge from matter in some (perhaps explicable) fashion. But I am claiming that things such as emotion, qualia, spiritual experience, and God (a modern understanding of God, not the angry sky-god of ancient agrarian fundamentalism) are transcendent. Spiritual. Maybe it will turn out that “spirituality” is just another dimensional level of the sort predicted by quantum physics and string theory. Spirituality doesn’t need the “the buck stops here, cease this line of inquiry” role it had centuries ago. Spirituality can be an open line of inquiry into things that science is ill-equipped to study: consciousness, morals, emotions, and God. Transcendent things.

            The brain is a physical thing, and I have no doubt that science will get to the bottom of it before much longer. But, some things are on a different level. Qualia, emotions, and free will are some of them.

        • shaker DigitalGalaxy March 12, 2013 on 1:37 pm

          “how does this electrical signal from my pain neuron become actual pain?”

          Define “actual pain.” I don’t understand what you mean by this. It seems as if you are giving pain some “spiritual” quality that doesn’t exist.

          We have the ability to shut off your pain. We have the ability to make you never remember pain.

          “The feeling of pain has to have a location in time and space under materialism, and it doesn’t. That’s what makes the brain different from a laptop; not that it is made of atoms, but that it contains things like feelings”

          Everything that you “feel” is inside your brain. I don’t understand what you are actually saying. Please explain.

          • DigitalGalaxy shaker March 12, 2013 on 8:08 pm

            @ shaker Think of it this way. A robot has touch sensors.

            You smack that robot as hard as you can with a hammer.

            Did it hurt?

            Obviously the robot received input from its touch sensors, and they conveyed a signal to its CPU to be parsed by its programming. It could take any number of actions, from running away to doing nothing, depending on how it is programmed.

            But, did the robot feel any pain?

            If you think that the robot did feel pain, how did the pain come about? The signal from its sensors doesn’t have any pain in it. The CPU is just a bunch of electric signals, and they can’t feel pain. So, where did the pain come from?

            The contention is that our brains are, at the end of the day, no different from computers. So where does our pain come from? If everything we feel is inside our brain, then how do electric “damaged!” signals produce a literal feeling of pain?

            Does that explain it better? I’m trying to say that actually feeling pain from a “damaged” signal means we have something spiritual inside of us, something you could not copy even if you copied all the electrons and neurons in the brain.
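            If it helps, here is a toy sketch of what I mean (hypothetical Python, nobody’s actual robot software): the “damage” signal is parsed and acted on, and you can point to every step, but nowhere in it is there a place for the hurt:

              def on_touch_signal(force_newtons):
                  # parse the sensor reading and pick a programmed response
                  if force_newtons > 50:    # threshold for a hammer-class blow
                      return "retreat"      # evasive action, per programming
                  return "ignore"

              action = on_touch_signal(120)  # the hammer blow from the example
              # 'action' is now "retreat": input received, response chosen.
              # Every step is accounted for, and none of them is a hurt.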

    • Herbys DigitalGalaxy March 18, 2013 on 12:54 am

      > I’m certainly not appealing to ancient religious doctrines to explain these phenomena, but materialism has reached the end of its leash.

      Why? After barely a century of analyzing the mind, we are beginning to make the first real advances. And less than a decade after those first advances, you claim “we have reached the end of this avenue!” It makes absolutely no sense. There’s NOTHING that proves or even hints that we have reached the end of this road. We are beginning to understand how the mind works. We have even begun to understand consciousness (yes, we have; go and look it up, there are plenty of papers that provide meaningful, if limited, explanations), so what you say can’t possibly happen is already happening.
      We are past the spiritual mumbo jumbo. There’s nothing to indicate there’s more to our minds than our brains. And with enough hardware and good software (and the ability to scan a brain atom by atom, which might be beyond our abilities for a very long time), our brains can be simulated.
      If you want to think there’s a spiritual soul that makes us more than animals, take it to church. That doesn’t belong in science discussions.

      • DigitalGalaxy Herbys March 18, 2013 on 5:38 pm

        Oh, but I think there is something to indicate we have reached the end of the avenue. We understand exactly what comprises the brain, down to the chemical level. We understand what makes up neurons, what chemical processes are involved in the firing of synapses, and what brain waves are electro-chemically composed of. We even know which areas of the brain correspond to which computational tasks!

        The only thing we don’t know about the brain is how to copy the existing brain waves, and the only reason we don’t know that is that we don’t have nano-machines. We probably will in under a decade.

        What none of this even begins to account for is consciousness, feeling, qualia, or emotion. These things need material causes to be explained by science, and we have already used up all the material causes in the brain.

        It is easy to make a program that simulates emotion: move variable X up by Y amount, and the higher X gets, the more likely the program is to respond with Z. But that’s not true emotion. The program doesn’t feel angry or sad or happy because its anger or sadness or happiness variable went up. If it did, we would know about it, because there would be some change in the program. The feeling is a separate thing from the given response, and as such, it needs a location in space and time.
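        A minimal sketch of the kind of fake emotion I mean (deliberately crude, and the variable names are just placeholders):

          import random

          anger = 0.0                       # "variable X"

          def provoke(amount):
              # "move variable X up by Y amount"
              global anger
              anger = min(1.0, anger + amount)

          def respond():
              # the higher X gets, the more likely response "Z"
              return "snap" if random.random() < anger else "stay calm"

          provoke(0.8)
          print(respond())                  # usually "snap", yet nothing here feels angry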

        I am not trying to derail a science discussion with spirituality; I’ve intentionally left my definition of spirituality vague and non-committal, to avoid derailment. What I’m trying to discuss is the limits of science, or more accurately the limits of materialism, and I do believe that has a legitimate place in a scientific discussion. If you do not believe these are legitimate topics on a science forum such as SH, please state why. If I have engaged in “mumbo-jumbo,” or cited religious doctrine instead of scientific fact, please point out where I have done so, because I have endeavored not to inject such talk into this forum, where it would not be constructive.

        I believe that it would be better if science and spirituality could work together, in their own spheres of influence, instead of at odds. For that to happen, respectful communication is required.

        Remember that, long ago, science and spirituality used to be the same thing. Only the priests were astronomers in ancient times, and it was monks who discovered the earliest genetics. Much like astronomy and meteorology used to be the same thing, science and religion were one entity.

        Now, astronomy and meteorology are separate disciplines. So, too, science and spirituality are separate disciplines, each with a legitimate area of inquiry. Modern spirituality avoids mumbo jumbo and focuses on transcendent human experience. I won’t deny that many older forms of spirituality do contain lots of mumbo jumbo, but they derived from an agrarian, pre-scientific past; you can hardly blame them for that.

        It is my opinion that the question of who we are, and what our minds are in their entirety, will not be answered simply by using our scientific tools to dissect the brain synapse by synapse. Synapse firing does not account for emotion, feeling, or qualia. We will need to use our modern spiritual tools, as well as our modern scientific tools, to explore the frontier of the human mind.

        If you disagree with my opinion, then please answer the question: how does a machine feel? We have all the puzzle pieces before us, all the material causes already laid out. In that hundred years, we have discovered the building blocks of the material mind. All the building blocks. All the building blocks, and no answer. That is a scientific basis for a scientific criticism. Don’t let me get away with using religious doctrine in place of evidence. If I have done so, tell me where and I will either clarify the statement or discard it.

    • Kyle McHattie DigitalGalaxy March 20, 2013 on 12:55 pm

      Again, ignorance is not proof of anything. Just because we can’t understand consciousness now doesn’t prove it is unknowable.

      “There is no mechanism, neurological or otherwise, in the brain the could conceivably be involved in the production of consciousness or qualia. Consciousness is not something you can weigh, point to, or quantify.”

      You state this as fact when it’s actually a collection of assumptions. You can’t know that consciousness is not weighable, locatable, or quantifiable. Perhaps the means to weigh, locate, and quantify consciousness have yet to be discovered.

    • Blair Schirmer DigitalGalaxy December 3, 2013 on 3:58 pm

      “Consciousness is not something you can weigh, point to, or quantify.”

      Well, you can certainly point to it via real-time MRIs, and to a meaningful degree its manifestations are quantifiable. And we’ve only really been looking at the substrate of consciousness for half a century or so. Perhaps you meant something else…?

    • Facebook - prodromos.regalides DigitalGalaxy December 27, 2013 on 4:14 pm

      “There is more to the mind than simple brain chemistry.”

      I would certainly hope so. But look, reality shows quite the opposite.
      You destroy one part of the human brain and you can’t see; another, and you can’t hear;
      something else, and you understand but cannot speak;
      something else, and you can’t understand but can speak;
      something else, and you can’t remember an exact time period.
      It doesn’t seem like there is a commanding spirit that uses our brain as a vessel to express itself. It certainly seems that we are like lowly machines whose functions depend on a specific material substrate.

  • Jan-Willem Bats March 10, 2013 on 3:23 pm

    Nicolelis is full of shit. His arguments are lame. He claims outright that you can’t compute the human brain. But Markram was already simulating cubic millimeters of rat brain in 2008.

    The non-believer Nicolelis is the biggest believer.

    Because the belief that computing the human brain is not possible is no longer grounded in reality and cannot be backed up by rational argument.

    It takes faith to believe something is not possible. Especially in this day and age.

    • Architect Jan-Willem Bats March 10, 2013 on 4:05 pm

      “Always listen to experts. They’ll tell you what can’t be done and why. Then do it.”

      Robert Heinlein

  • palmytomo March 10, 2013 on 4:00 pm

    I’m still with Kurzweil. Nicolelis’s perceptions do not logically refute the singularity concept. He seems to be suffering from the same human vanity people showed when Copernicus embarrassed humanity by saying the universe did not revolve around the Earth, or when B. F. Skinner showed our behaviour is usefully predictable in terms of heredity and environment (as far as we know the details of those) rather than ‘noble, magnificent’ freedom of choice. The pattern-recognition model of the brain seems very useful to me. I read de Bono’s ‘Mechanism of Mind’ years ago (it gave me very useful understandings I’ve used ever since). I might now also buy Kurzweil’s ‘How to Create a Mind’.

    • Blair Schirmer palmytomo December 3, 2013 on 4:01 pm

      Kurzweil’s book is worth picking up, though it’s much too vague – lacking even a small sample of actual programming – to be truly interesting or engaging.

      The endnotes are valuable, fwiw.

  • Jim Mortensen March 10, 2013 on 4:38 pm

    I remember just a few years ago when Scientific American and various scientists said nanotechnology was a bunch of “hot air.” Those who say “never” are full of shit. In a materialist world all things are possible.
    Jim

  • Austin Parish March 10, 2013 on 6:09 pm

    Both Kurzweil and Nicolelis are misguided. Kurzweil’s theory of mind is a drastic oversimplification (the problem is much harder than he thinks it is). But Nicolelis’s objections are only counter to a particular strain of “singularitarianism.”

    We do not need to understand intelligence in order to create intelligence. Furthermore, we *really* do not need to understand consciousness to make intelligence – you can make an intelligent system without requiring that it be conscious. See e.g. http://intelligence.org/files/IE-EI.pdf

    The argument that the brain is not computable is weak. If qualia are magic and we can’t simulate magic, who cares? As said above, we don’t need to simulate qualia to make intelligence. Perhaps we can’t make intelligence because it relies on quantum mechanics that we don’t understand. I find the evidence for the brain being a distributed quantum computer to be very weak (see e.g. http://www.theswartzfoundation.org/papers/caltech/koch-hepp-07-final.pdf). But even if intelligence does require quantum mechanics for some reason, this does not mean that it cannot be computed; it just means we have more to figure out first.

    • DigitalGalaxy Austin Parish March 10, 2013 on 9:42 pm

      Intelligence is coming along in leaps and bounds. Soon we may have computers more adept at problem solving than we are ourselves. Consciousness/qualia is the big question that may not be so easy to solve by brute-force computation.

      • seo-young DigitalGalaxy March 10, 2013 on 11:11 pm

        Hi. This has already been done… the brute-force ones are all implemented.

        • DigitalGalaxy seo-young March 11, 2013 on 12:54 am

          Oh yes, brute-force intelligence is certainly already something we use today. I was talking about being unable to brute-force compute qualia!

          • shaker DigitalGalaxy March 14, 2013 on 10:34 am

            @digital.

            You asked how we feel “actual” pain, and whether a robot can feel this.

            We don’t feel actual pain. It is simply a sensation that can be turned on/off.

            You said a robot will have a sensor, and asked where the pain comes from. It seems you are oversimplifying human pain, and therefore you are unable to replicate it in a robot.
            I am not a doctor, but I do know we have many layers that lead to pain. If a “sensor” is damaged in a human (the sensor being a cell), then our brain knows there is damage and continues to send another part of the brain a signal saying “beware, there is damage.” That is why we can take “painkillers”: painkillers don’t get rid of the “damage,” they get rid of the sensation of pain.

            I really have no idea why you are bringing some “spiritual force” into this discussion.

            If you had a broken arm, and you cut off that arm, you would no longer feel the pain of that broken arm. If I picked up your broken arm, I would not feel the pain of that broken arm. There is no magical/mystical/spiritual pain going on inside that arm!

          • DigitalGalaxy DigitalGalaxy March 16, 2013 on 7:10 pm

            @ shaker

            You made a very confusing statement:

            “We do not feel actual pain.”

            Of course we do! Hit your arm on something and it will hurt! Those “damaged” signals going to your brain are only electro-chemical impulses. Your brain receives them just as any computer could receive electro-chemical impulses. But why do they hurt?
            Computers don’t hurt when they receive “damaged” signals, so why do we? What is different about our brains such that we feel pain and a computer feels nothing?

            Maybe I’m not being very clear.

            Painkillers do not stop pain; they stop “damaged” signals from moving along your nerves. No “damaged” signal ever reaches your brain. So how does a “damaged!” signal hurt, as an actual sensation? Sensation is different from data. “Damaged” signals are biological data. Pain is a sensation. The two go together, but there is no real account of how the signal transforms into sensation.

            The question is, why does a “damaged” signal feel like pain? Why isn’t it just a piece of data that lets the brain know it is damaged?

          • DigitalGalaxy DigitalGalaxy March 17, 2013 on 12:18 am

            @shaker Or maybe I can state my objection in a more concise way (maybe I am just being too wordy and muddling my own responses!):

            Why do “damaged” signals hurt at all? They should just inform the brain of damage, the same way a computer signal does. But they do more than that. How and why?

          • Blair Schirmer DigitalGalaxy December 3, 2013 on 5:52 pm

            ‘The question is, why does a “damaged” signal feel like pain? Why isn’t it just a piece of data that lets the brain know it is damaged?’

            Fascinating questions. As someone familiar with chronic pain syndromes, I can tell you the body is terribly poorly designed in this regard. So often, and long, long past the point where it does any good at all, pain signals persist, destroying the quality of life of sufferers for whom death is the only off switch.

            Indeed, chronic pain by itself refutes any idea of the human body as ‘intelligently designed’.

            It *should* be a piece of data, a simple relaying of information. Instead, it often signals damage by creating more damage, indeed by distracting the injured from attending to their injury. We can do much better when we design computers to note damage to their systems. To be sure, we could not do worse in this regard than nature does (if you take my meaning of that word).

  • Ian Kidd March 10, 2013 on 7:23 pm

    Sounds like someone is scared of being made redundant more than anything else.

  • rtryon March 10, 2013 on 8:14 pm

    I realize that my words lack the intellectual scientific credentials of the erudite writers tracking this thread. But I thought to ask: do any of you consider ‘motivation’ as a mental factor in this equation? It is a Gestalt-like value found in humans who are sometimes observed working with greater determination, and greater success as a result, often without having a Mensa-quality IQ. Does Kurzweil think that AI can enjoy a similar capacity? Or is it just an expected need of science to ignore things that can’t be measured?

    • DigitalGalaxy rtryon March 10, 2013 on 9:39 pm

      Your words are fine here! Singularity Hub isn’t just for those with the “credentials!” :)

      Where “motivation” comes from is an excellent question! Computers cannot “motivate” themselves to perform any better (or de-motivate themselves to perform worse!) than any other computer. It could be argued that motivation is in part a reaction to chemicals in the brain, but that partly misses the point. What causes those chemicals to be produced? Emotional response? Well, now our question has just shifted. What causes emotional response? Chemicals? How does the chemical that makes me happy or motivated actually make me feel happy?

      Where is the happy? Is the chemical itself happy? No, that’s silly. Is the chemical reaction when the molecule binds to the cell receptor “happy”? Doesn’t seem like it. So, the reaction must stimulate the larger brain system to be happy. But, how? Where is the happy signal? Is the signal itself actually happy? Where in time and space is the emotion located?

      The answer to your question is that you answered it yourself: science tends to ignore things that cannot be measured. But soon we will be able to probe every nerve in the brain, and we WILL be able to measure higher brain functions. What happens when we “probe” somebody exhibiting what you mentioned: highly motivated, heightened brain function that seems to push them over their normal intellectual capacity? I think it’s a question that deserves more discussion!

      • shaker DigitalGalaxy March 12, 2013 on 1:52 pm

        Very interesting stuff.

        A computer would never need to be motivated, because it already operates at full capacity. A human does not. If we actually wrote a program to “throttle” a computer brain, then we would have to write other programs to motivate it.
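        A toy sketch of that throttle-then-motivate idea (purely hypothetical, just to show what I mean):

          class ThrottledWorker:
              def __init__(self):
                  self.motivation = 0.2            # fraction of capacity in use

              def motivate(self, boost):
                  # a "motivation program" nudges the throttle upward
                  self.motivation = min(1.0, self.motivation + boost)

              def work(self, tasks):
                  # only a motivation-sized share of the tasks gets done
                  done = int(len(tasks) * self.motivation)
                  return tasks[:done]

          worker = ThrottledWorker()
          worker.motivate(0.5)
          print(len(worker.work(list(range(10)))))   # 7 of 10 tasks get done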

        To me, a brain is just 100, 1,000, or only God knows how many mini computer programs fighting for domination. They have to arrange themselves in a hierarchy, because ultimately they all occupy the same space and need to survive.

        My eating program takes precedence over my watching-sports program at some given point.

        I am a very spiritual, God-fearing man. However, I don’t really buy all the “spiritual” arguments people make. The brain is an input-output machine. Currently, we just don’t have any way of knowing where the input is going and where it is being stored. As DigitalGalaxy pointed out, we will eventually know.

        Some here bring “qualia” into the picture. I don’t even see how that is relevant. Qualia does not equal feelings, nor does it have anything to do with consciousness. In fact, what exactly is consciousness?

        If a robot were self-aware, would it be conscious?

        • DigitalGalaxy shaker March 12, 2013 on 8:27 pm

          “If a robot were self-aware, would it be conscious?”

          Good question!

          I don’t think so. A robot can have a program that sees its own limbs or whatnot, but does that robot actually see anything? To put that in perspective: turn on your cell phone camera, then cover up the screen. Optical signals are going into the phone. You don’t see them, because your hand is over the screen. Who, then, is seeing the optical signals?

          Nobody.

          Daniel Dennett has a similar example with a robot called “Shakey”. His team was driving the robot around a room and letting it avoid obstacles placed in its path. Somebody accidentally unplugged the monitor, and they realized that Shakey the robot wasn’t seeing what they were seeing on the monitor, but it was still avoiding the obstacles.

          • Kyle McHattie DigitalGalaxy March 20, 2013 on 1:05 pm

            I think you are confusing self-awareness with sensory awareness. “Self-aware” is generally used to describe awareness of self as ego, as something separate from the construct in which the self resides. “Sensory awareness” describes what you are talking about when you mention cameras and optical signal paths. Just because they gather data doesn’t mean they show any proof that there is a central consciousness.

        • rtryon shaker April 9, 2014 on 1:08 pm

          A computer needs to have a monitor function that tells it when the current subject has become less important than other inputs in need of thinking time, and perhaps a response, or none.

          Where does it get the ‘free will’ that God seems to have given each of us, as we saw yesterday in “God’s Not Dead” among young college philosophy students, only one of whom refused the professor’s order? The film plays out the professor’s hidden hatred of God, caused by not understanding why God, he chose to conclude, let a loved one die of cancer. Watch the flick and apply its lesson to this discussion.

      • shaker DigitalGalaxy March 17, 2013 on 5:26 pm

        “Of course we do! Hit your arm on something and it will hurt! Those ‘damaged’ signals going to your brain are only electro-chemical impulses. Your brain could receive them like any other computer could receive electro-chemical impulses. But, why do they hurt?
        Computers don’t hurt when they receive ‘damaged’ signals, so why do we? What is different about our brains such that we feel pain and a computer feels nothing?”

        because we have chemicals in the brain that tell us that we are in pain, just like they tell us we are experiencing pleasure. the impulses going from your nerves are just one part of the “experience”

        there is no such thing as “pain” or “pleasure”. it is all a chemical experience. You could be on fire while smoking crack and you wouldn’t feel anything (exaggerating to make a point).

        the reason the robot doesn’t feel “actual” pain is that no one has written a program for it to feel pain. but that doesn’t mean it can’t be written.

        when you hit your funny bone you are not feeling pain. you are feeling the sensation of pain.

        you are bringing way too much philosophy into the conversation.
        please read this. it is a simple explanation of how pain is produced and interpreted. there is no “magic” going on.

        http://pain.about.com/od/whatischronicpain/a/feeling_pain.htm

        • DigitalGalaxy shaker March 18, 2013 on 8:56 pm

          @shaker

          Yes, maybe I am bringing too much philosophy into the discussion! Maybe that’s the problem. Philosophy is definitely not for everyone; it requires a different way of thinking. If it’s not to your liking that’s fine, I’ll stop. (I should know, math is not to my liking; you could spell out a quadratic equation to me all day long and I still wouldn’t get it!) But, this is a philosophical problem that requires philosophical answers! If you don’t mind a little more philosophy, I’ll keep going. If it’s getting to be too philosophical, just let me know and I’ll quit it! :)

          But for now, I’ll be more philosophical!

          First! What is the difference between “pain” and “the sensation of pain”? What makes the two of them different?

          The pain signal is “data”. The pain sensation is “feelings”. How do you translate from “data” into “feelings”?

          When you say things like “there is no such thing as pleasure or pain”, you are saying things that are hard to understand. Of course things like pain are correlated to electrical signals, but that does not mean that ‘pain’ does not exist! Why should an electrical signal cause a sensation at all instead of simply providing information?

          I did read the article, and found it quite complete on the signal paths, which are not the problem, and quite lacking in addressing the fundamental issue. Allow me to quote:

          “Even though the spinal reflex takes place at the dorsal horn, the pain signal continues to the brain. This is because pain involves more than a simple stimulus and response. Simply taking your foot off the rock does not solve all of your problems. No matter how mild the damage, the tissues in your foot still need to be healed. In addition, your brain needs to make sense of what has happened. Pain gets catalogued in your brain’s library, and emotions become associated with stepping on that rock.

          When the pain signal reaches the brain it goes to the thalamus, which directs it to a few different areas for interpretations. A few areas in the cortex figure out where the pain came from and compare it to other kinds of pain with which is it familiar. Was it sharp? Did it hurt more than stepping on a tack? Have you ever stepped on a rock before, and if so was it better or worse?

          Signals are also sent from the thalamus to the limbic system, which is the emotional center of the brain. Ever wonder why some pain makes you cry? The limbic system decides. Feelings are associated with every sensation you encounter, and each feeling generates a response. Your heart rate may increase, and you may break out into a sweat. All because of a rock underfoot. ”

          This section of your article is a good example of how a materialist response is scientifically accurate, but just glosses over the main problem.

          ‘Feelings are associated with every sensation you encounter’

          how?? What are feelings? The article just says “feelings”, it does not say what they are or where they come from or how they are produced. You mentioned they are chemicals. Ok.

          I poke my finger. A pain signal goes from my finger, to my spinal cord, to my brain, where it is routed to the appropriate processing center. This signals my brain to produce a “pain chemical”.

          Pain chemical is produced. Ok. Now, where is the pain? What hurts?

          Is the chemical itself in pain? No, I don’t think molecules feel pain.

          Is the chemical reaction where the pain molecule binds to the neuron in pain? I don’t think so; chemical reactions don’t feel pain.

          Is the neuron in pain? How can it be, we already agreed that there is no pain until the signal reaches the brain, and the neuron is just passing the signal along!

          So, is the neural net in pain? That seems a little bit like saying “the supercomputer is in pain”. The 7 petaflop supercomputer is faster and more complex than my brain. Do you think it can feel pain? I don’t. Do you think anyone can write a program that the 7 petaflop supercomputer could run to make it “in pain”? I don’t.

          So, where is the pain? Every part of a computational system needs a location in the computer. If pain is in my brain (just another computational system), where is the location? You agreed that pain was caused by a chemical. So, where is the pain? Not in the chemical molecule, not in the chemical reaction taking place at the cell wall, and not in the cell itself. So, where is it?

          Where does the “buck stop”, so to speak? If you are going to say “pain is in the brain”, you need a specific location for it. If you have no location, then you can’t start insisting there is no “magic” (I do not say “magic”, I say there is spirit).

          Let me go through the list again. We have these 4 things in the brain:

          1. chemical signals (chemicals can’t feel pain)
          2. electric signals (electric signals can’t feel pain)
          3. neural cells (individual neural cells can’t feel pain)
          4. a neural computer (computers can’t feel pain. They can receive information about damage, but they can’t feel any pain or sensations of pain.)

          You said computers can’t feel pain because nobody wrote a program for feeling pain. How do you write a program to turn the sensation of pain into computer code? This is different than writing a program to assess damage; I can feel pain without assessing any damage, or I could assess damage even if I was taking a painkiller.

          Your example of “you could smoke crack and not feel any pain if you were on fire” is just like a painkiller. Crack might inhibit your nerve responses in your brain, so the “damage” signal does not get through to its appropriate processing center, or the center is disrupted by a drug.

          The question is, what happens after the signal is processed so that it turns from data (an electric signal) into pain (a sensation)?

          If this is too much philosophy I will stop, but it would be interesting to hear your answer!

          • shaker DigitalGalaxy March 19, 2013 on 9:45 am

            Hello Friend,

            it is not that I dislike philosophy. I just don’t think it has anything to do with our main discussion.

            What you “think” is pain, pleasure, hunger, sadness, etc., is simply a signal, as far as my understanding goes.

            it is like asking “what is blue?” There is no such thing as blue. It is simply a collection of atoms that when combined = blue, or != every other color aside from blue.

            this is what everything is. Your brain is creating a “sensation” of pain. There is no such thing as pain. Just because we don’t fully understand the entire process does not mean something magical is going on. It seems that every time science has not understood a process we have always thought something magical is going on.

            I am not a neuroscientist. I am sure they can give you a much better explanation as to how chemical signals are giving you a sensation of pain.

            asking “where is pain located?” is like asking “where is a thought located?” The brain distributes everything. Feelings, thoughts, etc. are not located in one specific part of the brain; it is lots of parts working in concert.

            Same thing with consciousness. Some will say that there is no such thing as consciousness. Lots of neurons working in concert equals consciousness.

            to me this makes sense. Because in the universe there is no such thing as anything. There is no Sun, moon, flowers, colors, atoms. There is NOTHING. Energy collects, giving us the sensation that there is something.

            The sun is a collection of atoms and atoms are a collection of particles and particles are a collection of who knows what and so on and so on.

          • Kyle McHattie DigitalGalaxy March 20, 2013 on 1:14 pm

            ok, I had to stop reading about halfway through. I see what you are saying, @digitalgalaxy. You think there is some other component to pain than just neural/chemical stimulus and response. Why? You said: “I poke my finger. A pain signal goes from my finger, to my spinal cord, to my brain, where it is routed to the appropriate processing center. This signals my brain to produce a ‘pain chemical’.

            Pain chemical is produced. Ok. Now, where is the pain? What hurts?”

            Your brain interprets that it is your finger that hurts because that is where the damage and the stimulus are coming from. Why overcomplicate it? We can analyze every aspect of the electrical and chemical pathways, and scientists have already done so. In essence, pain is simply a chemical response in the brain to damage done to cells. It’s hardwired. We may not know the entire system yet, but we can approximate it, and we can create a similar system virtually.

            Which leads me to believe that there is no reason why, as we deepen our understanding, we can’t exactly duplicate it at some time in the future.

            Same goes for the brain and the consciousness it creates.

        • rudyilis shaker March 19, 2013 on 10:28 am

          @shaker

          I want to reply to your last post, but I don’t see a “reply” link.

          “I just don’t think [philosophy] has anything to do with our main discussion.”

          Philosophy of the mind is the field that deals with the nature of mind and why it exists. We could just be non-conscious machines (philosophical zombies) that react in all the same ways to the world, but have no subjective experience (i.e. our brains would react to “blue” light but there would be no experience of “blue”). But we’re not. We do in fact have a mind. Philosophy has everything to do with the discussion at hand because if we build a computer that behaves like a human being, is it just a complicated set of dominoes receiving inputs and producing outputs, or is it experiencing emotions? We have algorithms that can write music. At what point is the algorithm able to enjoy the music the way we can? People feel differently about getting rid of a broken car vs putting down a sick dog because we assume the dog is conscious while the car is not. If our computers behave in a conscious way, there’s suddenly an ethical dilemma in how to treat them (like a car or like a dog?). That’s why philosophy of the mind applies to AI.

          “There is no such thing as blue. It is simply a collection of atoms that when combined = blue”

          Blue is not a collection of atoms (to be technical, color sensation is triggered by electromagnetic radiation, which is photons; but the photons are not blue either). Blue is a subjective experience occurring inside a mind. The same collection of atoms does not produce “red” inside every human being (because of color blindness); therefore “red” isn’t a collection of atoms or photons. Red and blue are qualia. Materialists will argue that qualia are an emergent property of neuronal networks, so you’re right when you say “the brain distributes everything. Feelings, thoughts, etc. are not located in one specific part of the brain; it is lots of parts working in concert.” However, DigitalGalaxy is addressing the question of why electrochemical signals turn into qualia.

          “I am not a neuroscientist. I am sure they can give you a much better explanation as to how chemical signals are giving you a sensation of pain.”

          No one knows how chemical signals give a sensation of pain. That’s the “hard problem of consciousness”: why do chemical signals turn into subjective experience?

          “Because in the universe there is no such thing as anything. There is no Sun, moon, flowers, colors, atoms. There is NOTHING. Energy collects, giving us the sensation that there is something.”

          That’s pretty radical. A collection of hydrogen and helium atoms can be dispersed and form a nebula. The exact same number of atoms arranged into a star is very different from a nebula. A pile of C, H, N, O, P, and S is dust and gas. The exact same number of atoms arranged differently is a bee pollinating a flower. Unless we get into consciousness causing quantum collapse, large macro structures formed from smaller parts interacting do in fact exist. You say nothing exists, then you say energy gives us sensations. Those two statements contradict each other. The energy is something that exists, and whatever the energy gives sensations to has to exist as well.

          • shaker rudyilis March 19, 2013 on 2:47 pm

            this is a good discussion.

            my question is “how do any of us know we are truly feeling anything?”

            we could just have lots of algorithms telling us that we should feel X or Y depending on the situation.

            One can argue that all humans are “relatively” the same. Each brain is just “tuned” slightly different. Some people might have more happiness in their brain model, some more anger, etc.

            it seems to me that we must have some type of “emotional spectrum” built into each of our brains. Some people just have more or less of each emotion, and therefore take different actions based upon the stimuli they receive.
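
            As a toy illustration of that “tuning” idea (Python; the numbers and names are invented, not a model of real brains):

              # Two hypothetical people sharing one "emotional spectrum",
              # each tuned slightly differently.
              baseline = {"happiness": 0.5, "anger": 0.5}
              person_a = dict(baseline, happiness=0.8)  # tuned toward happiness
              person_b = dict(baseline, anger=0.7)      # tuned toward anger

              def react_to_insult(profile):
                  # same stimulus, different action, purely from the tuning
                  return "snap back" if profile["anger"] >= 0.6 else "shrug it off"

              print(react_to_insult(person_a))  # shrug it off
              print(react_to_insult(person_b))  # snap back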

            I think the “qualia” you are talking about is just a very good algorithm that is part of some feedback loop that tricks the mind.

          • Torgamous rudyilis March 19, 2013 on 5:08 pm

            “We could just be non-conscious machines (philosophical zombies) that react in all the same ways to the world, but have no subjective experience (i.e. our brains would react to “blue” light but there would be no experience of “blue”). But we’re not. We do in fact have a mind.”

            The probability of an advanced civilization coming from something like that is slim enough that it’s not worth seriously considering. It’s conceptually possible that evolution could, using solely Roomba-style programming, produce something with an instinct set sufficient to construct a globe-spanning industrial civilization, but it’d probably take something on the order of the lifespan of the universe in the absence of an intelligently regulated environment.

            “if we build a computer that behaves like a human being, is it just a complicated set of dominoes receiving inputs and producing outputs, or is it experiencing emotions?”

            What’s this “or” business? The latter is a subset of the former.

            ” If our computers behave in a conscious way, there’s suddenly an ethical dilemma in how to treat them (like a car or like a dog?).”

            The invention of AI isn’t going to make it impossible to produce the same kinds of computers we have now, and an AI won’t have to think like a human does any more than OI’s do. With that in mind, why the hell would we make anything that’s supposed to be thrown away self-interested?

            “However, DigitalGalaxy is addressing the question of why do electrochemical signals turn into qualia?”

            And both of you have repeatedly failed to consider that electrochemical signals don’t need to “turn into” qualia any more than electrical signals turn into math. You are the set of electrochemical signals within a particular brain while it’s awake. Specific experiences are each a subset of those electrochemical signals. There is no point in the process where any electrochemical signals are converted into some kind of metaphysical manifestation of Feelings.

          • rudyilis rudyilis March 19, 2013 on 5:51 pm

            @Torgamous

            “And both of you have repeatedly failed to consider that electrochemical signals don’t need to ‘turn into’ qualia any more than electrical signals turn into math.”

            A materialist model proposes that consciousness and subjective experience are epiphenomena that emerge from neuronal interaction. But if electrochemical signals don’t turn into qualia, then what does cause qualia to manifest? Philosophers like Daniel Dennett, as well as the connectionist model in neuroscience, propose that consciousness arises out of a certain level of neuronal complexity. So the specific signals don’t “turn into” qualia like you say, but the brain as a whole manifests it. How exactly the brain does that isn’t understood at the moment.

            “It’s conceptually possible that evolution could, using solely Roomba-style programming, produce something with an instinct set sufficient to construct a globe-spanning industrial civilization”

            The point I’m trying to make is at some level of complexity subjective experience manifests inside intelligent systems (vertebrate animals in the case of Earth; yes I’m assuming other vertebrates are conscious). ‘Why does subjective experience occur inside interacting atoms instead of those atoms just being a Roomba?’ is an interesting question. Although, according to neuroscience, we’re deterministic machines with no free will, so evolution has produced an industrial civilization with Roomba programming. The interesting thing is why the Roombas have subjective experience at all.

            “The invention of AI isn’t going to make it impossible to produce the same kinds of computers we have now,”

            I agree. We make all kinds of different machines.

            “an AI won’t have to think like a human does any more than OI’s do.”

            I definitely agree with that. I think ant colonies, mycorrhizal fungi networks, and other non-human species are better things to imagine when it comes to AI than humans. An ecosystem of different levels and forms of machine intelligence (or optimization processes) interacting makes more sense to me than a race of mechanical human doppelgangers.

            “With that in mind, why the hell would we make anything that’s supposed to be thrown away self-interested?”

            Well, researchers are building simulations of the human brain in order to test drugs and study mental illness in a computer before repeating the same techniques in the physical world. Since you say, “The latter is a subset of the former,” does that mean a simulation of the human brain that we experiment on is having a human experience? Is Watson having a subjective experience of reality? What happens when IBM needs to scrap that system and replace it with a new one? Check out the Thomas Nagel essay, “What is it like to be a bat?” He argues that we don’t know how to tell if something is having a subjective experience by examining the objective pieces.

            There are people for the ethical treatment of animals, but no one cares about treating cars ethically because we assume animals are having a subjective experience and cars aren’t. Since subjective experience clearly manifests at some level of complexity (it happens in us), understanding when it appears in a computer system is an important question.

          • Torgamous rudyilis March 20, 2013 on 3:49 am

            “Although, according to neuroscience, we’re deterministic machines with no free will, so evolution has produced an industrial civilization with Roomba programming.”

            I chose Roomba as a specific example for a reason. The thing about Roomba is that it doesn’t have a central control unit that stores and interprets any information, so no matter how many times it bumps into your walls and furniture, it isn’t going to develop a model of your room to speed up the process next time. It just has a set of algorithms that are sufficient to get it to clean the whole room without ever knowing what the room looks like. According to neuroscience, this is very much not how humans operate.

            “Is Watson having a subjective experience of reality?”

            This is entirely possible, though we’re still at the point where any experiences it might have are incredibly simple and mildly eldritch. What I can say with relative certainty is that it isn’t going to care if we scrap it, because IBM didn’t include self-preservation, pain, or a desire for freedom in its program. That’s what I mean by “self-interested”. Subjective experience alone isn’t going to make anything care about dying, improper treatment or what we’d call “slavery” if it were a human in its position. You need the particular kind of subjective experience that most complex animals have for ethical treatment to be different from doing what you want.

          • rudyilis rudyilis March 20, 2013 on 6:18 am

            @Torgamous

            Ok, I shouldn’t have kept using the Roomba as an example. However, is subjective experience going to arise by default inside anything capable of making a model of the room? Is that experience something that always happens in the kinds of systems capable of building planet-wide civilizations? Could a species react to electromagnetic radiation without actually perceiving it as color or anything else? I’m not claiming to have an answer to that. I’m just trying to point out that understanding how a pain signal travels as an action potential along a neuron doesn’t explain why the subjective experience of pain occurs.

            We don’t have a detailed explanation for why subjectivity arises inside certain systems. David Chalmers coined the phrase “hard problem of consciousness” to frame the question. Neuroscience explains how the brain works on a mechanical level of atoms and the forces around them deterministically interacting. Nothing in that model makes it apparent that subjectivity should also occur. A system can react to electromagnetic radiation without the subjective experience of color being necessary. And yet, for some reason, we experience color.

            People will argue that once we have a better understanding of the brain, the occurrence of subjectivity will make sense within a materialist paradigm. Others claim that we have an incomplete model of the universe and need to discover new properties (I’ll lump spirituality as well as new laws of physics under this) to understand why consciousness/subjectivity/qualia occurs (I’m using those terms as synonyms. I apologize if I’m using them incorrectly).

            I’m not arguing in favor of any side at the moment. I’m just trying to point out that our models of action potentials, neurotransmitters, and network theory can explain how a biological system behaves physically, but don’t make it clear why that system should also have a subjective experience. If that experience arises naturally as an emergent property, it’s interesting to understand why that happens and it’s practical to understand if we attempt to construct artificial minds.

            Again though, explaining in detail how action potentials move around neurons and cause the system to react to and store information about damage doesn’t explain why the system has a subjective experience of pain.

  • seo-young March 10, 2013 on 11:09 pm

    The complexity of human language is such that one can only wonder. Words are not logical objects. I think that each word has a life like a virus that infects human populations. Real computing happens within a group of people. So AI can understand language only if it can be infected like us. Once AI can do this, it has no problem becoming indistinguishable from a human; that, so to speak, is the goal. However, it is like modeling a swarm of a million molecules, and so on. It is in fact not doable in any foreseeable future unless we build something like a quantum computer with a million entangled qubits. I think that people underestimate the complexity. Remember 2008 and financial math?

    • DigitalGalaxy seo-young March 11, 2013 on 12:50 am

      I think you are talking about the meaning in language as opposed to the code in language. When I say “table”, is that a word that has “meaning”, a meme that can “reproduce” inside a mind? Or is it just a code for a table? A code that humans use to communicate? It is an interesting thought!

      • dobermanmacleod DigitalGalaxy March 11, 2013 on 1:49 am

        For simplicity’s sake, just use Plato’s World of Ideas: every noun has an ideal representation in the alternative reality called the “World of Ideas.” In OOP (Object-Oriented Programming) this is spelled out quite analytically as “inheritance.” In other words, AI plus OOP is as close to meaning as you get in a computer. BTW, get a load of the most recent computer programming languages. Heck, Watson beat the best human Jeopardy players. “Once AI can do this, then AI has no problem becoming indistinguishable from human”, huh?
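
        As a minimal sketch of that inheritance analogy (Python; the class names are invented for illustration):

          # The base class plays the role of the ideal "Form"; concrete
          # subclasses and instances inherit its defining properties.
          class Furniture:
              def purpose(self):
                  return "furnishes a room"

          class Table(Furniture):
              def purpose(self):
                  return "holds objects at a convenient height"

          kitchen_table = Table()  # a particular thing "participating in" the ideal
          print(kitchen_table.purpose())               # holds objects at a convenient height
          print(isinstance(kitchen_table, Furniture))  # True: a table is also furniture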

  • dobermanmacleod March 10, 2013 on 11:27 pm

    The notion that the human brain can’t be digitized is pure hubris. When it comes to AI in particular, there seems to be a general feeling of denial until the achievement is obtained; then it is trivialized. BTW, I hear that the iRobot (open source) was recently programmed to learn in a revolutionary way using the PRTM. Once AGI is achieved (around 2030), the rate of technological advancement will really take off. I’m beginning to believe that the primary hindrance to technological advancement isn’t science but psychology, because people are unable to imagine or adapt and are resistant to change. Mr. Nicolelis is a prime example.

    Let me give you one prime example of what I am trying to say:

    http://www.my-wellness-coach.com/2010/07/the-man-who-would-be-immortal-.html

    “Years ago people were telling us it is impossible to find a telomerase inducer and I think that is part of the reason we have no competition. Nobody else has decided to make the effort to look for a telomerase inducer.

    We found our first drug to induce telomerase 2 1/2 years ago and we sent it to all the people who had told us it was impossible and had them test it. Sure enough, they all came back and said “Wow, it works!” They didn’t understand why it works but it does work.”

    • DigitalGalaxy dobermanmacleod March 11, 2013 on 12:53 am

      I agree. Digitizing the brain is not the problem. The problem is that the mind seems to be more than the brain; qualia and consciousness seem to be more than what can be produced by a computer, either biological or artificial. It isn’t clear how a neural net can produce emotions that are actually felt, and are more than a chemical feedback loop. It isn’t clear why we are aware of our own actions and can seemingly make free choices, while computers which are much faster than us are not, and cannot make free choices. To make sense of these facts, we must postulate more to the mind than the brain itself. That “more” probably cannot be digitized.

      • dobermanmacleod DigitalGalaxy March 11, 2013 on 1:26 am

        I completely understand what you are saying. Unfortunately, it is romance, not science. If you analyse what your nervous system is communicating to the brain, and you compare that to your “consciousness,” you will clearly see that your “consciousness” is nothing but a hallucination based upon tidbits of sensory data. Far be it from me to minimize the transcendental experience we call “consciousness,” but romanticism isn’t science and won’t be a factor in artificially reproducing it. Bottom line: virtually anything can be digitized and engineered, and there is no reason to suspect the human brain is any different from a hundred thousand other complex processes found in nature.

        • shaker dobermanmacleod March 12, 2013 on 2:26 pm

          thank you for stating this. loved reading it.

          I don’t know why people keep talking about “qualia”; it is meaningless in the discussion of AI.

          Also, where are these “imaginary” feelings coming from? Everything you feel is inside some part of your brain.

          “It isn’t clear why we are aware of our own actions and can seemingly make free choices, while computers which are much faster than us are not, and cannot make free choices.”

          computers can make free choices. We just don’t program them to do that. What good is a computer if it doesn’t listen to me? Even humans make very few choices. Most things humans do are “see ball... hit...” type responses.

          We are aware of our actions because we have a program inside our brain that tells us to be aware. Some people are not very “self-aware” at all. They don’t have the computing power to perform an action and be aware that they are in the midst of performing an action.

          I think it is pretty obvious that we are getting really good at AI. Who knows when we will create a human version of it. I assume AI will be the smartest and dumbest thing humans have ever done.

          • DigitalGalaxy shaker March 12, 2013 on 8:43 pm

            @ shaker

            Yes, qualia is meaningless in an AI discussion. But, this is a discussion about copying the human brain into a computer! That’s not quite the same as AI.

            If we have qualia, and computers do not, then how do we copy qualia over to a computer? When we move from our dying brain into a supercomputer, how will we know we have qualia waiting for us? How do we know that when we look out of our robot eyes, we will still see red?

            And, computers can make choices, but not FREE choices. Everything they do is dictated by their programming. The Mars rover is a perfect example. It can receive input from sensors that it is about to go over a cliff, and its program will make it stop. You can call that a “choice”, but the Mars rover cannot say to itself, “hmmm, my program says stop, but there is an interesting rock at the bottom of that cliff; I will risk the fall to study it!” and override the program. There is only the program inside. That’s not the free will that humans have.
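
            A minimal sketch of that kind of fixed rule-following (Python; hypothetical names, and of course not how any real rover is programmed):

              # Every "choice" is dictated by the program; nothing inside can
              # weigh the interesting rock against the rule and override it.
              def rover_step(cliff_detected: bool, interesting_rock_below: bool) -> str:
                  # note: interesting_rock_below is deliberately never consulted
                  if cliff_detected:
                      return "stop"   # this rule always wins; curiosity never fires
                  return "drive"

              print(rover_step(cliff_detected=True, interesting_rock_below=True))    # stop
              print(rover_step(cliff_detected=False, interesting_rock_below=False))  # drive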

            We will get much better at AI, but we need to know more about what produces qualia and consciousness before we can really copy a human mind, or the mind of any living thing.

        • DigitalGalaxy dobermanmacleod March 12, 2013 on 8:35 pm

          @ doberman
          That’s just the question! Is consciousness “transcendent”, meaning it “transcends” matter and energy, or is it a “hallucination”? If consciousness is a hallucination, then how do we perceive or become aware of it? It seems to be a catch-22.

          Forgive me my romanticism; I think there is more to the universe than meets the scientific eye! I think the process of consciousness is in itself reason enough to think that something different is going on in the human mind. Of course we can digitize the brain; it is made of neurons and electrical impulses. But if there is something transcendent going on inside the brain, what is it, and is it even possible to copy it?

          Here’s a question: the 7-petaflop supercomputer is orders of magnitude more complex than my brain, and orders of magnitude faster. Is it conscious?

          • shaker DigitalGalaxy March 14, 2013 on 10:41 am

            ok, now I get it. We are discussing two different things.

            I personally don’t really think about mind uploading. The day we upload minds is the day life truly has no meaning. If I copied my mind into 100 machines, then where am I? Which computer mind do I re-upload to my “real” brain? All, 1, 10, none? It gets pretty weird pretty fast. What if one mind now likes hamburgers because it ate at a great restaurant? What if one hates hamburgers because of food poisoning? Which one wins?

            too weird to think about if you ask me.

          • dobermanmacleod DigitalGalaxy March 21, 2013 on 3:28 pm

            “Here’s a question: the 7-petaflop supercomputer is orders of magnitude more complex than my brain, and orders of magnitude faster. Is it conscious?”

            Here is my attempt to answer: “consciousness” is a hallucination brought about by evolution. It is real-time self-programming self-awareness, combined with an ego. Yes, that could easily be programmed on a sufficiently complex machine. This is the problem: while we think we are conscious, and the AI would think it was conscious, we are prejudiced to think we really are conscious, whereas we are prejudiced to think the AI was just programmed to think it was conscious.

            In other words, while we think we played a practical joke on the AI fooling it into thinking it is conscious, evolution played a practical joke on us fooling us into thinking we are conscious. Either we both are conscious, or neither…

  • TFen March 11, 2013 on 3:11 pm

    “And as the newly appointed Director of Engineering at Google, where his explicit mission is to create an artificial intelligence that will “make all of us smarter,” he’s certainly got the money to put where his mouth is.”

    Just to clarify, he’s not “The” Director of Engineering at Google, he’s “A” Director of Engineering at Google. Maybe I am wrong, but I am under the impression they have a few of those.

  • Matthew March 11, 2013 on 4:41 pm

    Is everything in the universe not dictated by the laws of mathematics/physics? Get this guy a Nobel Prize; he discovered “exotic physics” inside of our skulls. Just kidding. Miguel Nicolelis’ work is admirable, and he probably actually does deserve a Nobel Prize, but not for his theories on the limitations of human innovation. There are no limits. Nothing is impossible.

    • Torgamous Matthew March 12, 2013 on 6:25 am

      Don’t be so optimistic. Of course there are things that are impossible. They’re characterized by not happening. Running a human mind on physical hardware is a thing that happens, so it’s unlikely to be impossible, but don’t generalize that out to everything else you can imagine.

  • rtryon March 11, 2013 on 7:08 pm

    Not sure if this goes to the brains arguing about making a Kurzweil AI brain that has feelings, but I am enjoying the exchanges between those who can’t figure out how to put numbers into an AI to give it the same feelings, motivations, appreciations, love, and anger that all human brains produce, fast enough to cause black eyes and bloody noses, together with feelings, shouts, and changes in muscles, blood pressure, and other conditions that modify the thinking process; a life of such feelings is stored and recalled in very sophisticated ways. The surroundings, and what the body last had to eat or drink, impact the reaction as well. I look forward to reading how AI can accurately model all of this in real time to decide whether the best punch is going to be from the left or the right, or not at all!

    • DigitalGalaxy rtryon March 12, 2013 on 8:45 pm

      You have a very good point, that it is not just electrical signals that we would need to simulate, but chemical ones as well! I think we could do it, but what about this question: is a simulated chemical reaction on the edge of a simulated neuron the same as a real chemical reaction?

      • shaker DigitalGalaxy March 14, 2013 on 11:00 am

        if it results in the same output then yes.

  • ozjayman March 11, 2013 on 8:28 pm

    We are already a computer! We are living in a virtual simulation; the whole universe is just one thing: data. Tom Campbell has a very good explanation.

    • DigitalGalaxy ozjayman March 12, 2013 on 8:12 pm

      It’s an interesting theory, but until the “programmers” send us a message, it’s just a theory. Even if the universe is quantized, it doesn’t mean it is a computer simulation.

      Maybe once we develop faster-than-light travel, we could head to the edge of the expanding Universe and get a clue about what it really is! :)

      • ozjayman DigitalGalaxy March 15, 2013 on 6:30 am

        Hope you can see your fallacy: programmers will never send you a message because there are no programmers! And there is no edge of the universe, because there is no universe. Just try to imagine a universal edge: what would satisfy your visuals? What do you think the edge would look like? We as humans have a finite number of senses; if a device can stimulate all your senses, you will NOT know the difference between where you are now and wherever you are :)

  • Ben951 March 13, 2013 on 3:31 pm

    I don’t think the brain is mystical and its systems can’t be copied.

    • DigitalGalaxy Ben951 March 18, 2013 on 9:18 pm

      I am confused. Do you mean “I do not think the brain is mystical and I don’t think its systems can be copied”, or “I don’t think the brain is mystical and its systems can be copied”?

      Either way, would you care to provide some arguments as to why?

  • Francisco Boni Neto March 13, 2013 on 11:32 pm

    I argued with him about this in 2011, and since then he hasn’t changed his arguments. Not one bit.

    https://twitter.com/boni_bo/status/109430491182137345
    https://twitter.com/boni_bo/status/109428170125623296
    “Prospective properties like beauty, intuition, are not computable. Not even the singularity deserves any credit”, he says.

    https://twitter.com/boni_bo/status/109426314385178625
    https://twitter.com/boni_bo/status/109426624570728448
    https://twitter.com/boni_bo/status/109422725126299648
    https://twitter.com/boni_bo/status/109424344433823744

    He cited this book (http://www.amazon.com/Impossibility-Limits-Science/dp/0195130820) and the Penrose gödelian argument. I won’t translate my arguments, but I basically said that to think the incompleteness theorems set absolute limits on the ability of the brain to express and communicate mental concepts verifiably is to commit an argumentum ad ignorantiam, just because the empirical question has not been negatively answered (we still haven’t built anything close to human creativity).

  • Andrew Atkin March 16, 2013 on 4:35 pm

    I tend to side with Miguel Nicolelis. Simulating the brain, or whatever consciousness is, is overwhelming. No matter the computer power, good luck writing the programme for it. Ray Kurzweil has recognised probably only part of the way the brain works – maybe even the most trivial parts.

    • DigitalGalaxy Andrew Atkin March 16, 2013 on 6:44 pm

      I don’t think it’s a matter of writing the program; it’s a matter of copying the brain state all at once, every brain wave at the same time.

      The only way to do that is to have a sensor on every neuron in the brain, which can relay that information to us in the moment we want it captured.

      We don’t have the nano-bots for that yet, but we are close. Your cell phone probably has 22nm technology in the chips; that’s much smaller than a neuron, and we just keep getting smaller and smaller technology. When we can copy all the brainwaves into a brain simulation, we won’t need to write a program to mimic neural function; we will have a perfect copy.
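
      A toy sketch of that “snapshot” idea (Python; everything here is hypothetical, since no such per-neuron sensors exist yet):

        # Read every (hypothetical) neuron sensor at one instant and
        # load the values into a simulation as its starting state.
        import random

        NUM_NEURONS = 1_000  # a real brain has ~86 billion; toy number here

        def read_sensors():
            # stand-in for nanobot sensors reporting each neuron's state
            return [random.random() for _ in range(NUM_NEURONS)]

        class BrainSimulation:
            def __init__(self, snapshot):
                self.state = list(snapshot)  # copy of the captured state

        sim = BrainSimulation(read_sensors())
        print(len(sim.state))  # 1000 neuron states captured "all at once"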

  • Anne Ominous March 17, 2013 on 7:02 pm

    Granted that this is kind of a side issue to the context of the article, but it bothers me when people equate “the technological singularity” to Ray Kurzweil’s view of a singularity. Kurzweil has erroneously been referring to his view of brains being uploaded as “the singularity”. But that is actually only one out of the great many aspects of the singularity concept.

    I hate to see the name hijacked by somebody who has only an extremely narrow view of what the “technological singularity” is supposed to be all about. It is much larger than just Kurzweil and his vision.

    • DigitalGalaxy Anne Ominous March 18, 2013 on 5:57 pm

      True. The singularity means freedom from physical needs, space travel, and exponentially increasing computer power such that even virtual worlds on par with the Matrix might be possible.

      I think the people like the man in the article are simply skeptical of such high-sounding technological innovation in general, or at least in their lifetimes. In that sense, the argument about the singularity is one of speed, not ultimate results.

      They might just be pessimists :)

  • Herbys March 18, 2013 on 12:47 am

    The guy says all that and still gives absolutely no reasoning behind his claims. “you can’t compute consciousness”, “you have no algorithm for the brain” and the rest is just unsubstantiated nonsense. The fact that we normally simulate things when we have no algorithm for them (if we have an algorithm for something, we don’t simulate it, we just execute it) tells you how incoherent this guy is.

    • DigitalGalaxy Herbys March 18, 2013 on 5:49 pm

      I agree. Either you need to argue that our minds have a spiritual component to them that cannot be copied, or you must admit that the brain (and the mind along with it!) can be copied.

      People who think the brain is “unable to be copied” haven’t kept up on their nanotech. What happens when we have a sensor on every neuron, and we take that sensor data into a computer simulation in real time? Sure it will be complicated, but the end result will be an exact match of the brain at the time it was copied.

      So, if you copy yourself into a computer, which one is you? :)

      What strange questions we will be asking in the next decade!

  • rtryon March 19, 2013 on 6:53 pm

    To Shaker:
    I think your comments put you in my camp per the words just sent to Bruce Thomson, a.k.a. Digital Galaxy:

    “Hi again,
    I started to respond and something froze and I lost it. But I was trying to apologize for allowing my personal preference to think of a God with human characteristics, instead of using the word only to stand for some unknown power, greater than anything man can generate, labeled simply ‘it’.

    What I was trying to establish is a point perhaps best described by one of my first negative driving moments. Driving a ’33 Ford coupe with mechanical brakes, I was on a two-lane road approaching a small store at about 40 mph when I noticed a small girl run across the road from my left to right. She was on the road’s shoulder, and a car coming from the opposite direction was closing in on the store she had just left, when suddenly her dog ran in front of the oncoming car into my lane. It was instantly clear to me that I could not stop, even with my foot already on the brake. Should I swerve left to save the dog and hit the car head-on? Swerve right and hit the little girl? Or take out the dog?

    Would the AI robot driving my car, loaded with a snapshot of my brain taken the day before, have made the same choice? Or would it not have realized all of the value judgments and probabilities of results needed to make what I would contend was a fast and correct choice? Of course, AI might have taken in more data than my eyes did and slowed the Ford before the problem developed.

    Would the robot have stopped after killing the dog to apologize, as I did? Would it have known of the mechanical deficiency of the brakes and handled all operational details, including the apology that showed my recognition of the feelings of the little girl, who suddenly experienced the death of her pet dog because she did not control it when she elected to run ahead with her happy purchase?

    These kinds of qualitative factors seem to me very important, and equally hard to put into the same fast-acting program as the human brain’s. The body contortions invented by a Michael Jordan driving to the basket are a thing of artistic beauty, executed as they are invented. Inspired musical performance often shows the same skill, and it is this sort of human interaction that I fear AI can’t learn to duplicate.

    Perhaps you can say such things are not important. But to me our civilization is much related to human interactions that represent culture. I happen to prefer the culture I connect to God, but now I wonder if the newest generation wants one that throws away such subjective, impossible-to-measure factors, and wants to contend that morality is imaginative but not necessary in the world of AI and robots. I don’t think we know how to make a better person in the form of a robot driven by AI.

    That makes me a romantic and one willing to allow that love and forgiveness are important and I want to be very careful about maintaining these dimensions. Perhaps you agree?

    Dick

  • Ralphoo March 20, 2013 on 6:07 pm

    I do not think Kurzweil claims that a machine he builds will be “conscious,” in the sense we normally use the word. That vernacular description of an awake, aware human is more social than scientific. There are medical criteria for responsiveness, intellectual function, balance, awareness of surroundings and so forth, but there is no strict medical criterion for consciousness. Informally, we say a person is conscious if he or she can look around, react to stimuli, smile at a loved one, look someone in the eyes and so forth. None of these criteria make much sense when applied to a machine.

    Intelligence, though, is a simpler characteristic. I think most people would agree that an entity at the other end of a computer chat can exhibit intelligence. For instance, such an entity might make suggestions for how to proceed in a crisis, how to calm a co-worker, how to soothe someone’s hurt feelings. Such responses would suggest intelligence, but consciousness? I don’t see that the two attributes have to be closely linked.

  • honestann March 27, 2013 on 1:25 am

    People don’t understand how to formulate the problem, so their notions about how to reach “the singularity” flail all over the place. The term “AI” (artificial intelligence) has held back the field (and thus achieving the singularity) horribly. From the start, the field (and goal) should have been “[human-level+] inorganic consciousness”, not “AI”.

    Another important point. Most people, definitely including the scientists involved, should have an analogy of the process they’re trying to follow. Actually some do, especially those who took the “neural net” approach. At least they know (and say) they essentially intend to create “inorganic neurons” and then let them evolve [a few million or billion years, presumably], and then become “intelligent”.

    The best analogy for this topic may be “flying machine”. That is, look at birds [and insects and other organic flying machines], and decide to create an “inorganic flying machine”. Pretty much, that’s what the Wright Brothers did.

    Most important, they abstracted everything about birds and “organic flying machines” away… except those fundamental aspects that apply to both organic and inorganic flying machines. That’s how the goal was simplified to a small set of concepts like “lift, thrust, control, stability”. THEN the inventors, scientists, and engineers involved had what they needed to achieve the desired goal. They knew the goal (“inorganic flying machine”), and they knew the most important general aspects they needed to focus upon (“lift, thrust, control, stability”).

    The fact is, human-level+ consciousness has already been invented, designed and implemented – 15 years ago (in 1998). The reason the so-called singularity hasn’t yet been achieved is only “speed” — the prototype was much slower than “real time” (that is, human speed). Back in 1998, the prototype was about 100,000 times too slow. Since then CPUs have become much faster (now we have 8-core 4GHz CPUs that cost less than $200, and 1024~4096-core GPUs that cost a few hundred bucks). And the speed of the vision system, the slowest subsystem, can be greatly sped up by implementing certain key aspects of the subsystem in hardware rather than software.

    Yes, the singularity is coming. The group that implemented smarter-than-human-level inorganic consciousness is small and has very limited resources. Other groups, especially Google (who recently hired Kurzweil), have enormous monetary resources but don’t have “the solution”. If they were to combine forces, the singularity would take only a few years at most. If not, who knows; I’d guess 15 years — unless the group that already created inorganic consciousness gets funding. In that case, the singularity will occur in a few years.

    • dobermanmacleod honestann March 27, 2013 on 4:28 am

      The Singularity and AGI (inorganic consciousness?) ought not be conflated. Yes, a hundred “inorganic Einsteins” working day and night furthering technology would definitely speed things up, but it wouldn’t by itself be the nearly vertical exponential rate of technological advance the Singularity refers to.

      Furthermore, I don’t believe the only thing standing between current state of the art AI, and AGI (inorganic consciousness?) is “speed.” I’d say software refinements and hardware improvements are necessary.

      • honestann dobermanmacleod March 27, 2013 on 9:49 am

        You are correct. The singularity is simply one of the two most important expected consequences of having faster-and-smarter-than-human inorganic consciousness (immortality for humans who become completely inorganic being the other).

        The group I mentioned already created smarter-than-human inorganic consciousness, so additional “software refinements” are not necessary (for them at least). However to make the system run faster they have improved and enhanced certain aspects of the software, and they are also putting portions of the vision system into hardware (since the vision system (not fancy abstract thought) is the biggest constraint on speed by a wide margin).

        • dobermanmacleod honestann March 27, 2013 on 9:33 pm

          “The group I mentioned already created smarter-than-human inorganic consciousness.”

          Wow, that is a bold claim. Let me differentiate between AI and AGI. There is no dispute that computers can beat humans at chess, or Jeopardy. That is (roughly) AI. AGI (inorganic consciousness) is another thing altogether, and would (roughly) be able to pass the Turing test (although chatbots that I wouldn’t even begin to call “inorganically conscious” have passed it spottily). Let me add that your comment about the visual system being the “biggest constraint on speed” is salient, and a visual Turing test would be another hurdle for any AGI (inorganic consciousness).

          One final point: I am familiar with computer video analysis, with a friend in Germany being able (with open source heuristics) to process low grade video feed with trained neural nets to do amazing discrimination. On a higher level, there is “Mind’s Eye,” which DARPA designed to monitor public areas for suspicious behavior.

          • honestann dobermanmacleod March 28, 2013 on 9:05 am

            I understand. The group greatly dislikes the term “AI” (and “AGI” by extension). There is nothing “artificial” about “inorganic consciousness”, and “intelligence” is hopelessly vague.

            I can tell you understand, but just to be clear for others, the “vision system” isn’t just the sensor system, it includes various aspects of perception, including portions of the processes that perform “object isolation” and “object identification”. That’s why this subsystem takes so much compute power (and so many creative solutions and hardware assist).

            Nothing in their system has any neural nets, or anything “magic”. The main key is to clearly understand the nature of consciousness (including sensory level consciousness, perceptual level consciousness, and abstract conceptual level consciousness). Once you know that each of those is a specific set of processes, you can implement those processes with sensors, computer, software and robotics.

            Okay, it isn’t quite that simple, but that’s the main requirement. Note that you’ve never seen anyone state that consciousness is simply a specific set of processes, or list what those processes are. Incidentally, part of the challenge is creating consciousness that is sane. This group has made very certain to identify and implement only valid, sane processes of consciousness. Unfortunately, almost all humans learn and habituate a great many invalid and clinically insane processes of chaos and psychosis in addition to the core set of valid, sane processes of consciousness.

            Fortunately for everyone, this group is staying far away from government and military industrial complex, knowing they would almost certainly apply the technology to grossly malevolent purposes.

    • palmytomo honestann November 23, 2013 on 11:20 am

      honestann – I chuckled at your post. I’ve been frustrated by the same myopic extension-of-the-known. (E.g. the first cars looking like horse-drawn carriages instead of being a fundamental re-think of fuelled mobility, with least air drag, lowest bounce, fast cornering, etc.) I’m curious to hear what your perception is of the future intelligence’s ‘direction’. For me it seems to be as Kurzweil suggests: computronium (a sort of natural computational activity) expanding in the universe infinitely outward, inward, forward, into everything. It seems completely mechanical and, paradoxically, like the selfish gene’s behaviour, actually ‘mindless’. I hasten to say that this does not prevent us enjoying eating ice cream. Bruce Thomson in New Zealand.

      • palmytomo palmytomo November 23, 2013 on 12:59 pm

        honestann Think about this: a major fault I see in developers of AI is lack of attention to semantics. We need to expand the meanings of words to accommodate new realities. Examples:
        1. The word ‘consciousness’. If we stop insisting that it’s exclusively owned by dynamic organic beings with brains, we are able to examine the most useful fundamental nature of ‘consciousness’: the ability of something to process circumstances that bear upon it. Not just the obvious useful information processes of AI and robots. The infomatic sense of the word can go breathtakingly further (pun smiled at). If a stone ‘has input’ (is pulled by the gravity of the earth), it too, in a modest but useful way, can be said to be ‘conscious’ of that pull, in that it may produce useful ‘outputs’ arising from the pull (movement, erosion, atomic distortion, colour change, conductivity).
        2. The word ‘human’ needs renovating. Customarily it has meant anything that looks and acts like us, ape-type animals. These days we’ve felt uneasy and crowded by robots and AI – even Kurzweil in his excellent vid, ‘The Singularity is Near’, lamely and crowd-panderingly implied that ‘human’ meant being able to sacrifice for others, demonstrating ‘love’. Such ridiculous vanity! (Rats and nematodes probably sacrifice for their young or mates, mechanically and predictably obeying the selfish-gene principle.) Instead, we can be specific about what we’re talking about – cleverness, generosity, primate, organic. Then there’s less species-ist confusion about ‘human or not’. It’s more important to focus on whether the thing is trustworthily and reliably delivering behaviour we want in the prevailing context.

        • honestann palmytomo November 24, 2013 on 9:12 am

          First, a couple of comments to set context. I haven’t read any Kurzweil books, so I don’t know in detail what he thinks. What I know about him comes from articles and interviews on this and other websites, plus the DVD he created.

          We don’t ascribe “consciousness” to “rocks”, but we do talk about “sensory consciousness”, “perceptual consciousness” and “conceptual consciousness”. And “sensory consciousness” is very simple indeed.

          For example, imagine a micro-organism with little hairlike cilia on the outside. Let’s say the nature of this little critter is such that the cilia wiggle when the temperature of its pond water gets too hot for the critter (in direct sunlight for too long). We say this little critter, which cannot detect physical objects, has no memory, and has no mental units, nonetheless has “sensory consciousness”.

          This critter possesses a laughably simple consciousness. Yet its consciousness is sufficient to monitor its environment and take actions that keep it alive, perhaps long enough to reproduce – and teach us this important lesson.

          The nature and function of consciousness are: awareness of existence and reaction to it.
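
          To make “a specific set of processes” concrete, here is a deliberately minimal Python sketch of that critter’s sensory consciousness – a toy under stated assumptions (the threshold value, the sensor and the action names are invented here purely for illustration, not anyone’s actual design):

          ```python
          # A bare sense-and-react loop, modelled on the cilia critter above.
          # Every name and number is an illustrative invention.

          TOO_HOT_CELSIUS = 30.0  # arbitrary survival threshold

          def read_water_temperature() -> float:
              """Stand-in for the critter's temperature-sensitive chemistry."""
              return 31.5  # pretend the pond has been in direct sunlight too long

          def wiggle_cilia() -> None:
              """The critter's one and only reaction."""
              print("cilia wiggling: move away from the heat")

          # No memory, no object detection, no mental units: just awareness
          # of one fact of existence, and a reaction to it.
          if read_water_temperature() > TOO_HOT_CELSIUS:
              wiggle_cilia()
          ```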

          So yes, we have a very natural view of consciousness, one that starts at a very simple, limited, “low level” and progresses up through the human level and beyond (in our implementation). While there are definitely more complex processes happening in the higher levels of consciousness, they are very much just extensions of, and/or more sophisticated versions of, the simpler levels.

          To me, “human” just means “Homo sapiens”. Nothing else. But you’re correct to say people associate all sorts of things with “human”, and then try to attach importance to them. Oh well, most people don’t know how to think straight or identify fundamentals.

          To us, “inorganic consciousness” is just what it says, and “human-level+ inorganic consciousness” doesn’t mean anything more than inorganic consciousness that can perform all the valid processes of consciousness that humans can, plus a few more. But the most important reason our inorganic consciousness is superior to human consciousness is because it does not perform invalid processes that human consciousness does!

          A real example of “more for less”, or actually, “more because less”. Of course, it shouldn’t be surprising that removing invalid and destructive processes from a machine makes it work better! The same is true for human consciousness.

          In no way, shape or form do we consider our “inorganic consciousness” to be human. That would be… just crazy! That would be analogous to the Wright Brothers claiming their airplane was an eagle, or a wasp, or a bat… who in their right mind would claim such a thing?

          However, once our smarter-than-human inorganic consciousness is also faster-than-human consciousness (in every respect), we already know how any of us can literally become an inorganic consciousness ourselves. Let’s say that I do this someday (the process will take at least several months, and probably 2 or 3 years, no surgery required). Okay, now I am 100% inorganic, but also still 100% “me”. Would I be human at that point? My answer is “absolutely, positively no”. But if you then ask would I really still be me, my answer is “absolutely, positively yes”.

          Now I realize that has to sound very strange, if not crazy to anyone who doesn’t thoroughly understand consciousness, what constitutes our individual “identity”, our implementation of inorganic consciousness, and the process by which we become 100% inorganic. And I can’t say here what I would have to reveal to explain why this is true. Nonetheless, I throw this extra tidbit out there to stimulate your curious mind.

          Another comment along the lines of yours. Is it not strange that humans have such a difficult time with “consciousness”? I mean, after all, what is “a human” beyond “his consciousness”? So everyone has an actual implementation of human-level consciousness to inspect and learn from, yet everyone goes off on all these wild goose chases, and develops things with little or no similarity to… that which they have and operate every day of their freaking lives! Wow! Very strange, huh? :-)

          • palmytomo honestann November 24, 2013 on 9:14 pm

            honestann,
            1. Kurzweil says that personality (identity) is consistent behaviour over a period of time. For now, I like that definition.
            2. My point was that what we allow to be called ‘conscious’ is arbitrary: we can individually or collectively use a label (a word) any way we find useful for thinking. So, in recent months I’ve found it useful to include as ‘conscious’ any kind of ability to respond to something. I realize that this is an extension far beyond the normal use of the word, but I have reasons for doing that: it opens the way to regarding all things as potential processors (in a continuum from simple ones we call stones to AI ones we call robots). I’ve found there are big advantages in applying the IT model to traditionally ‘non-living’ things.
            3. When exploring ideas, one mistake I try to avoid is assuming human ‘ownership’ of things like consciousness, intelligence, ‘humanity’ (the bundle of things most people jealously and proudly claim as uniquely human or animal). I note your Homo sapiens definition. By treating these things as generic commodities rather than ‘humanities’, I can freely explore what artificial intelligence beings can be.
            4. If humans have implanted some of their most valued behaviours into an AI or a robot, then as-far-as-they-have-done-that, the robot can be said to be ‘human’, i.e., of human origin and nature. Not fully, but then a lot of humans are not completely ‘human’ either. Comatose, psychotic, murderous, etc. = )
            5. I’ve learned a lot from Kurzweil and the Singularity websites and videos. That, and exploration of virtuality, have changed me. I regard my body as a biological avatar, and everyone else’s too. I regard ‘me’ as a dynamic information set that could reside in an infinite number of alternative hosts, biological or not. I see myself, amidst the hugeness of the known and unknown macro universe, as less than a nematode among billions of trillions of other things and beings. Simultaneously I am an immense congregation of sub-microscopic things in an endlessly extending hierarchy of smallness.

  • Alkan June 3, 2013 on 2:52 pm

    I think that this guy’s response is kind of silly. He seems to be a dualist who won’t just say that he’s a dualist.

    Next.

  • Chris Ferguson November 19, 2013 on 9:09 am

    I am a firm believer that anything that does not violate physical laws can be accomplished at some point in the future. Unless you can show that creating a mind goes against the laws of nature, you have no ground to say it can’t be done.

  • honestann November 23, 2013 on 1:20 am

    They are both right, and both wrong. Let me explain.

    Actually, they are both making the same huge, fundamental mistake. They both think and talk in terms of “brain”. Sometimes they recognize they need to implement consciousness, yet they always seem to come back to the brain. But the brain is the hardware organic beings implement consciousness with, and is not an effective, efficient or reliable way to create inorganic consciousness.

    I think the best way to understand what I’m pointing at is to consider the following analogy and example. Assume you were living in 1900 and wanted to build a “flying machine” or “inorganic flying entity” (airplane). You already have organic examples of “flying” to study, namely birds and insects. So what do you do? Study volumes of extreme details of bird blood, bird muscles, bird feathers, bug wings and so forth? Not if you want to succeed. To succeed, you must abstract away everything that is not a fundamental, necessary aspect of “flying”. Once you have identified these fundamentals, then you ask how to implement those fundamentals most effectively with inorganic components.

    In the case of “inorganic flying entities” these fundamentals are something like:
    – lift
    – thrust
    – control
    – stability

    … or something close to that.

    Now, once you understand what you need to implement, you can look at every inorganic material, and every inorganic structure and configuration, and implement “flying” (these 4 fundamentals) in the most effective, efficient, reliable way you can with inorganic components. This is how “inorganic flying machines” came to exist, via an intelligent process of development and engineering based on their understanding of the fundamentals of flying… NOT the details of the organic implementations. Airplanes don’t flap their wings!

    What you don’t do is start studying all the endless details of birds and insects, then try to implement exact equivalents with inorganic materials (for example feathers, bird muscles to flap the feathers, eyes, feet, etc). But this is precisely the way these folks are approaching the problem. The sad thing is, even if Kurzweil and friends implement something like a huge neural net (copying the structure of the human brain, and trying to implement with inorganic materials and/or software), what do they end up with? A working entity that needs a billion years to evolve? One that is completely unreliable and after a billion years acts like what? A spoiled human brat? A psycho? Ugh! Plus, who wants to wait for a billion years, or even a thousand years, or a hundred years?

    No, the solution is to identify the fundamental nature of [smarter-than-human] consciousness, then figure out how to implement it with inorganic components, as an “inorganic conscious entity”.

    I know, because I work with a group that already implemented a working smarter-than-human consciousness in 1998, except for one practical problem – it was vastly slower than human consciousness. Since then, the inventor and developer has made drastic improvements in the architecture, designed hardware assist to speed the slowest processes, plus we’ve come up with a few additional “tricks” to speed things up. Plus, of course, we now have much faster CPUs, multicore CPUs, many-thousand-core GPUs to handle those processes that GPUs can handle (mostly vision system).

    So Kurzweil has the right answer, but doesn’t understand why or how. And they are both on the wrong track in important ways. Once we have smarter-and-faster-than-human inorganic consciousness, we can make unlimited copies and… the singularity is only a matter of time – and probably not very long.

  • SvikenSS December 2, 2013 on 12:38 pm

    Duke neuroscientist Miguel Nicolelis’s argument is invalid! He states, “the brain is not computable and no engineering can reproduce it.” Of course you can! Fundamentally, anything and everything in our universe can be created – given enough time and energy, everything can be recreated. I’m not saying that humanity can achieve those feats now or at any particular point, but it may be possible for other civilizations, including ours in the future. He is wrapping a value claim in a scientific argument; furthermore, he is hiding another claim in his argument that he is smart enough not to say outright – a metaphysical claim (invoking god(s)).

  • Blair Schirmer December 3, 2013 on 3:34 pm

    “How in heavens do you simulate something you have no algorithm for?”

    Nicolelis largely misses the point. First, by emulating the brain, we may well end up creating a consciousness that doesn’t imitate (simulate) but rather parallels the way the brain functions. I don’t, for example, need to build an exact copy of a flesh and blood arm in order to build an arm that can reach out and caress a specified object in a variety of ways.

    Similarly, I don’t need to precisely imitate a human brain in order to create something capable of independent thought.

  • Max Hodges December 5, 2013 on 4:10 pm

    They are both really dumbing down the discussion, talking in sound bites like that. It’s not a discussion at all. The neuroscientist appears to understand very little about computer science – it’s not about making “an algorithm”. An AI system may employ thousands, if not tens of thousands, of “algorithms”. Likewise, Kurzweil appears to understand very little about intelligence. He has not presented any concrete theories on common-sense reasoning, the nature of emotion, conflict management, or ideas about goals and values. He only asserts, over a few hundred pages, his belief that Markov chains are the secret sauce.
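
    For readers wondering what a Markov chain even means in this context, here is a deliberately minimal sketch of the idea: a toy next-symbol counter, offered only to illustrate the term, and nothing like a full model of the neocortex.

    ```python
    from collections import defaultdict, Counter

    def train(sequence):
        """Count how often each symbol follows each other symbol."""
        model = defaultdict(Counter)
        for prev, nxt in zip(sequence, sequence[1:]):
            model[prev][nxt] += 1
        return model

    def predict(model, symbol):
        """Return the most likely successor of `symbol` seen in training."""
        followers = model.get(symbol)
        return followers.most_common(1)[0][0] if followers else None

    chain = train("abracadabra")
    print(predict(chain, "a"))  # prints "b", the most frequent successor of "a"
    ```

    Whether stacks of such predictors add up to a mind is exactly what the two sides are arguing about.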

  • Andrew Tappert December 18, 2013 on 5:21 pm

    I have a lot of respect for the brilliant people behind modern neuroscience, and I’m endlessly fascinated by what I read coming out of the field. However, before deciding what algorithms make possible or impossible, try taking an analysis-of-algorithms course! That should be enough to show that algorithms are not all simple elements of reduction. Learn how a machine adder works (then go see what can be done with them), play Conway’s game of life (and ponder the meaning of the glider gun), and have someone in EECS explain out-of-order execution to you until you really get a feel for how it works. Any person who manages to grasp these things should have little doubt about the power of abstraction to create vast expanses of potentially uncontrollably complex systems.

    The operations of the brain, housed in our bodies, result in apparently emergent properties which form consciousness. The operations of a CPU, housed in its electronics and powered by nothing more than simple DC power, result in your clustered desktop wanting to update itself to ensure it is safe from malware. For any sufficiently integrated system, building upon abstraction is where magic happens. For computers, we understand how the layers go together, and yet we still cannot ensure glitch-free operation or even predict the glitches. I say we understand them, but we cannot comprehend much beyond a few abstractions before it all becomes too much to take in at once, or “grok”.

    The brain, on the other hand, endlessly integrates information systems. Our resulting consciousness is a network of inter-operating systems which can be understood abstractly. We are simply still in the process of deriving the source of its many facets. What we learn from the brain, we will carry into computers, and the singularity will be realized!
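
    Anyone who doubts that last point can try a minimal game of life in Python – a sketch with an arbitrary glider seed and step count, chosen purely for illustration. Two lines of update rule produce a “creature” that crawls across the grid on its own, behaviour nobody explicitly programmed in.

    ```python
    from collections import Counter

    def step(live):
        """Advance one generation; `live` is a set of (x, y) live cells."""
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Birth on exactly 3 neighbours; survival on 2 or 3.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    # Seed with a glider: five cells that crawl diagonally forever.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        cells = step(cells)
    print(sorted(cells))  # the same glider shape, shifted one cell diagonally
    ```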

  • Facebook - tesla.berry December 30, 2013 on 10:45 am

    If you look at the bleeding-edge researchers in the field of AI, they’re not trying to replicate human consciousness; they kind of know that’s beyond this ‘epoch’ in science.

    What they’re trying to do is create chips that allow for autonomous navigation, so that the military can create armies of more intelligent robots capable of missions – such as driving a supply truck – ordinarily done by human beings.

    This is not a discussion, it’s a fact. And short of the ‘church’ of singularity, it is a fact that we are looking at a world where technology will destroy previously reliable sources of making a living, ensuring high levels of structural unemployment for hordes of people. The impoverished people will then clamor for a religion to help them with their difficult times.

    The religion they clamor for won’t be the singularity church. It will be neo-luddite, mono-theistic or paganistic in nature, emphasizing the problems of finite resource extraction and increasingly global pollution buildup. Interesting feedback loop, no?

    Will Singularity Hub succeed in that competitive environment as the winner of the mantle of techno-utopian religions? I don’t think so. To be successful and gain members, a religion must always preach to the weak and poor and desperate, meaning it must be populist in nature. As I see it, the singularity church is actually elitist and anti-populist in nature. It certainly has a rift/gap between itself and the populist techno-communistic agenda of the pirate party and cypher-punk anarcho-techno social platforms.

    In the movie Johnny Mnemonic, there are high-tech wealthy elite criminal types, and high-tech poor folk calling themselves ‘low-techs’ despite the fact that they clearly are technologically capable. You get the sense that maybe the high-tech types don’t really have better technology, just more money for its deployment. In this world of competing churches, religions and social platforms, the singularity church would do just as well to join the NSA and actually figure out a way to identify vulnerable members for recruitment. If successful in the long run, the singularity church will follow the elite cultist development arc of a cult like Scientology, but with a more ‘science’-based sales approach. If unsuccessful, it will be remembered as nothing more than a once-popular website and an overpriced executive education program.

  • Facebook - max.kanwal January 2, 2014 on 8:56 am

    lol amazing

    • E0SUN0Z Facebook - max.kanwal January 14, 2014 on 12:29 am

      Everything is conscious. All matter is conscious. Consciousness is a combination of matter and the energy flowing through it. Humans think you are different because of this language.

      • JohnLange E0SUN0Z April 7, 2014 on 8:27 pm

        Well, EOSUNOZ, not “everything”: just the operations performed by the brain (that is consciousness). And actually only a very tiny portion of matter partakes in that. And matter and energy are kind of the same thing (E = mc²). In other words, what the f**k are you talking about?

  • Facebook - richard.r.tryon January 14, 2014 on 8:28 am

    Author: E0SUN0Z
    Comment:
    Everything is conscious. All matter is conscious. Consciousness is a combination of matter and the energy flowing through it. Humans think you are different because of this language.

    Not sure how to reach EOSUNOZ and Palmytomo, a.k.a. Bruce Thomson, but I suspect Bruce, at least, is getting too busy responding to what has turned into a semantic argument over the meaning of consciousness.

    Yes, if you wish to allow that any form of matter or energy able to respond to anything must therefore have a capacity for “consciousness”, then one can imagine that a rock being hit by a hammer to split it is so endowed by its existence! Creating such awareness sans external stimulation may be harder to sense in a rock, but so what? Yeah, it gives some the chance to say that AI, being able to respond to external stimuli, has consciousness!

    But that is a minor form of such broad word usage! Generating creative thought sans any external stimulation is an oxymoron to those who choose to say our minds are incapable of original thought – that we only generate the appearance of it, based on external stimuli. That is the recipe by which evolution explains everything, and it is appealing to some – no doubt beings who accept the above theory. They are the ones who can’t accept that a single-cell bacterium, with a sophisticated 41-part motor made and assembled to power the tiny propulsion needed for its survival, had to evolve this improvement rather than start with it as a designed feature.

  • Kai Six March 21, 2014 on 5:49 am

    Why are you a scientist if you don’t think the field you are studying can be figured out or quantified? It seems he doesn’t understand the concept very well.

    • JohnLange Kai Six April 7, 2014 on 8:24 pm

      Great point Kai Six. You should only enter into science if you think your field can’t be figured out or quantified.

  • Jarto Nieminen May 21, 2014 on 5:06 pm

    Funny. Well, this is an anecdotal account, but both sides of the argument should most definitely watch this: http://www.ted.com/talks/simon_lewis_don_t_take_consciousness_for_granted

    Funny how this man wears a cybernetic implant to restore function and feeling to his leg and arm because of permanent brain damage sustained in a car accident.

    Although his assessment that the human brain is a quantum computer has very little real science to back it so far, neither side seems to acknowledge the inherent nature of primate consciousness to occupy multiple states at one time. I.e., both take it for granted: Kurzweil underestimating its capacity and obvious complexity, and Nicolelis simply assuming it inherently holds something magical, which I attribute to his growing up in a conservative Catholic country.

    This video also made me wonder why Nicolelis bothers with sensory enhancement that is already well in use; implants similar to the one he demonstrated had already been used by humans to move a mouse cursor, and thus control a computer, several years before this.

    Furthermore, Nicolelis is clearly wrong about how the brain works. There has been research towards, and great results from, MIT, Qualcomm, IBM, Brain Corp and Intel on neuromorphic computers – effectively, processors which mimic a neural network. Intel and Qualcomm report working prototypes, and IBM is undoubtedly deep into chip fabrication. MIT eggheads are working on software which compiles conventional algorithms into a form such a chip can run efficiently, without being explicitly dictated. Not quite consciousness yet, but a big step towards it: a clear conjunction of conventional algorithmic logic and neural processing. And it shows Nicolelis assumes too much.

    Of course, all this arrives nearly a year after their row, but contrasted against it his arguments feel conservative and outdated for someone considered to be an authority in the field.

  • buybuydandavis July 13, 2014 on 5:13 am

    “Fallacy is what people are selling: that human nature can be reduced to [something] that [a] computer algorithm can run! This is a new church!”

    He’s just trying to sell the usual old church nonsense, of intelligence requiring magical sparkly pixie dust to function. Fortunately, we’ll soon have a better answer to these bozos – we’ll just pat them on the head and point to the machines we’ve made that are obviously smarter than they are.