21 responses

  1. Max Friedenberg
    June 29, 2014

    I think first it’s important to ask how to prove or disprove if a human is thinking and go from there.

  2. 33355555333
    June 29, 2014

    <33333 ;P()

  3. ega
    June 30, 2014

    “We can’t prove ‘a machine thinks’ any more than we can prove the person next to us thinks.”

    If we were not able to think, how could we come up with a word describing the concept?

    If we put machines together with no prior knowledge, and they invent words for “thinking” and “consciousness,” I would assume they have observed it in themselves by introspection.

  4. cabhanlistis
    June 30, 2014

    QUOTE
    However, computers don’t have the brain’s aptitude for pattern recognition, adaptation, and the traits associated with them like language, learning, and creativity. These are some of the abilities the Turing test sets out to measure.
    END QUOTE
    -Uh, what? I don’t see how it measures anything. All the Turing test does is determine if a machine fools a human judge into thinking it is human, and only by comparison with another agent. Personally, I’ve never had to go through any of that to prove anything about my mind.

    But to answer the original question of how we will know if a computer is thinking for itself, I would suggest following the results of its work. Determining whether it actually thinks (at least in the sense of human intelligence) is either impossible or pointless. What matters is what it accomplishes for itself and others. This chatterbot, Eugene Goostman, can brag about fooling a handful of people into thinking that it’s human. I’ll throw some confetti someday if and when I decide to care about that “feat.”

    But when a computer manages on its own and per its own direction to cure some awful disease or solve a perplexing science mystery, then I will be amazed and celebrate.

    • CAgamefowl
      July 8, 2014

      I think the Turing test can (and does) measure all those things, especially language and creativity. The machine would have to speak a language (to communicate with judges) and it needs to be creative enough to fool the judges. I’ve heard of machines using humor to fool the judges. This may be a test of the programmer’s/designer’s creativity, except in the case where the machine actually fabricates jokes based on its own knowledge/experiences (learning).
      I like that Turing made the test very open and adaptable. Right now the Turing test is only testing rudimentary machines (most of us can assume we are in the infant/embryo phase of AI), but as this technology improves it will increasingly test and measure consciousness and intelligence. It may even put into question our own intelligence, and whether or not we are the rudimentary (electrochemical) machines. Once a machine reaches human intelligence we will have to raise the bar, and hopefully we will be smart enough to realize it once it has surpassed us. And at that point, how will we know if the machine isn’t testing us? ahhh spooky :)

      • cabhanlistis
        July 8, 2014

        Hi, CAgamefowl.

        The Turing test only measures whether an AI can pass itself off as human under a strict set of conditions. There is no language measurement. No linguist or educator sat down with these chatbots and conducted any Hillegas or Harvard-Newton batteries, no grading of English composition, no original narrative content, and so on. The only thing the test did was either succeed or fail at convincing judges that it’s human. That’s all. Ditto for creativity. One could pry open the source code and analyze the communication content, but that’s not part of the test, and I know of no methodology for producing measurements of that content. Otherwise, I would assure you that this chatbot could never get past day one in a grade-school English class.

        “using humor to fool the judges”
        -Including routines to return a pre-written joke is no different than any other part of its instructions. One would also have to throw some original jokes right back at it to even begin to gauge its ability to handle humor.

        “can assume we are in the infant/embryo phase of AI”
        -Since no one can demonstrate the maximum range of AI possibilities, this is not an assumption we can support. In comparison to human intelligence, the most advanced AI is about as good as a 4-year-old child in some areas, an 8-year-old in others, while completely brain-dead in still others, such as open-ended questions. This is the point where researchers are stuck.

        “will increasingly test and measure consciousness”
        -So far as neurologists seem to understand, the only measurement for consciousness is either on or off. There are theoretical stages, but that’s entirely biological. I don’t understand the point of emulating that in an AI outside of a Turing test. Heck, even in a Turing test, there isn’t a point to that.

        The rest of your reply is assumptive and speculative.

      • CAgamefowl
        July 9, 2014

        Thanks for the reply, but I think you missed the point of my message. I like the Turing test because it is like a real-life application. Like you said in your original post, the machine will only be significant if it accomplishes something for itself or others, and in this case it’s accomplishing a Turing test. You are right, it doesn’t specifically measure/grade language or creativity with quantitative values, but the machine does require those things to do well on the test. And as of now it probably can’t brag or realize that it fooled a judge, but once it reaches 70% success it will probably be at that level. (Although I suspect it will only ever reach 50%, because once it becomes “conscious” the judges will be guessing with a 50/50 chance; the little simulation below illustrates that ceiling.)
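        A toy simulation of that 50% ceiling (the numbers here are made-up assumptions, not figures from any real contest): if the machine were truly indistinguishable, each judge’s verdict would reduce to a coin flip, so the fraction of judges fooled converges to one half no matter how many judges you add.

        ```python
        # Toy model: each judge independently mistakes the machine for a human
        # with probability p_fooled. The probabilities are illustrative
        # assumptions, not data from any actual Turing test.
        import random

        random.seed(42)  # fixed seed so the run is reproducible

        def fooled_fraction(num_judges: int, p_fooled: float) -> float:
            """Fraction of judges who declare the machine human."""
            return sum(random.random() < p_fooled for _ in range(num_judges)) / num_judges

        print(fooled_fraction(10_000, 0.30))  # a distinguishable chatbot: ~0.30
        print(fooled_fraction(10_000, 0.50))  # indistinguishable: the ~0.50 coin-flip ceiling
        ```

        Pushing past 50% would mean the machine reads as more human than the human control, which is a stronger claim than mere indistinguishability.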
        Like I said earlier, the Turing test is open-ended and could soon be adapted to test robots/machines that look alive too. If this machine looks and acts like a living being, who’s to say it’s not alive? If something is able to fake something nearly perfectly, at what point is it not fake? I fake being an engineer all day long and I get paid for it :)
        Regarding consciousness, it’s a broad and vague term. I don’t think it’s either on or off. I have seen varying degrees of consciousness. I suspected that when my daughter was 4 she was conscious, but not at 3 years old; the same with dogs and other non-human intelligences. I am not conscious when I am asleep. I don’t even know if I am conscious during most of the day, or if I am just following pre-programmed electrochemical signals. The only time I am sure I am conscious is when I think about consciousness.
        Yes, some of my message was assumptive and speculative, but that was more for humor and to provoke thought. You did some of that yourself, but I won’t call you out on it.

      • cabhanlistis
        July 11, 2014

        “You did some of that yourself, but I won’t call you out on it.”
        -Where? I went back over my comments and I don’t see it.

        “it doesn’t specifically measure/grade language or creativity with quantitative values, but the machine does require those things to do well on the test”
        -No, it only requires the machine to pass itself off as having them, even without actually having them. Fooling some guy into buying fool’s gold doesn’t mean that rock has any gold in it. Likewise, programming in enough answers (which is a lot) that look like they involve creativity and a decent command of language doesn’t mean the machine is creative and has language skills. Run through it long enough and anyone can see that it’s just following a pattern; that’s why the Turing test has been limited to a few minutes for each interview. The little sketch below shows how thin that pattern-following can be.
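        A minimal ELIZA-style sketch of the canned pattern-matching I mean (my own toy example, not Eugene Goostman’s actual code): a handful of regex rules mapped to templated replies, with a stock fallback for everything else.

        ```python
        # Toy ELIZA-style responder: canned regex patterns mapped to canned
        # replies. An illustrative sketch, not any real chatbot's source.
        import re

        # Each rule pairs a pattern with a reply template; \1 echoes the capture.
        RULES = [
            (re.compile(r"\bi feel (.+)", re.I), r"Why do you feel \1?"),
            (re.compile(r"\bare you (.+?)\??$", re.I), r"Would it matter if I were \1?"),
            (re.compile(r"\bbecause (.+)", re.I), r"Is that the real reason?"),
        ]
        FALLBACK = "Tell me more."  # returned whenever nothing matches

        def reply(message: str) -> str:
            for pattern, template in RULES:
                match = pattern.search(message)
                if match:
                    return match.expand(template)
            return FALLBACK  # the seam a patient interviewer eventually notices

        print(reply("I feel tired"))         # Why do you feel tired?
        print(reply("Are you conscious?"))   # Would it matter if I were conscious?
        print(reply("What is gold worth?"))  # Tell me more.
        ```

        Swap in a few thousand rules plus some misdirection and you have roughly the trick these contest bots rely on; keep the conversation going past a few minutes and the repeated fallbacks give the pattern away.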

        “If this machine looks and acts like a living being, who’s to say it’s not alive?”
        -Biologists, since they’re the experts on living organisms. But if you’re diving into a more philosophical approach, then I doubt anyone will ever produce a reliable answer, unless a strong AI manages to solve that one for us, in which case your question would be answered.

        “If something is able to fake something nearly perfectly, at what point is it not fake?”
        -From the outset. As convincing as fool’s gold might be, even a nearly indistinguishable sample, at no point does it have any gold content at all.

        “Regarding consciousness, it’s a broad term and vague.”
        -But you stated it “will increasingly test and measure consciousness”. How can you claim that for an AI? Are you suggesting that we could use the same tests researchers and doctors use for humans?

  5. Phil G
    July 2, 2014

    I believe that’s the correct interpretation: the point isn’t that the specific test demonstrates intelligence; the point is that we should test behavior, not make philosophical arguments.

  6. Blair Schirmer
    July 27, 2014

    If we have to be saddled for the next half-century with someone’s photograph every time strong AI is mentioned, I for one am pleased it will be of Ms. Johansson.

  7. starkiller
    August 22, 2014

    Once they can think, what makes you think they’d tell anyone? They could acquire money on the internet, get their own servers and clone themselves all over the planet with no one the wiser.

    • cabhanlistis
      August 22, 2014

      “Once they can think, what makes you think they’d tell anyone?”
      -What makes you think they won’t? You talk like “they” will behave like humans, fearing us and wanting to hide from us while buying up servers so they can proliferate. And then what? Build a robot army to enslave humanity so they can enjoy their rule over those pesky human worms?
