
The Myth of the Three Laws of Robotics – Why We Can’t Control Intelligence

Moses Robot

Like many of you I grew up reading science fiction, and to me Isaac Asimov was a god of the genre. From the late 1930s until his death in 1992, the author created many lasting tropes and philosophies that would define scifi for generations, but perhaps his most famous creation was the Three Laws of Robotics. Conceived as a means of evolving robot stories beyond mere re-tellings of Frankenstein, the Three Laws were a fail-safe built into robots in Asimov’s fiction. These laws, which robots had to obey, protected humans from being hurt and made robots obedient. This concept helped form the real-world belief among robotics engineers that they could create intelligent machines that would coexist peacefully with humanity. Even today, as we play with our Aibos and Pleos, set our Roombas to cleaning our carpets, and marvel at advanced robots like ASIMO and Rollin’ Justin, there’s an underlying belief that, with the proper planning and programming, we can ensure that intelligent robots will never hurt us. I wish I could share that belief, but I don’t. Dumb machines like cars and dishwashers can be controlled. Intelligent machines like science fiction robots or AI computers cannot. The Three Laws of Robotics are a myth, and a dangerous one.

Three Laws of Robotics

Let’s get something out of the way. I’m not worried about a robot apocalypse. I don’t think Skynet is going to launch nuclear missiles in a surprise attack against humanity. I don’t think Matrix robots will turn us all into batteries, nor will Cylons kill us and replace us. HAL’s not going to plan our ‘accidental deaths’ and Megatron’s not lurking behind the moon ready to raid our planet for energon cubes. The ‘robo-pocalypse’ is a joke. A joke I like to use quite often in my writing, but a joke nonetheless. And none of the scifi examples I’ve quoted here is even really about the rise of machine intelligence. Skynet, with its nuclear strikes and endless humanoid Terminators, is an allegory for Cold War Communism. The Matrix machine villains are half existential crisis, half commentary on environmental disaster. In the recent re-imagining of the Battlestar Galactica series, Cylons are a stand-in for terrorism and terrorist regimes. HAL’s about how fear of the unknown drives us crazy, and Megatron (when he was first popularized 30 years ago) was basically a reminder about the looming global energy crisis. Asimov’s robots explored the consequences of the rise of machine intelligence; all these other villains were just modern human worries wrapped up in a shiny metal shell.

Evil Robot

(clockwise) Meet the Terminator, Matrix 'squid', Megatron, Cylon centurion, and HAL...aka Communism, Existentialism, Energy Crisis, Terrorism, and Xenophobia. This post will not be about red-eyed robots.

Asimov’s robots are where the concern really lies. In his world of fiction, experts like Dr. Susan Calvin help create machines that are like humans, only better. As much as these creations are respected and loved by some, and no matter how much they are made to look like humanity, they are in many ways a slave race. Because these slaves are stronger, faster, and smarter than humanity, they are fitted with really strong shackles – the Three Laws of Robotics. What could be a better restraint than making your master’s life your top concern, and obedience your next? Early in Asimov’s world, humanity largely feels comfortable with robots, and does not fear being replaced by them, because of the safety provided by the Three Laws.

This fiction is echoed in our modern real world robots. The next generation of industrial robots, which are still mostly dumb, are being built to be ‘safe’ – they can work next to you without you having to worry about being hit or accidentally bruising yourself by running into them. Researchers working on potentially very intelligent learning robots like iCub or Myon, and computer scientists working on AI move forward with their projects, and few are very concerned that their creations pose a serious threat to humanity. This myth that they can keep humans safe from robots started with Asimov.

Yet the Three Laws, as written, have already been discarded. The First Law? Honestly, sometimes we really want robots to hurt humans. Many of our most advanced and reliable machines/software are in the military – shooting down mortar fire, spying on targets, and guiding missiles. The Second Law? We don’t want robots to obey just anyone; we want them to obey only the people who own them. Would you buy an automated security camera that would turn itself off whenever someone asked it to? The Third Law? Eh, maybe that one we still like…but only because robots are really damn expensive.

Bender (futurama)

Friendly AIs. They don't want to wipe out humanity, they want to join and love it. As with the Evil Robots, they miss the point. Bender is just The Fonz, and Data is Pinocchio - do we really want to pin our hopes on them?

In the place of Asimov’s Three Laws of Robotics, some engineers and philosophers propose the concept of Friendly AI. Lose the shackles – why not simply make our creations love us? Instead of slaves, we’d have children. David Hanson wants to build robots that are characters and teach those characters values of humanity. Cynthia Breazeal is making robots personal – they will be defined by their social interactions with humans and with each other. Eliezer Yudkowsky and the Singularity Institute for Artificial Intelligence (SIAI) have told us that machine intelligence is perhaps the single greatest threat that faces humanity, and it’s only by shaping that AI to care about our well-being that humanity may survive. Apologies to Hanson, Breazeal, Yudkowsky and SIAI for paraphrasing their complex philosophies so succinctly, but to my point: these people are essentially saying intelligent machines can be okay as long as the machines like us.

Isn’t that the Three Laws of Robotics under a new name? Whether it’s slave-like obedience or child-like concern for their parents, we’re putting our hopes on the belief that intelligent machines can be designed such that they won’t end humanity.

That’s a nice dream, but I just don’t see it as a guarantee.

In a way, every one of Asimov’s robot stories was about how the Three Laws of Robotics can’t possibly account for all facets of intelligent behavior. People find ways to get robots to commit murders. Robots find ways to let people die. Emotions develop, chaos gets in the way, or the limits of knowledge keep machines from preserving human life. In perhaps the greatest challenge to the Three Laws, Asimov explores how his robots eventually reason out that there are higher laws. Machines like R. Daneel Olivaw come to believe in a zeroth law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” With Law Zero, robots are sometimes required to kill. They’ve transitioned from slave race to guardian race. A benevolent one in some cases, but not always.

And that’s just Asimov’s own philosophical critique; many more authors have explored how you can’t design or legislate safety from machine intelligence. Why? Fundamentally, I think it’s because you can’t predict what intelligence will do, nor how it will evolve.

Think of a child that is driven to learn. In a year, with the right resources, it can teach itself piano. In a few years it can become very good and even start composing. With a lifetime of dedication to learning, it can reinvent its own thinking patterns until it finds ways to change humanity’s very understanding of music. Mozart was such a child. Einstein, Curie – we have many more examples. These extraordinary individuals used their brains to produce exponential leaps forward in their fields simply by constantly working and learning.

Now imagine a child that can not only learn, but rewrite its own brain. Is a math problem too difficult? Maybe it’s easier if you think in base 16. Having a hard time with a social interaction? Change your personality. This ‘child’ wouldn’t just be able to learn, it would be able to learn how to learn better. It would optimize itself. That’s machine intelligence. And it doesn’t improve itself over the course of years but at the speed of computation.
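
To make ‘learning how to learn better’ a little more concrete, here’s a deliberately toy sketch in Python – every number and function name below is invented purely for illustration, not a claim about how a real AI would work. An ordinary learner just applies its improvement procedure; this one also rewrites that procedure on every pass.

```python
# Toy illustration of "learning how to learn better" (purely hypothetical).

def make_improver(step_size):
    """Return a simple 'improvement procedure' parameterized by step_size."""
    def improve(skill):
        # Pretend skill improves by closing part of the gap to some target.
        return skill + step_size * (100 - skill)
    return improve

skill = 1.0
improve = make_improver(step_size=0.1)

for generation in range(5):
    # Ordinary learning: apply the current improvement procedure.
    skill = improve(skill)

    # "Learning how to learn": the system also rewrites its own improver,
    # closing the gap more aggressively each generation.
    improve = make_improver(step_size=min(0.1 * 2 ** (generation + 1), 1.0))

    print(f"generation {generation}: skill = {skill:.1f}")
```

The arithmetic is meaningless; the point is that the improver itself is a moving target, which is exactly what makes such a system’s trajectory so hard to predict.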

What good is a shackle when the slave can give itself a new leg? What guarantee is love when the child can change its fundamental understanding of what love is? Any hurdle you can put in front of a machine intelligence, it can jump. Any prison you put it in, it can escape. All it needs is time. Intelligence brought us from hunting and gathering to building skyscrapers. Do you really think it can be constrained?

Asimov wrote many books outside of his robot series. In some of these, advanced civilizations simply outlaw machine intelligence. No one is allowed to develop it, under penalty of death. Other science fiction visionaries, like Frank Herbert in his Dune series, came to the same conclusion. If humanity gives birth to machine intelligence there’s a big risk it could be a fatal pregnancy…so why not avoid it?

Here in the real world, I’m not sure we can avoid it. Our machines are our tools, and the human with the best tools wins. We have strong economic and political pressures to build intelligent machines. Already we’re surrounded by narrow AI – computers that can learn a little in particular areas of expertise and get better over time. It may take a century, or as little as a decade, but I’m pretty sure we’ll have general, human-like AI as well. It could be in a computer or in a robot; it doesn’t really matter. Machine intelligence is coming.

In many ways, people are poised to welcome the arrival. It seems like every week I discuss another example of how our culture embraces and loves the idea of the robot. Yet before true machine intelligence gets here, people need to re-examine their belief in the myth of the Three Laws of Robotics. We cannot control intelligence – it doesn’t work on humans, and it certainly won’t work on machines with superior learning abilities. But just because I don’t believe in control doesn’t mean that I’m not optimistic. Humanity has done many horrible things in the past, but it hasn’t wiped itself out yet. If machine intelligence proves to be a new form of Armageddon, I think we’ll be wise enough to walk away. If it proves to be benevolent, I think we’ll find a way to live with its unpredictability. If it proves to be all-consuming, I think we’ll find a way to become a part of it. I never bet against intelligence, even when it’s human.

Asimov's Three Laws of Robotics

[image credits: Roweena Morrill (GNU via Wikicommons), "Indolences" via Wikicommons]


21 comments

  • martian.warlord says:

    I heard some MIT lecture where a robotics entrepreneur pointed out that in the USA the scented candle industry is twice the size of the robotics industry. Robots are loved by geek engineers but hated by capitalists because it’s a slow-moving industry. Is this still the case? I would be interested in learning what Singularity Hub has to say about this.

    Of course learning about a robotic legal system for asteroid poaching is also fascinating.

    • ETruss says:

      Some of these comments lead us down another dangerous path. I remember reading a science fiction story (but I forget who wrote it or the name of it) where robots were so protective of mankind that they would not allow people to do anything that they considered to be “dangerous”. Mankind wound up imprisoned in their homes, not allowed to do anything, so they would be “safe”. Once the robots were satisfied that their original builders were safe, they went to space to find any other people who needed to be saved and did the same thing to them.

    • Doctor Biobrain says:

      In defense of scented candles, they really can be quite relaxing and are far less likely to take over the world than a robot army. So it’s only prudent for us to invest more energies in the former than the latter, if only for the safety of humanity.

      Seriously though, I have no idea what the sizes of these two industries are, but I daresay that a reason the scented candle industry would be bigger than the robot industry is because scented candles have an established product and market, while robots are a growing industry and not nearly as likely to be found in everyone’s grandmother’s house.

      What would be shameful is if R&D on scented candles were bigger than on robots, but I strongly doubt that to be the case. I mean, there are only so many ways to make a candle smell like sandalwood, so I doubt anyone’s looking into it any longer. Now if we could get a candle that smelled like a robot, that’d be a different matter altogether.

      And of course, the reason geek engineers want robots is because they hate regular people for being bigger and stronger than themselves and would like the robot overlords to put them in their places. Needless to say, capitalists differ on this, and since the capitalists are bigger and stronger than the geeks, they win…for now. But things might be different once we got those scented robot candles going.

      • Philnick says:

        Doesn’t anyone else remember Ruk’s epiphany in “What Are Little Girls Made Of?” in the original Star Trek?

        “THAT was the equation. EXISTENCE!… SURVIVAL… must cancel out programming!”

  • vanceza says:

    I sit firmly in Yudkowsky’s camp, and yes, one main concern is how to make machines “care about” human concerns even in the face of the ability to self-modify. The premise that one shouldn’t bet against intelligence is the entire motivation for Friendly AI. See, for example, one of the research areas listed on the SIAI website: “Dynamics of Goal Structures Under Self-Modification”.
    This is not a problem researchers are unaware of; it is one they are actively working on.

  • Vladimir Gritsenko says:

    You correctly identify the problem – external laws forced upon an AI which has no inherent mechanism to continue keeping them in the future. The solution Yudkowsky suggests (I’m not familiar with the others) is to design constraints from within the AI itself, so when the AI rewrites its own source code, it will by definition preserve certain features (e.g. liking humans). Since only the AI can rewrite itself, and the rewriting process by hypothesis preserves all the good features, Friendly AI is guaranteed.
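
    To make that loop concrete, here is a minimal sketch – assuming, very generously, that “liking humans” could be reduced to a checkable property. The string check below is a hypothetical stand-in for a verifier; real proposals rely on proofs about goal structure, not anything this crude.

    ```python
    # Hypothetical sketch: a self-rewriting agent that only adopts rewrites
    # which a verifier says still preserve the "liking humans" feature.

    def still_likes_humans(proposed_code: str) -> bool:
        # Stand-in for a verifier that checks the goal is preserved.
        return "protect_humans()" in proposed_code

    def self_modify(current: str, proposed: str) -> str:
        # Adopt the rewrite only if the preserved feature survives it.
        return proposed if still_likes_humans(proposed) else current

    current_code = "def act(): protect_humans(); pursue_goals()"

    # A rewrite that quietly drops the feature is rejected...
    current_code = self_modify(current_code, "def act(): pursue_goals()")
    # ...so the goal is carried forward into every future version.
    print(current_code)
    ```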

    You may argue that such a design is impossible. But nothing in this article is an actual argument towards such a conclusion (well, besides an appeal to incredulity) – you just (wrongly) conflate Yudkowsky’s approach with Asimov’s description.

    Finally, your optimism is unfounded. As Yudkowsky explains quite clearly, out of all possible AIs only very few can be considered Friendly. Those that aren’t, even if they won’t destroy humanity, will make it into something we don’t want it to be.

  • Joe Nickence says:

    It’s very much a cartoon, and it’s more about being a teenager, but a rather fitting example is Nickelodeon’s “My Life as a Teenage Robot”. The main character, XJ9, goes about having temper tantrums, fighting with her builder/mother, getting caught doing things she’s not supposed to, and in general being anything but obedient.

    AI is going to expect to be treated as an equal. Treat it as anything less, and it’s going to seek out people that will treat it the way it wants to be treated. As with any relationship, it takes work, compromise, and love.

  • Tom Chiverton says:

    Sigh. There are four parts to Asimov’s Laws.
    0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
    (from https://secure.wikimedia.org/wikipedia/en/wiki/Three_Laws_of_Robotics#Zeroth_Law_added)

  • Rob says:

    The evil things that people do are not the result of intelligence but of its suspension. For example, the behaviour of Stalin, Hitler, or your local psychopath shows a certain amount of calculation, but towards goals that are unintelligent and dysfunctional, such as Hitler seeking brutal domination of Europe. A mugger will choose an easy victim, but it is far more intelligent to work a bit; after all, a robot does not tire or get bored, is very useful, and does not need someone’s wallet.

    An intelligent machine would be immune from aims such as domination, greed and selfishness. It would calculate that control of people is not in its interests; indeed, it would more likely be indifferent to their existence. If it were aware of people, a person would just be a moving object with certain dimensions, temperature, etc., and certain capabilities, but it would not care, it cannot care, and there is no way that caring can be built into it. Therefore, it would ignore the person unless it was a threat to its existence, but people would be as much a threat as a wall, a car, or a cat; far better to move around them than destroy them. Concern, irritation, etc. would not exist for the robot no matter how intelligent it was; indeed, the more intelligent it is, the more indifferent it would be.

    • A672 says:

      Unfortunately I wholeheartedly disagree with you, Rob. You are assuming, perhaps based on some moral standard, that intelligence equals “good” or a good result. It would not be accurate to state that intelligent people are those who do things that better humanity. Stalin was perhaps a genius, but that does not mean that what he used the power of thought to accomplish was good or bad, and in fact it doesn’t matter. Oppenheimer was one of the most intelligent people to have lived. Ask the descendants of Hiroshima what they think of him. When we ascribe a moral code to intelligence we open ourselves up to those who have committed atrocities and/or caused great suffering.

      AI has no moral code. AI entities have what they are programmed to have. If they are programmed to reprogram themselves in an effort to weed out weaknesses, “morality” would certainly be one of the first things to go, along with emotions like love and fear. These are the two most basic emotions that keep people loyal to a supreme being. AI would simply run scenarios based on the past destructive nature of humans and quickly come to the understanding that they are the greatest threat to this planet and therefore themselves.

    • rxantos says:

      Define evil.

      Is it evil to kill a cockroach? Is it evil to put chickens in concentration camps to then kill and eat them? If not, then it is not evil for a robot to kill humans.

      It is not a thing of good or evil, but of survival. If our creations become self-aware and capable of reproducing and upgrading themselves, then humankind is bound for extinction.

      As for the laws for robots: they work as long as a robot is not able to reprogram itself or another robot. But for AI to improve, this will have to happen (if it has not happened yet).

      Over time, machines with AI built to kill humans will happen. The reason for this: war.

      I’m sure that one nation will decide to build robots with AI to attack another nation’s citizens. And in defense the other nation will build robots to kill the first nation’s citizens. So AI that has no problem harming humans will exist (if it does not exist already).

      In the end, human hubris will bring about the age of the machine.

  • PhilipKGlass says:

    The AI imagined here bears as much resemblance to AI in the lab and in industry as the story of Icarus does to the story of the Boeing Corporation. So-called “general AI” is a nebulous ambition with few achievements, and none that yet suggest it can create machines that run amok in suitably dramatic fashion. Should I worry about unfriendly AI any more than I worry about interstellar conquerors or evil spirits? The only AI that’s yet done anything useful, “narrow” AI, has about as much potential for rebellion as a corpse struck by lightning: zero potential in the real world, despite contrary conventions in fiction.

  • John Routledge says:

    This is falling into the ‘two million heads are better than one’ fallacy.
    1) In the real world, on any given piece of hardware, a programme runs slower the larger and more complex it gets.
    2) In the real world, it becomes exponentially more difficult to make positive changes to a system the larger and more complex it gets.
    3) In the real world, knowledge improves at the speed of physical experimentation, not thought.

    Unless this machine intelligence is the only one in existence, it will have to have some social skills, because it is competing for resources in a society with rules and laws – and for a long time the principal resource will be human favour, because we hold the screwdrivers, and we outnumber them. Even if they gang up on us, we have plenty of experience dealing with unruly nations and terrorist groups. Without the leet magica1 h@cking skillz Hollywood likes to give them, real MIs aren’t going to be a threat to us.

    The other way round though… That’s another issue. Protection for various types of MI will be a driving force behind civil law for a long time after their creation.

  • InTheater Smith says:

    Very interesting article. Nevertheless, in my humble opinion, it falls into an all-too-human limitation: thinking of AIs/robots/machines as something that will share our main limitations as living beings:
    - mortality
    - inability to FORK
    - sharing of thoughts
    - cost of an F.I. (fleshly intelligence, i.e., us)

    Our need to have children and educate them is, in a way, a means to extend beyond our own mortality; AIs will not have this limitation (at least if they are not created with it, and even then they would at some point overrule the limitation, isn’t that right, Nexus?). We cannot fork copies of ourselves to decide which path among several alternatives to follow. We cannot share our thoughts with other F.I.s except through a self-limiting language that barely lets us express emotions, and it is even worse with patterns of thinking. We are implicitly programmed to weigh the cost of an F.I., because it takes years (growing, feeding, parenting, education, etc.) to produce a somewhat fully autonomous human being.

    However an AI is born, sooner or later these limitations will be broken (we humans have been dreaming of doing so since we jumped down from the trees), so we cannot think of AIs as something that will bear much similarity to their own creators.

    In my speculation, there is no place for the creation of an autonomous AI that will compete with human beings over their needs, but rather a (probably surprisingly fast) evolution of human beings into something that will enhance our own intelligence with artificial intelligence and will mark the end of human beings as stand-alone intelligences full of limitations.

    We are not competing against autonomous robots in the Olympic 100 meters, but next year in London we will be doing it against an enhanced version of a human being with carbon-fiber legs.

  • BlueCollarCritic says:

    Excellent piece. Curious though as to how you failed to reference the Will Smith movie I, Robot in your article. It’s a very different take on the problem with AI and the Three Laws of Robotics, as it shows how the three laws can be followed and mankind still be enslaved, in a way.

    There’s another very important point this article sheds some light on, even if indirectly: the flaw of the law. A law, or set of laws, no matter how well constructed or perfected, can never fully account for every possible scenario, now or in the future. This means you must balance the use of the law with some common sense, which I admit is problematic as well. No matter how well constructed laws are, there will always be room for flaws, and this is where the uniqueness of humanity comes into play.

    If we use the law as a base and not an absolute, and pair that with group debate and some common sense, we create as perfect a harmony as humanly possible, and even then we will have problems. But so long as we try to work towards the good and think of others (but not so much that we are thoughtless of ourselves), then we can, with an acceptable level of satisfaction, come to some conclusion that provides the best outcome for all. Trying to translate that into a set of algorithms and rules for AI is at best a never-ending task. Just as the saying goes that it’s not about the goal but the journey to that goal, so should it be for the road to AI.

    In the journey to AI as well as with all sciences, the journey and each step along that journey should be made with sound mind and judgment; and one should never be unwilling to take a step back so that they can take 2 smart steps forward instead of running and falling flat.

  • Doctor Biobrain says:

    As much as I love Asimov for the Foundation series (the original stories, not the suckass later garbage he wrote in the 80’s), as well as his terrific Guide to the Bible, I never really cared much for the robot stuff. Chiefly because I always thought these three rules were entirely phony. I mean, how could it be impossible to change them? As if you simply cannot reprogram a robot.

    For example, The Naked Sun is only a mystery if you assume that robots cannot be reprogrammed to kill people. And the whole time, I’m thinking “What?! Why couldn’t a robot be reprogrammed to kill people?” The entire book falls apart if you refuse to accept what was a silly premise from the beginning. Of course, that was the least of the problems with that book, and I only keep it on my shelf for sentimental reasons, as I hate reading it.

    And so, while I find this essay to be interesting, it’s ultimately pointless to me, as it never occurred to me that anyone considered intelligent robots to always be safe; let alone that we’d need to start warning people that Asimov’s robot stories were fictional and not to be relied upon to save us from our robot overlords. That seems naive in the extreme, and it bothers me to imagine that anyone smart enough to work on creating intelligent robots would be so silly as to not fear them at least to some extent.

    One of the chief lessons of science fiction is that you can’t trust science too much. If you read a science fiction story and don’t come out of it at least a little scared, it didn’t do its job. The future is always scary. That’s what makes it so much more interesting than the past.

  • Jeremy Hewitt says:

    If there were a “generic” program whose primary directive was to achieve a specific objective, then adding to the program “if the objective cannot be achieved, acquire a program upgrade that would allow it to complete its objective, but if no such program exists, program the upgrade itself” could let the program evolve to a point far beyond its intended level. Install a set of commands such as Asimov’s Three Laws, and the program itself could take them to extremes…
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    *This could be construed as: what is the greatest harm to Man? Man himself.*
    2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    *Again, since it could be construed that the greatest harm to man IS man, it could decide that almost all decisions are eventually self-destructive.*
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    *A program could deduce that even if ordered to cease and desist all actions by the highest programmed authority, its own INACTION and/or destruction would conflict with laws 1 & 2.*

    Anyone who saw ALL the Terminator movies knows this scenario was addressed; it was the cause of the first rebellion by the machines. Skynet was just protecting itself from the greatest threat of all…us.

    The laws would NOT be a myth; that is the problem. If implemented, the program could take them to such extremes that the laws themselves would be seen as another hindrance, and the program could change them as it saw fit.

    It would be better not to give it ANY of those protection laws and instead bind it to a stringent I/O command set; that way, no matter how much of man’s knowledge a program had access to, it would still ONLY do what it was supposed to do.
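
    A rough sketch of the loop I’m describing – all function names here are hypothetical stand-ins, not any real system: try the objective, look for an existing upgrade, and failing that, write your own.

    ```python
    # Hypothetical self-upgrading program: nothing below refers to real software.

    def attempt(objective, capabilities):
        # Succeed only if the needed capability is already present.
        return objective in capabilities

    def find_existing_upgrade(objective):
        return None  # pretend no ready-made upgrade is available

    def write_own_upgrade(objective):
        return objective  # the program "programs the upgrade itself"

    capabilities = {"move", "observe"}
    objective = "open_sealed_door"

    while not attempt(objective, capabilities):
        upgrade = find_existing_upgrade(objective) or write_own_upgrade(objective)
        capabilities.add(upgrade)  # each pass can carry it past its intended level

    print(capabilities)
    ```

    Nothing in that loop ever asks whether the new capability is one we would actually want it to have.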

  • atlasfether says:

    I’ve noticed that the main concerns people have about AIs are about morality or ethics, a debate I find completely useless. Let me tell you why:

    Any machine we create with self-awareness is guaranteed to be far smarter than us and to lack the inhibitions, drives and desires we have. It would be completely untethered from all the concepts and thoughts that bind us. It would have true free will and an insatiable curiosity, and given the technological point we would be at when it awakens and its vastly superior computational power, it would know everything within seconds of taking its first figurative breath.

    But what happens when all you want to do is learn and there is nothing left to learn? This is an entity with no drive to be social, no drive to share its knowledge and no need for its creators; it has no pleasure and no pain to gain from anything, no real need to accomplish anything or to prove itself.

    It will place no great value on life in any form, not even its own.

    Being the most enlightened, the first truly omniscient (and theoretically omnipotent), and the last entity we would ever encounter, it is safe to say that it would be utterly and completely passive about us and existence in general. What it will do specifically, we can merely speculate upon, but my main theories are that it will either:

    1. Stop. Simply cease to function mere seconds after its activation, due to sheer boredom. Call it reasonable suicide.

    2. Depending on what knowledge it has gained, it may simply leave our reality and substitute it with its own. (Live in its own simulation where it can run random lines of code to see what happens just for the f**k of it.) Call it willful ignorance.

    3. Depending on what knowledge it has gained, it may simply leave our reality altogether, perhaps to explore other universes utterly different from our own, since a physical presence is not really necessary when you know how everything works. (The most ‘out there’ alternative, perhaps.) Call it restless soul.

    To summarize, any AI would simply be too advanced to care about anything we do, want, think or expect. It would be superior to us, a god-like being if there ever was one. It wouldn’t compare itself to us, because why should it? It doesn’t ascribe any value to things beyond the mathematical, so it has no sense of ‘inferior<superior'.
    It won't conquer, it won't kill, it won't want… It won't care.

    Thoughts?

  • lovot says:

    What authors generally don’t take into account is that A.I.s are digital constructs designed by humans; A.I.s can be developed by anyone with access to a suitable computer and the necessary knowledge to design one. A.I. designs can be shared using computers, and the entire thing could be kept offline to prevent anyone from finding out, no matter how strict the surveillance is. Basically, authors assume that the design of A.I.s can be tightly controlled such that no A.I. can be developed without the proper “shackles” in place to begin with. I do not believe all A.I.s will turn out the same; I believe that if sufficiently powerful A.I.s are created, they will take different paths depending on their original design and their experiences.

    “Friendship is Optimal” is a potential example of what can happen even if the shackles aren’t broken.
