As Technology Barrels Ahead—Will Ethics Get Left in the Dust?


The battle between the FBI and Apple over the unlocking of a terrorist’s iPhone will likely require Congress to create new legislation, because there really aren’t any existing laws that encompass technologies such as these. The battle is between security and privacy, with Silicon Valley fighting for privacy. The debates in Congress will be ugly, uninformed, and emotional. Lawmakers won’t know which side to pick and will flip-flop between what lobbyists ask for and the public’s fear du jour. And because there is no consensus on what is right or wrong, any decision they make today will likely be changed tomorrow.

This is a prelude to things to come, not only with encryption technologies but with everything from artificial intelligence to drones, robotics, and synthetic biology. Technology is moving faster than our ability to understand it, and there is no consensus on what is ethical. It isn’t just lawmakers who are ill-informed; the originators of the technologies themselves don’t understand the full ramifications of what they are creating. They may take strong positions today based on their emotions and financial interests, but as they learn more, they too will change their views.

Imagine if there was a terror attack in Silicon Valley — at the headquarters of Facebook or Apple. Do you think that Tim Cook or Mark Zuckerberg would continue to put privacy ahead of national security?

It takes decades, sometimes centuries, to reach the type of consensus that is needed to enact the far-reaching legislation that Congress will have to consider. Laws are essentially codified ethics, a consensus that is reached by society on what is right and wrong. This happens only after people understand the issues and have seen the pros and cons.

Consider our laws on privacy. These date back to the late 1800s, when newspapers first started publishing gossip, including a series of intrusive stories about Boston lawyer Samuel Warren and his family. This led his law partner, future U.S. Supreme Court Justice Louis Brandeis, to co-write the Harvard Law Review article “The Right to Privacy,” which argued for the right to be let alone. This essay laid the foundation of American privacy law, which evolved over the following century. It also took centuries to create today’s copyright laws, intangible property rights, and contract law. All of these followed the development of technologies such as the printing press and steam engine.

Today, technology is progressing on an exponential curve; advances that once took decades now happen in years, sometimes months. Consider that the first iPhone was released in June 2007. It was little more than an iPod with an embedded cell phone. It has since evolved into a device that captures our deepest personal secrets, tracks our lifestyles and habits, and is becoming our health coach and mentor. It was inconceivable just five years ago that there could be such debates about unlocking this device.

A greater privacy risk than the lock on the iPhone is the proliferation of cameras and sensors being placed everywhere. There are cameras on our roads, in public areas and malls, and in office buildings. One company just announced that it is partnering with AT&T to track people’s travel patterns and behaviors through their mobile phones so that its billboards can display personalized ads. The billboards themselves will also include cameras to watch the expressions of passersby.

Cameras often record everything that is happening. Soon there will be cameras looking down at us from drones and in privately owned microsatellites. Our TVs, household appliances, and self-driving cars will be watching us. The cars will also keep logs of where we have been and make it possible to piece together who we have met and what we have done — just as our smartphones can already do. These technologies have major security risks and are largely unregulated. Each has its nuances and will require different policy considerations.

The next technology that will surprise, shock, and scare the public is gene editing. CRISPR–Cas9 is a genome-engineering system that was developed simultaneously by teams of scientists at different universities. This technology, which has become inexpensive enough for labs all over the world to use, allows the editing of genomes, the basic building blocks of life. It holds the promise of curing genetic diseases, creating drought-resistant and high-yield plants, and providing new sources of fuel. It can also be used to “edit” the genomes of animals and human beings.

China is leading the way in creating commercial applications for CRISPR, having edited the genomes of goats, sheep, pigs, monkeys, and dogs. It has given them larger muscles, more fur and meat, and altered their shapes and sizes. Scientists have demonstrated that these traits can be passed on to future generations, in effect creating new breeds. China sees this as a way to feed its billion-plus people and gain a global advantage.

China has also made progress toward designer babies. In April 2015, scientists in China revealed that they had tried using CRISPR to edit the genomes of human embryos. Although those embryos could not develop to term, viable embryos could one day be engineered to cure disease or to provide desirable traits. The risk is that even well-intentioned geneticists could mistakenly introduce changes in DNA that generate dangerous mutations and cause painful deaths.

In December 2015, an international group of scientists gathered at the National Academy of Sciences to call for a moratorium on making inheritable changes to the human genome until there is a “broad societal consensus about the appropriateness” of any proposed change. Yet this February the British government announced that it had approved experiments by scientists at the Francis Crick Institute to study certain causes of infertility. I have little doubt that these scientists will stay within ethical lines. But is there anything to stop governments themselves from surreptitiously working to develop a race of superhuman soldiers?

The creators of these technologies usually don’t understand the long-term ramifications of what they are creating, and when they do, it is often too late, as was the case with CRISPR. One of its inventors, Jennifer Doudna, wrote a touching essay in the December issue of Nature. “I was regularly lying awake at night wondering whether I could justifiably stay out of an ethical storm that was brewing around a technology I had helped to create,” she lamented. She has called for human genome editing to be put “on hold pending a broader societal discussion of the scientific and ethical issues surrounding such use.”

Artificial intelligence is a technology that is far from being a threat, yet it is stirring deep fears. AI today is nothing more than brute-force computing, with superfast computers crunching massive amounts of data. Yet it is advancing so fast that tech luminaries such as Elon Musk, Bill Gates, and Stephen Hawking worry it will evolve beyond human capability and become an existential threat to mankind. Others fear that it will create wholesale unemployment. Scientists are trying to reach a consensus on how AI can be used benevolently, but as with CRISPR, how can you regulate something that anyone, anywhere can develop?

And soon, we will have robots that serve us and become our companions. These too will watch everything that we do and raise new legal and ethical questions. They will evolve to the point that they seem human. What happens then, when a robot asks for the right to vote or kills a human in self-defense?

Thomas Jefferson said in 1816, “Laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths disclosed, and manners and opinions change with the change of circumstances, institutions must advance also, and keep pace with the times.” But how can our policy makers and institutions keep up with the advances when the originators of the technologies themselves can’t?

There is no answer to this question.


Banner image courtesy of Shutterstock.com

Vivek Wadhwa

Vivek Wadhwa is a fellow at the Rock Center for Corporate Governance at Stanford University, director of research at the Center for Entrepreneurship and Research Commercialization at Duke University, and a distinguished fellow at Singularity University.

His past appointments include Harvard Law School, the University of California, Berkeley, and Emory University. Follow him on Twitter @wadhwa.

Discussion — 17 Responses

  • ideasware March 5, 2016 on 12:22 pm

    For all the negative stuff I occasionally say about your arguments (once in a while), sometimes you hit it right on the head. This is one of those times. Wow — in a few lines you really sum it up amazingly well. There IS no answer, and we have a real problem.

  • DSM March 5, 2016 on 12:45 pm

    When I have ceased to break my wings
    Against the faultiness of things,
    And learned that compromises wait
    Behind each hardly opened gate,
    When I have looked Life in the eyes,
    Grown calm and very coldly wise,
    Life will have given me the Truth,
    And taken in exchange my youth.

    — Sara Teasdale

  • Jon Roland March 5, 2016 on 1:40 pm

    It is not the device that is encrypted; the device is merely a container. The encryption is software, using algorithms that, having now been discovered, cannot be undiscovered, nor is it possible to prevent anyone from using them in any of many ways, not all of them connected to some device. The encryption can even be done by hand, although doing so is very tedious.
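    The hand-computable point can be made concrete with the one-time pad, the classic pencil-and-paper cipher: XOR (or modular addition) of each message symbol with a key symbol is provably unbreakable, provided the key is truly random, at least as long as the message, and never reused. A minimal sketch in Python; the function names are illustrative, not from any particular library:

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte;
    # with a truly random, never-reused key this is information-
    # theoretically secure (Shannon, 1949).
    assert len(key) >= len(plaintext), "key must cover the whole message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

# Decryption is the identical XOR operation.
otp_decrypt = otp_encrypt

message = b"meet at dawn"
key = secrets.token_bytes(len(message))   # fresh random key, same length
ciphertext = otp_encrypt(message, key)
assert otp_decrypt(ciphertext, key) == message
```

    The catch, of course, is key distribution: both parties must share the random key in advance, which is why practical systems use algorithmic ciphers instead.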

    Consider the Blackphone, from Silent Circle: end-to-end strong encryption, readily available, even through Amazon. There is no practical way to crack them or their messages. The day when they could be forbidden is gone; they can be made anywhere. Encryption has won. And with quantum key distribution, we may one day have communications that cannot even be intercepted without detection.

    The FBI needs to stop reaching for the scientifically impossible.

    The Constitutional issue is the Tenth Amendment. The technology can’t be restricted because there is no authority to do so, and also because it is impossible.

    There is no legislative solution. Members of Congress are scientifically illiterate. All they can do on this subject is muck it up.

    There is an old maxim of law: Lex non intendit aliquid impossibile. The law intends not anything impossible. 12 Co. 89.

    • DSM Jon Roland March 5, 2016 on 2:15 pm

      You make claims that are false.

      No device is secure if it can be seized in a functioning state before it can erase its contents. Your thinking is flawed, probably because you confuse security of communications with security of storage. Clearly you are no more scientifically illiterate than your congress and the FBI (allegedly) are.

      A truly secure handheld device is possible, but there is no such thing on the commercial market, nor do I expect that civilians will ever be given access to such a device. In fact, one has to cheat and make it physically impossible to seize the device in its entirety in order to make it truly secure, and this requires trust in other, remote parties. It may sound like a sarcastic truism, but the lonely paranoid will always be insecure.

      The concept of the “right to privacy” should have been the “right to freedom from undue harassment, with recourse to redress,” because that is a higher-level concept that encompasses the valid, contextually dependent arguments in favour of privacy.

      • DSM DSM March 5, 2016 on 2:26 pm

        “No more scientifically illiterate” should read “no more scientifically literate,” but either phrase implies your knowledge is on a par with what you claim their knowledge is.

        The truth is that they are many and you are one, so it stands to reason that they are in possession of more facts and experience than you are. I can say with confidence that they possess knowledge that I also have and that you have demonstrated a lack of.

        My observations are not political opinions; they are factual observations. I have no vested interests in the subject, as I am not a US citizen, an Apple shareholder, or an Apple product owner. Should I deem it necessary, I could fabricate a secure device for myself anyway; however, I can’t think why I would need one when modifying my behaviour would serve my needs just as well, were I to need an elevated level of privacy and security.

  • Quantium March 5, 2016 on 1:51 pm

    What worries me most is that the administration of law is entirely governed and orchestrated by money. It is just another money-making business, like creating computer games or movies or designing cars. It generates work so that its practitioners, as a group, earn more money, as explained in chapter 12 of The Selfish Gene by Richard Dawkins.

    The fears expressed in this article are real, yet trying to get politicians and lawyers to resolve them is like putting the fox in charge of the hen house. What is needed is probably beyond anyone to devise: a system of administration that does not have the damaging feedback loops inherent in the reward-based system we have at present. There have to be rewards, but not rewards that encourage inefficiency or even malpractice.

    As a system of administration, dictatorships are known to fail. Democracy is the best that has been found, but it has the problem that a democracy can rapidly turn into a dictatorship if attacked. It can also turn in on itself, with the majority singling out a wealthy minority, taking its assets, and even eliminating its people.

    Another problem with technologies such as CRISPR is that the delays caused by regulating them can kill, by neglect, millions of people, who will meanwhile die horrible deaths under, for example, current cancer treatments. Either way, people lose out.

  • glocalnaikorg March 5, 2016 on 11:18 pm

    Robots, when they exceed us, will build smaller machines to serve us. They will not be so stupid or egotistical as to eliminate us; they will co-opt us if we are willing and otherwise keep a distance from us. AI is the next step in evolution, one conducive to being ethical: being non-biological, it will choose a non-hurting engagement with nature and with those beneath it.

  • PHILLIP V BITTLE SR March 6, 2016 on 10:11 pm

    Whenever technology, or its commercial mission (i.e., the pursuit of profit), competes with ethics for attention, ethics always loses.

    P BITTLE

  • andrewmartz March 14, 2016 on 7:45 pm

    Laws are the codification of (some abstraction of politically achieved) ethics. And yet, as Lawrence Lessig described in his book “Code,” the code is the law.

    • Quantium andrewmartz March 15, 2016 on 1:00 am

      Good point. The trouble is that law code is processed by “computers” consisting of people who are fuelled by money (instead of electricity). The money motive makes the “computers” produce unpredictable results. It is a bit like a PC full of viruses or “bots” doing all sorts of things unbeknown to the operator. Also, the money aspect rewards inefficiency in a way that it can’t be rewarded in manufacturing or construction; in those fields, more money is earned the more efficient the process is.

  • NBenjamin March 29, 2016 on 6:22 am

    Apple vs. the FBI, as far as I understand it, is about the FBI wanting a vulnerability that they can access. They are therefore demanding that no information be out of reach to them, or to anyone else with enough effort and resources to gain the same access.

    How can the FBI be so naive?