When Electronic Witnesses Are Everywhere, No Secret’s Safe

On November 22, 2015, Victor Collins was found dead in the hot tub of his co-worker, James Andrew Bates. In the investigation that followed, Bates was charged in February with first-degree murder; he has pleaded not guilty.

One of Amazon’s Alexa-enabled Echo devices was being used to stream music at the crime scene. Equipped with seven microphones, the device constantly listens for a “wake word” that activates a command. When it detects one, Echo begins recording audio, including a fraction of a second from just before the wake word, and streams it to Amazon’s cloud.
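
As a rough illustration of how this class of always-listening device tends to work (a minimal sketch, not Amazon’s actual implementation; every name below is hypothetical):

```python
from collections import deque

PRE_ROLL_FRAMES = 25  # ~0.5 seconds of 20 ms audio frames kept from before the wake word

def detected_wake_word(frame):
    # Stub standing in for an on-device keyword-spotting model (hypothetical).
    return False

def stream_to_cloud(frames):
    # Stub standing in for uploading audio to the vendor's servers (hypothetical).
    pass

def listen(microphone_frames):
    # Rolling buffer: audio is always passing through the device, but only
    # this short window is retained until a wake word is spotted.
    pre_roll = deque(maxlen=PRE_ROLL_FRAMES)
    for frame in microphone_frames:
        pre_roll.append(frame)
        if detected_wake_word(frame):
            # On detection, ship the buffered audio from just before the
            # wake word, then keep streaming until the command ends.
            stream_to_cloud(list(pre_roll))
```

The rolling pre-buffer is what lets vendors say the device isn’t “recording” until the wake word is heard, though, as Goodman notes below, that claim is hard to verify without seeing the code.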

On the night of the crime, it’s possible (but not certain) the device recorded audio that could help the investigation.

Police have requested that Amazon hand over Bates’ cloud-based customer data, but the company is refusing. Meanwhile, the case is raising big questions about the privacy implications of our always-listening smart devices.

Marc Goodman, a former LAPD officer and Singularity University’s faculty chair for policy, law, and ethics, is an expert on cybersecurity and the threats posed by the growing number of connected sensors in our homes, pockets, cars, and offices.

We interviewed Goodman to examine the privacy concerns this investigation is highlighting and the next generation of similar cases we can expect in the future.


If Alexa only records for a second after sensing a “wake word,” is that enough information to make a call on a murder case? If a human witness heard that same amount of information, would that be a valid source?

Absolutely. I don’t think it’s about the quantity of time that people speak.

I’ve investigated many cases where the one line heard by witnesses was, “I’m going to kill you.” You can say that in one second. If you can get a voice recording of somebody saying, “I’m going to kill you,” then that’s pretty good evidence, whether that be a witness saying, “Yes, I heard him say that,” or an electronic recording of it.

I think Amazon is great, and we have no reason to doubt them. That said, they say Echo only records when you say the word “Alexa,” but that means it has to be constantly listening for the word “Alexa.”

People who believe in privacy and don’t want all of their conversations recorded take Amazon at its word that this is the case. But how many people have actually examined the code? The code hasn’t been put out there for vetting by a third party, so we don’t actually know what is going on.

What other privacy concerns does this case surface? Are there future implications that people aren’t talking about, but should be?

Everything is hackable, so it won’t be long before Alexa gets a virus. There is no doubt in my mind that hackers are going to be working on that, if they aren’t already. Once that happens, could a compromised device end up recording everything you say in your home?

We have already seen these types of man-in-the-middle attacks, so I think that these are all relevant questions to be thinking about.

Down the road, the bigger question is going to be this (and I am sure that criminals will be all over it if they aren’t already): if I have 100 hours of you talking to Alexa, Siri, or Google Home, then I can create a perfect replica of your voice.

In other words, if I have enough data to faithfully reproduce your voice, I can type out any word into a computer, and then “you” will speak those words.
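
In code terms, the scenario Goodman sketches would look something like the pipeline below. This is a hypothetical sketch with invented function names, not a real library; actual voice-cloning systems involve trained speech models and far more data handling.

```python
# Hypothetical voice-cloning pipeline; all names are invented for illustration.

def extract_voice_profile(audio_samples):
    # Stub: learn the target's vocal characteristics from recorded speech.
    return {"speaker": "target", "samples": len(audio_samples)}

def synthesize(text, voice_profile):
    # Stub: render arbitrary text in the cloned voice.
    return f"<audio of {voice_profile['speaker']} saying {text!r}>"

# Hours of captured assistant audio stand in for the training data here.
captured_recordings = ["assistant_clip_001.wav", "assistant_clip_002.wav"]
profile = extract_voice_profile(captured_recordings)
fake_audio = synthesize("Transfer the money today.", profile)
```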

As a former police officer, do you have a specific stance on whether Amazon should hand over Bates’ customer data and whether customer-generated data like this should be used for criminal investigations?

Many years ago, when the first smart internet-enabled refrigerators came out, people thought I was crazy when I joked about a cop interviewing the refrigerator at the scene of a crime. Back then, the crime I envisioned was that of a malnourished child, wherein the police could query the refrigerator to see if there was food in the house or if the refrigerator contained nothing but beer.

Alexa is at the forefront of all of this right now, but what will become more interesting for police from an investigative perspective is when they’re eventually not interviewing just one device in your home but interviewing 20 devices in your home, in the very same way that you would question multiple witnesses at the scene of a homicide or a car crash.

Once you get a chorus of 20 different internet-enabled devices in your home—iPhones, iPads, smart refrigerators, smart televisions, Nest, and security systems—then you start getting really good intelligence about what people are doing at all times of the day. That becomes really fascinating—and foretells a privacy nightmare.

So I want to broaden the issue and say that this may be starting with Alexa, but it is going to be a much larger matter moving forward.

As to the specifics of this case, here in the United States, and in many democratic countries around the world, people have a right to be secure in their homes against unreasonable search and seizure. Specifically, in the US people have the Fourth Amendment right to be secure in their papers, their writings, etc. in their homes. The only way that information can be seized is through a court warrant, issued by a third-party judge after careful review.

Is there a law that fundamentally protects any data captured in your home?

The challenge with all of these IoT devices is that the law, particularly in the US, is extremely murky. Because your data is often being stored in the cloud, the courts apply a very weak level of privacy protection to that.

For example, when your garbage is in your house it is considered your private information. But once you take out your garbage and put it in front of your house for the garbage men to pick up, then it becomes public information, and anybody can take it—a private investigator, a neighbor, anybody is allowed to rifle through your garbage because you have given it up. That is sort of the standard that the federal courts in the US have applied to cloud data.

The way the law is written, your data in the cloud has a much lower standard of protection because you have already chosen to share it with a third party. In other words, since you disclosed it to a third party [like Google or Amazon], it is not considered your privileged data anymore. It no longer has the full protection of “papers” under the Fourth Amendment, due to something known as the Third Party Doctrine. It is clear that our notions of privacy and search and seizure need to be updated for the digital age.

Should home-based IoT devices have the right to remain silent?

Well, I very much like the idea of devices taking the Fifth. I am sure that once we have some sort of sentient robots, they will request the right to invoke the Fifth Amendment. That will be really interesting.

But our current devices are not sentient, and almost all of them are covered instead by terms of service. The same is true of an Echo device: the terms of service dictate what can be done with your data. Broadly speaking, 30,000-word terms of service are written to protect companies, not you.

Most companies, like Facebook, take an extremely broad approach, because their goal is to maximize the data they extract from you. You are not truly Facebook’s customer; you’re their product. You’re what they are selling to the real customers, the advertisers.

The problem is that these companies know nobody reads their terms of service, and so they take full advantage of people.

Five years from now, what will the “next generation” of these types of cases look like? 

I think it will be video, with ubiquitous cameras; we will definitely see more of these things. Recording audio and video is all happening now, but what might be five years out is re-creation: taking someone’s voice, for example, and recreating it so faithfully that even the person’s mom can’t tell the difference.

Then, down the road, with that same video and the data to understand us better than we understand ourselves, people will be able to carry out emotional manipulation. By that I mean people can use algorithms that already exist to tell when you are angry and when you are upset.
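
As a toy illustration of the kind of inference he means (real systems use trained models over much richer signals; the word lists and function below are invented purely for illustration):

```python
# Toy text-based mood inference; real systems use trained classifiers,
# not keyword counts. The word lists here are invented for illustration.
ANGRY = {"furious", "hate", "unfair"}
SAD = {"miserable", "lonely", "hopeless"}

def crude_mood_score(post):
    # Count how many words from each mood list appear in the post.
    words = set(post.lower().split())
    return {"anger": len(words & ANGRY), "sadness": len(words & SAD)}

print(crude_mood_score("I feel hopeless and lonely today"))
# {'anger': 0, 'sadness': 2}
```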

There was a famous Facebook study that came out and got Facebook in a lot of trouble. In the study, Facebook showed thousands of people a slew of really, really sad and depressing stories. What they found is that people were more depressed after seeing them: when Facebook shows you more sad stories, it makes you sadder, and when it shows you more happy stories, it makes you happier. This means you can manipulate people by knowing them [in this way].

Facebook did all this testing on people without clearing it through any type of institutional review board. But clinical research that manipulates people’s psychology has to be approved by a university or scientific ethics board before the study can be run.

MIT had a study called Psychopath, where, based upon people’s [Facebook] postings, they were able to determine whether or not a person was schizophrenic or exhibited traits of schizophrenia. MIT also had another project, called Gaydar, where they were able to tell if someone was gay, even if the user was still in the closet, based upon their postings.

All of these things mean that our deeper, innermost secrets will become knowable in the very near future.

How can we reduce the risk our data will be misused?

These IoT devices, despite all of the benefits they bring, will be the trillion-sensor source of all of this data. This means that, as consumers, we need to think about what those terms of service are going to be. We need to push back on them, and we may even need legislation spelling out what both the government and companies can do with our data without our permission.

Today’s Alexa example is just one of what will be thousands of similar cases in the future. We are wiring the world much more quickly than we are considering the public policy, legal, and ethical implications of our inventions.

As a society, we would do well to consider those important social needs alongside our technological achievements.

Marc Goodman is the author of Future Crimes, a New York Times and Wall Street Journal best seller that was recently named Amazon’s best business book of 2015. Connect with Marc on Twitter @FutureCrimes.

