
Will Facial Recognition and Digital Surveillance End Anonymous Protest?

Jason Dorrier
Jun 07, 2020


It’s been almost two weeks since people first took to the streets in Minneapolis to protest police brutality following the deaths of George Floyd and Breonna Taylor at the hands of the police. Since then, demonstrations have gained momentum and spread to cities across the US and around the world.

Protest is a critical component of healthy democracy, a megaphone to grab the attention of those in power and compel change. In the US, it’s a constitutional right. But increasingly, law enforcement agencies are requesting protest footage and images, and the latest technologies are bringing with them the power to cast an ever wider surveillance net.

When San Francisco became the first US city to ban facial recognition in May 2019, perhaps legislators had something like the recent weeks in mind. The Bay Area is no stranger to civil disobedience and demonstration. But in bygone eras, anonymous protest was guaranteed by numbers. Just a face in the crowd actually meant something. Now, with smartphones, high-definition cameras, and powerful algorithms, anonymous protest may soon be a thing of the past.

While other cities, including Oakland and Berkeley in California and Somerville and Brookline in Massachusetts, have also banned facial recognition, many cities around the country still allow it, and their law enforcement agencies have actively used it in the recent past.

Facial recognition algorithms identify people by searching for and matching them with labeled images in vast databases. These databases can be limited to mugshots, or they can include far bigger groups, like DMV driver’s license photos. In a particularly contentious example, startup Clearview AI compiled a database of billions of images scraped from thousands of sites online without consent—including from the likes of Facebook and YouTube—and sold access to the database and facial recognition software to hundreds of law enforcement agencies.
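
To make the matching step concrete, here’s a minimal sketch of how such a system typically works: a model converts each face image into a numerical “embedding,” and a match is declared when a new embedding is close enough to a labeled one in the database. The embedding function, similarity threshold, and toy database below are hypothetical stand-ins for illustration, not any vendor’s actual pipeline.

```python
import numpy as np

# Stand-in for a face-embedding model. A real system uses a trained neural
# network to map a cropped face image to a fixed-length vector; here we just
# hash the image bytes into a deterministic pseudo-random unit vector.
def embed_face(image_bytes: bytes, dim: int = 128) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(image_bytes)) % (2**32))
    vec = rng.standard_normal(dim)
    return vec / np.linalg.norm(vec)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, database: dict[str, np.ndarray],
             threshold: float = 0.6):
    """Return the label of the most similar database entry, or None if
    nothing clears the (illustrative) similarity threshold."""
    best_label, best_score = None, -1.0
    for label, reference in database.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else None

# Toy "database" of labeled embeddings -- in practice, mugshots, DMV photos,
# or billions of scraped social media images.
database = {
    "person_a": embed_face(b"photo of person a"),
    "person_b": embed_face(b"photo of person b"),
}
print(identify(embed_face(b"photo of person a"), database))  # -> person_a
```

The matching code itself is simple; what changes the stakes is the size and provenance of the database it searches, which is why a few thousand mugshots and billions of scraped social media photos raise very different questions.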

A BuzzFeed News article last week reported that police departments in and around Minneapolis had used Clearview as recently as February. Another article in Medium’s science and tech publication OneZero noted several other recent uses of facial recognition, as well as requests from local police departments and the FBI for footage and images from the protests.

The capability is there and the systems have been used, but how law enforcement is employing facial recognition day-to-day and during the protests isn’t always clear.

Proponents argue that, used responsibly, the technology can be a valuable tool for locating people who have committed crimes. But its limitations have also been well documented, not only in terms of overall accuracy, but also built-in bias, with some algorithms misidentifying people of color and women at much higher rates.

Absent clear rules and regulations, there’s potential for misuse, and the wider and deeper digital surveillance goes, the more it may provoke fear and chill free expression.

“At a high level, these surveillance technologies should not be used on protesters,” Neema Singh Guliani, a senior legislative counsel for the ACLU, told BuzzFeed News. “The idea that you have groups of people that are raising legitimate concerns and now that could be subject to face recognition or surveillance, simply because they choose to protest, amplifies the overall concerns with law enforcement having this technology to begin with.”

In the US, no federal regulation governs facial recognition, leaving it to a patchwork of state and city laws. In a Wired op-ed last December, Susan Crawford argued this approach may have some benefits. The federal government may not be able to act anytime soon. In the meantime, local debates and experiments at the city and state level can inform and pressure wider regulation at the top.

But as those experiments grow in number, focusing just on facial recognition may miss the forest for the trees. There are other ways of surveilling groups from afar.

Smartphones broadcast an array of information that can be intercepted and recorded. And while we humans recognize people by their faces or voices, the algorithms enabling this kind of surveillance have no such limitations. Often they’re able to find patterns we can’t see and don’t necessarily even understand. Researchers have shown algorithms can identify people by their gait or their heartbeat (measured by laser at 200 yards). There may not be a database of gaits and heartbeats yet, but the technology is here.

The wider issue isn’t which piece of information is being used, but that it can be used so pervasively. Limiting how, when, why, and who uses it can help protect vital freedoms.

The question, as ever, is how do we bend technology to better serve us?

Crawford suggests requiring warrants for investigations and limiting real-time use. We can also restrict the storage of data, require deep auditing and public reporting of the technology’s use, punish misuse, and ban use in areas that are prone to discrimination.

That’s just for starters.

If we want a society that’s supple enough to respond to the voices of its people, we need to keep a close eye on how these technologies are deployed in the future.

Image credit: Bernard Hermant / Unsplash

Jason is editorial director of Singularity Hub. He researched and wrote about finance and economics before moving on to science and technology. He's curious about pretty much everything, but especially loves learning about and sharing big ideas and advances in artificial intelligence, computing, robotics, biotech, neuroscience, and space.
