Deepfakes: Faces Created by AI Now Look More Real Than Genuine Photos
Even if you think you are good at analyzing faces, research shows many people cannot reliably distinguish between photos of real faces and images that have been computer-generated. This is particularly problematic now that computer systems can create realistic-looking photos of people who don’t exist.
A few years ago, for example, a fake LinkedIn profile with a computer-generated profile picture made the news after it successfully connected with US officials and other influential individuals on the networking platform. Counter-intelligence experts even say that spies routinely create phantom profiles with such pictures to home in on foreign targets over social media.
These deepfakes are becoming widespread in everyday culture, which means people should be more aware of how they're being used in marketing, advertising, and social media. The images are also being used for malicious purposes, such as political propaganda, espionage, and information warfare.
Making them involves something called a deep neural network, a computer system that mimics the way the brain learns. This is “trained” by exposing it to increasingly large data sets of real faces.
In fact, two deep neural networks are set against each other, competing to produce the most realistic images. As a result, the end products are dubbed GAN images, where GAN stands for "generative adversarial networks." The process generates novel images that are statistically indistinguishable from the training images.
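To make the adversarial setup concrete, here is a minimal sketch in Python using PyTorch. The network sizes, the training loop, and the random tensors standing in for real face photos are illustrative assumptions; production face generators such as NVIDIA's StyleGAN are vastly larger, but the core dynamic is the same: a generator tries to fool a discriminator, and the discriminator tries not to be fooled.

```python
# Minimal sketch of GAN adversarial training (illustrative only; real face
# generators are far larger and more sophisticated).
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # hypothetical sizes for illustration

# Generator: maps random noise to a fake "image" vector.
G = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                  nn.Linear(256, IMG_DIM), nn.Tanh())

# Discriminator: scores how likely an image is to be real.
D = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    # Stand-in for a batch of real training faces (random here for brevity).
    real = torch.randn(32, IMG_DIM)
    noise = torch.randn(32, LATENT_DIM)
    fake = G(noise)

    # Discriminator learns to tell real (label 1) from fake (label 0).
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # Generator learns to make the discriminator output "real" for fakes.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

Each network's improvement forces the other to improve in turn, which is why the resulting images can end up statistically indistinguishable from the training set.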
In a study published in iScience, my colleagues and I showed that a failure to distinguish these artificial faces from the real thing has implications for our online behavior. Our research suggests the fake images may erode our trust in others and profoundly change the way we communicate online.
We found that people perceived GAN faces to be even more real-looking than genuine photos of actual people's faces. While it's not yet clear why, this finding highlights recent advances in the technology used to generate artificial images.
We also found an interesting link to attractiveness: faces that were rated as less attractive were also rated as more real. Less attractive faces might be considered more typical, and the typical face may be used as a reference against which all faces are evaluated. These GAN faces would therefore look more real because they are more similar to the mental templates people have built from everyday life.
But seeing these artificial faces as authentic may also have consequences for the general levels of trust we extend to a circle of unfamiliar people—a concept known as “social trust.”
We often read too much into the faces we see, and the first impressions we form guide our social interactions. In a second experiment that formed part of our latest study, we saw that people were more likely to trust information conveyed by faces they had previously judged to be real, even if they were artificially generated.
It is not surprising that people put more trust in faces they believe to be real. But we found that trust was eroded once people were informed about the potential presence of artificial faces in online interactions. They then showed lower levels of trust, overall—independently of whether the faces were real or not.
This outcome could be regarded as useful in some ways, because it made people more suspicious in an environment where fake users may operate. From another perspective, however, it may gradually erode the very nature of how we communicate.
In general, we tend to operate on a default assumption that other people are basically truthful and trustworthy. The growth in fake profiles and other artificial online content raises the question of how much their presence and our knowledge about them can alter this “truth default” state, eventually eroding social trust.
Changing Our Defaults
The transition to a world where what’s real is indistinguishable from what’s not could also shift the cultural landscape from being primarily truthful to being primarily artificial and deceptive.
If we are regularly questioning the truthfulness of what we experience online, it might require us to re-deploy our mental effort from the processing of the messages themselves to the processing of the messenger’s identity. In other words, the widespread use of highly realistic, yet artificial, online content could require us to think differently—in ways we hadn’t expected to.
In psychology, we use a term called “reality monitoring” for how we correctly identify whether something is coming from the external world or from within our brains. The advance of technologies that can produce fake, yet highly realistic, faces, images, and video calls means reality monitoring must be based on information other than our own judgments. It also calls for a broader discussion of whether humankind can still afford to default to truth.
It’s crucial for people to be more critical when evaluating digital faces. This can include using reverse image searches to check whether photos are genuine, being wary of social media profiles with little personal information or a large number of followers, and being aware of the potential for deepfake technology to be used for nefarious purposes.
The next frontier for this area should be improved algorithms for detecting fake digital faces. These could then be embedded in social media platforms to help us distinguish the real from the fake when it comes to new connections’ faces.
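To illustrate one plausible shape such a detector could take, here is a hypothetical sketch in Python using PyTorch: a small image classifier that scores faces as real or fake. The architecture, input size, and threshold are assumptions for illustration, not a deployed platform system, and a useful detector would first need training on large labeled sets of genuine and GAN-generated faces.

```python
# Illustrative sketch of a real-vs-fake face classifier (an assumption about
# what such a detector might look like, not an actual platform system).
import torch
import torch.nn as nn

# A small convolutional network that scores a face image as real (1) or fake (0).
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1), nn.Sigmoid(),  # assumes 64x64 RGB inputs
)

def is_probably_fake(image: torch.Tensor, threshold: float = 0.5) -> bool:
    """Flag a 3x64x64 image tensor whose 'fake' score exceeds the threshold."""
    with torch.no_grad():
        fake_score = 1.0 - detector(image.unsqueeze(0)).item()
    return fake_score > threshold

# Usage with a random placeholder image; an untrained network's answer is
# meaningless, so a real system would be trained before deployment.
print(is_probably_fake(torch.rand(3, 64, 64)))
```

In practice this becomes an arms race: as detectors improve, generators can be trained against them, which is why detection alone is unlikely to be a permanent fix.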
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: The faces in this article's banner image may look realistic, but they were generated by a computer. NVIDIA via thispersondoesnotexist.com
Manos Tsakiris is professor of psychology at the Department of Psychology, Royal Holloway, University of London, where he leads the Lab of Action and Body (LAB). His research is highly interdisciplinary and uses a wide range of methods to investigate the neurocognitive mechanisms that shape the experience of embodiment and social relatedness. He was awarded the 22nd EPS Prize Lecture from the Experimental Psychology Society, UK, the 2014 Mind and Brain Young Investigator Prize from the Center for Cognitive Science of Turin, Italy, and the 2016 Inaugural NOMIS Foundation Distinguished Scientist Award. Between 2016 and 2020 he led the interdisciplinary Body and Image in Arts & Science (BIAS) project at the Warburg Institute, where he investigated the performative and political power of visual culture. Since 2017, he has been leading the INtheSELF ERC Consolidator project at Royal Holloway, which investigates the role of interoception in self- and social awareness. Since 2021, he has been the director of the interdisciplinary Centre for the Politics of Feelings.