Google Scrambles to Catch Up in the Wake of OpenAI’s ChatGPT

Google is one of the biggest companies on Earth. Google’s search engine is the front door to the internet. And according to recent reports, Google is scrambling.

Late last year, OpenAI, an artificial intelligence company at the forefront of the field, released ChatGPT. Alongside Elon Musk’s Twitter acquisition and fallout from FTX’s crypto implosion, breathless chatter about ChatGPT and generative AI has been ubiquitous.

The chatbot, which was built on an upgraded version of OpenAI’s GPT-3 model, is like a futuristic Q&A machine. Ask any question, and it responds in plain language. Sometimes it gets the facts straight. Sometimes not so much. Still, ChatGPT took the world by storm thanks to the fluidity of its prose, its simple interface, and a mainstream launch.

When a new technology hits public consciousness, people try to sort out its impact. Amid debates about how bots like ChatGPT will affect everything from academia to journalism, more than a few folks have suggested ChatGPT may end Google’s reign in search. Who wants to hunt down information fragmented across a list of web pages when you could get a coherent, seemingly authoritative answer in an instant?

In December, The New York Times reported Google was taking the prospect seriously, with management declaring a “code red” internally. This week, as Google announced layoffs, CEO Sundar Pichai told employees the company will sharpen its focus on AI. The NYT also reported Google’s founders, Larry Page and Sergey Brin, are now involved in efforts to streamline development of AI products. The worry is that Google has lost a step to the competition.

If true, it isn’t due to a lack of ability or vision. Google’s no slouch at AI.

The technology here—a flavor of deep learning model called a transformer—was developed at Google in 2017. The company already has its own versions of all the flashy generative AI models, from images (Imagen) to text (LaMDA). Indeed, in 2021, Google researchers published a paper pondering how large language models (like ChatGPT) might radically upend search in the future.

“What if we got rid of the notion of the index altogether and replaced it with a pre-trained model that efficiently and effectively encodes all of the information contained in the corpus?” Donald Metzler, a Google researcher, and coauthors wrote at the time. “What if the distinction between retrieval and ranking went away and instead there was a single response generation phase?” This should sound familiar.
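To make the contrast concrete, here is a rough toy sketch of the two architectures the researchers describe. All names and logic below are hypothetical illustrations, not code from Google or OpenAI; the "ranker" is a crude term-overlap stand-in, and the model is assumed to have encoded the corpus in its weights.

```python
from typing import Callable, List

def retrieve_then_rank(query: str, corpus: List[str]) -> List[str]:
    """Classic search pipeline: retrieve candidate documents, then rank them."""
    terms = set(query.lower().split())
    # Retrieval phase: keep documents sharing at least one query term.
    candidates = [doc for doc in corpus if terms & set(doc.lower().split())]
    # Ranking phase: order candidates by term overlap (a toy stand-in
    # for a real ranking model).
    return sorted(candidates,
                  key=lambda doc: len(terms & set(doc.lower().split())),
                  reverse=True)

def generate_answer(query: str, model: Callable[[str], str]) -> str:
    """'Model as index': a single response-generation phase. The pre-trained
    model is assumed to have encoded the corpus in its weights, so there is
    no separate retrieval or ranking step."""
    return model(f"Answer concisely: {query}")
```

The tradeoff the paper wrestles with is visible even in this toy: the first pipeline can point to the documents behind its answer, while the second produces a fluent response with no built-in citation trail.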

Whereas smaller organizations opened access to their algorithms more aggressively, however, Google largely kept its work under wraps. It offered only small, tightly controlled demos to limited groups of people, deeming the tech too risky and error-prone for wider release just yet. Damage to its brand and reputation was a chief concern.

Now, sweating it out under the bright lights of ChatGPT, the company is planning to release some 20 AI-powered products later this year, according to the NYT. These will encompass all the top generative AI applications, like image, text, and code generation—and Google will test a ChatGPT-like bot in search.

But is the technology ready to go from splashy demo tested by millions to a crucial tool trusted by billions? In their 2021 paper, the Google researchers suggested an ideal chatbot search assistant would be authoritative, transparent, unbiased, accessible, and contain diverse perspectives. Acing each of those categories is still a stretch for even the most advanced large language models.

Trust matters with search in particular. When it serves up a list of web pages today, Google can blame content creators for poor quality and vow to serve better results in the future. With an AI chatbot, it is the content creator.

As Fast Company’s Harry McCracken pointed out not long ago, if ChatGPT can’t get its facts straight, nothing else matters. “Whenever I chat with ChatGPT about any subject I know much about, such as the history of animation, I’m most struck by how deeply untrustworthy it is,” McCracken wrote. “If a rogue software engineer set out to poison our shared corpus of knowledge by generating convincing-sounding misinformation in bulk, the end result might look something like this.”

Google is clearly aware of the risk. And whatever implementation in search it unveils this year, it still aims to prioritize “getting the facts right, ensuring safety, and getting rid of misinformation.” How it will accomplish these goals is an open question. Just in terms of “ensuring safety,” for example, Google’s algorithms underperform OpenAI’s on metrics of toxicity, according to the NYT. But a Time investigation this week reported that OpenAI had to turn, at least in part, to human workers in Kenya, paid a pittance, to flag and scrub the most toxic data from ChatGPT.

Other questions, including about the copyright of works used to train generative algorithms, remain similarly unresolved. Two copyright lawsuits, one by Getty Images and one by a group of artists, were filed earlier this week.

Still, the competitive landscape, it seems, is compelling Google, Microsoft—which has invested big in OpenAI and is already incorporating its algorithms into products—and others to go full steam ahead in an effort to minimize the risk of being left behind. We’ll have to wait and see what an implementation in search looks like. Maybe it’ll be in beta with a disclaimer for a while, or maybe, as the year progresses, the tech will again surprise us with breakthroughs.

In either case, while generative AI will play a role in search, how much of a role and how soon is less settled. As to whether Google loses its perch? OpenAI’s CEO, Sam Altman, pushed back against the hype this week.

“I think whenever someone talks about a technology being the end of some other giant company, it’s usually wrong,” Altman said in response to a question about the likelihood ChatGPT dethrones Google. “I think people forget they get to make a countermove here, and they’re like pretty smart, pretty competent. I do think there’s a change for search that will probably come at some point—but not as dramatically as people think in the short term.”

Image Credit: D21_Gallery / Unsplash

Jason Dorrier
Jason is editorial director of Singularity Hub. He researched and wrote about finance and economics before moving on to science and technology. He's curious about pretty much everything, but especially loves learning about and sharing big ideas and advances in artificial intelligence, computing, robotics, biotech, neuroscience, and space.