Google Just Released Two Open AI Models That Can Run on Laptops

Last year, Google merged its AI units into Google DeepMind and said it planned to speed up product development in an effort to catch up to the likes of Microsoft and OpenAI. The stream of releases in the last few weeks follows through on that promise.

Two weeks ago, Google announced the release of its most powerful AI to date, Gemini Ultra, and reorganized its AI offerings, including its Bard chatbot, under the Gemini brand. A week later, the company introduced Gemini 1.5 Pro, an updated Pro model that largely matches Gemini Ultra’s performance and also includes an enormous context window—the amount of data you can prompt it with—for text, images, and audio.

Today, the company announced two new models under the name Gemma. The models are much smaller than Gemini Ultra, weighing in at 2 billion and 7 billion parameters respectively. Google said the models are strictly text-based (as opposed to multimodal models trained on a variety of data, including text, images, and audio), that they outperform similarly sized models, and that they can run on a laptop, desktop, or in the cloud. Before training, Google stripped its datasets of sensitive data like personal information. The company also fine-tuned and stress-tested the trained models before release to minimize unwanted behavior.
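For a sense of what running a model this size locally looks like, here’s a minimal sketch using the Hugging Face Transformers library. The model ID google/gemma-2b is an assumption about where the weights are hosted, and downloading them may require accepting Google’s terms of use.

```python
# A rough sketch of running a small open-weights model locally.
# Assumes the transformers and torch packages are installed and that the
# weights are hosted on Hugging Face under the assumed ID "google/gemma-2b";
# access may require accepting the model's terms of use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # assumed hosting location for the 2B model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Open models let developers"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

As a rough rule of thumb, a 2-billion-parameter model takes on the order of 4 GB of memory in 16-bit precision, and less when quantized, which is what puts it within reach of an ordinary laptop.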

The models were built and trained with the same technology used in Gemini, Google said, but in contrast, they’re being released under an open license.

That doesn’t mean they’re open source. Rather, the company is making the model weights available so developers can customize and fine-tune them. It’s also releasing developer tools to help keep applications safe and make them compatible with major AI frameworks and platforms. Google says the models can be employed for responsible commercial use and distribution—as defined in the terms of use—by organizations of any size.
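As an illustration of what customizing open weights can look like in practice, here’s a hedged sketch of parameter-efficient fine-tuning with the Hugging Face peft library. This is one common community approach, not Google’s own tooling, and the model ID and projection-layer names are assumptions to verify against the model card.

```python
# A sketch of parameter-efficient fine-tuning (LoRA) on open weights.
# Instead of updating all base parameters, small low-rank adapter matrices
# are trained on top of selected layers. Model ID and target module names
# are assumptions, not confirmed details of the Gemma release.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")  # assumed ID

config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # scaling factor for adapter updates
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of base weights
# ...train with your usual Trainer or training loop on domain data...
```

Because only the small adapters are trained, fine-tuning like this can run on a single consumer GPU, which is part of the appeal of open weights for enterprises and budget-constrained researchers.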

If Gemini is aimed at OpenAI and Microsoft, Gemma likely has Meta in mind. Meta is championing a more open model for AI releases, most notably with its Llama 2 large language model. Though Llama 2 is sometimes mistaken for an open-source model, Meta has not released the dataset or code used to train it. Other more open models, like the Allen Institute for AI’s (AI2) recent OLMo models, do include training data and code. Google’s Gemma release is more akin to Llama 2 than OLMo.

“[Open models have] become pretty pervasive now in the industry,” Google’s Jeanine Banks said in a press briefing. “And it often refers to open weights models, where there is wide access for developers and researchers to customize and fine-tune models but, at the same time, the terms of use—things like redistribution, as well as ownership of those variants that are developed—vary based on the model’s own specific terms of use. And so we see some difference between what we would traditionally refer to as open source and we decided that it made the most sense to refer to our Gemma models as open models.”

Still, Llama 2 has been influential in the developer community, and open models from the likes of French startup Mistral and others are pushing performance toward that of state-of-the-art closed models, like OpenAI’s GPT-4. Open models may make more sense in enterprise contexts, where developers can better customize them. They’re also invaluable for AI researchers working on a budget. Google wants to support such research with Google Cloud credits. Researchers can apply for up to $500,000 in credits toward larger projects.

Just how open AI should be is still a matter of debate in the industry.

Proponents of a more open ecosystem believe the benefits outweigh the risks. An open community, they say, can not only innovate at scale, but also better understand, reveal, and solve problems as they emerge. OpenAI and others have argued for a more closed approach, contending the more powerful the model, the more dangerous it could be out in the wild. A middle road might allow an open AI ecosystem but more tightly regulate it.

What’s clear is that both closed and open AI are moving at a quick pace. We can expect more innovation from big companies and open communities as the year progresses.

Image Credit: Google

Jason Dorrier
Jason is editorial director of Singularity Hub. He researched and wrote about finance and economics before moving on to science and technology. He's curious about pretty much everything, but especially loves learning about and sharing big ideas and advances in artificial intelligence, computing, robotics, biotech, neuroscience, and space.