
Even as the Fusion Era Comes Into View—We’re Still in the Steam Age


Steam locomotives clattering along railway tracks. Paddle steamers churning down the Murray. Dreadnought battleships powered by steam engines.

Many of us think the age of steam has ended. But while the steam engine has been superseded by internal combustion engines and now electric motors, the modern world still relies on steam. Almost all thermal power plants, from coal to nuclear, must have steam to function. (Gas turbine plants, which are driven directly by hot combustion gases, are the usual exception.)

But why? It’s because of something we discovered millennia ago. In the first century CE, the ancient Greeks invented the aeolipile—a steam turbine. Heat turned water into steam, and steam has a very useful property: It’s an easy-to-make gas that can push.

This simple fact means that even as the dream of fusion power creeps closer, we will still be in the steam age. The first commercial fusion plant will rely on cutting-edge technology able to contain plasma far hotter than the sun’s core—but it will still be wedded to a humble steam turbine converting heat to movement to electricity.

Even high-tech fusion plants will use steam to produce electricity. Image Credit: EUROfusion/Wikimedia Commons, CC BY

Why Are We Still Reliant on Steam?

Boiling water takes a significant amount of energy, the highest by far among common liquids. Water takes about 2.5 times more energy to evaporate than ethanol does and 60 percent more than liquid ammonia.
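
As a rough check on those figures, here is a quick comparison using commonly cited latent heats of vaporization (the values are approximate and included for illustration; they are not from the article):

```python
# Approximate latent heats of vaporization (kJ/kg) at each liquid's boiling point.
latent_heat = {"water": 2257, "ethanol": 841, "ammonia": 1371}

print(latent_heat["water"] / latent_heat["ethanol"])   # ~2.7x ethanol
print(latent_heat["water"] / latent_heat["ammonia"])   # ~1.6x, i.e. roughly 60% more than ammonia
```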

Why do we use steam rather than other gases? Water is cheap, nontoxic, and easy to transform from liquid to energetic gas before condensing back to liquid for use again and again.

Steam has lasted this long because we have an abundance of water, covering 71 percent of Earth’s surface, and water is a useful way to convert thermal energy (heat) to mechanical energy (movement) to electrical energy (electricity). We seek electricity because it can be easily transmitted and can be used to do work for us in many areas.

When water is turned to steam inside a closed container, it expands hugely and increases the pressure. High-pressure steam can store huge amounts of heat, as can any gas. If given an outlet, the steam will surge through it with high flow rates. Put a turbine in its exit path and the force of the escaping steam will spin the turbine’s blades. Electromagnets convert this mechanical movement to electricity. The steam condenses back to water and the process starts again.
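
To see why temperature matters so much in this chain, recall that no heat engine can beat the Carnot limit set by its hot and cold reservoir temperatures. A back-of-envelope sketch with illustrative textbook temperatures (not figures from the article):

```python
# Carnot efficiency: the thermodynamic ceiling for any heat engine,
# eta_max = 1 - T_cold / T_hot, with temperatures in kelvin.
t_hot = 600 + 273.15    # ~600 C steam in a modern plant (illustrative)
t_cold = 30 + 273.15    # ~30 C condenser cooling water (illustrative)

eta_max = 1 - t_cold / t_hot
print(f"Carnot limit: {eta_max:.0%}")   # ~65%; real steam plants reach roughly 46%
```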

Steam engines used coal to heat water to create steam to drive the engine. Nuclear fission splits atoms to make heat to boil water. Nuclear fusion will force heavy isotopes of hydrogen (deuterium and tritium) to fuse into helium and create even more heat—to boil water to make steam to drive turbines to make electricity.

If you looked only at the end process in most thermal power plants—coal, diesel, nuclear fission, or even nuclear fusion—you would see the old technology of steam taken as far as it can be taken.

The steam turbines driving the large electrical alternators that produce 60 percent of the world’s electricity are things of beauty. Hundreds of years of metallurgical technology, design, and intricate manufacturing have all but perfected the steam turbine.

Will we keep using steam? New technologies produce electricity without using steam at all. Solar panels rely on incoming photons hitting electrons in silicon and creating a charge, while wind turbines operate like steam turbines except with wind blowing the turbine, not steam. Some forms of energy storage, such as pumped hydro, use turbines but for liquid water, not steam, while batteries use no steam at all.

These technologies are rapidly becoming important sources of energy and storage. But steam isn’t going away. If we use thermal power plants, we’ll likely still be using steam.

Why Can’t We Just Convert Heat to Electricity?

You might wonder why we need so many steps. Why can’t we convert heat directly to electricity?

It is possible. Thermoelectric devices are already in use in satellites and space probes.

Built from special alloys such as lead telluride, these devices rely on a temperature difference between their hot and cold junctions. The greater the temperature difference, the greater the voltage they can generate.

The reason these devices aren’t everywhere is that they only produce direct current (DC) at low voltages and are just 16 to 22 percent efficient at converting heat to electricity. By contrast, state-of-the-art thermal power plants are up to 46 percent efficient.
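
The gap has a textbook explanation: a thermoelectric generator’s best-case efficiency is the Carnot limit scaled down by a material term governed by the dimensionless figure of merit ZT. A minimal sketch, using illustrative temperatures and ZT values rather than data from the article:

```python
def thermoelectric_efficiency(t_hot, t_cold, zt):
    """Standard maximum-efficiency formula for a thermoelectric generator:
    the Carnot factor times a material term set by the figure of merit ZT."""
    carnot = (t_hot - t_cold) / t_hot
    m = (1 + zt) ** 0.5
    return carnot * (m - 1) / (m + t_cold / t_hot)

# Illustrative operating point: 800 K hot side, 300 K cold side.
for zt in (1.0, 2.0):
    print(f"ZT={zt}: {thermoelectric_efficiency(800, 300, zt):.0%}")
# ZT=1 -> ~14%, ZT=2 -> ~22%, bracketing the 16-22 percent range quoted above
```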

If we wanted to run a society on these heat-conversion engines, we’d need large arrays of these devices to produce high enough DC current and then use inverters and transformers to convert it to the alternating current we’re used to. So while you might avoid steam, you end up having to add new conversions to make the electricity useful.

There are other ways to turn heat into electricity. High-temperature solid-oxide fuel cells have been under development for decades. These run hot—between 500 and 1,000 degrees Celsius—and can burn hydrogen or methanol (without an actual flame) to produce DC electricity.

These fuel cells are up to 60 percent efficient, with the potential to go higher. While promising, they are not yet ready for prime time: they rely on expensive catalysts and have short lifespans due to the intense heat. But progress is being made.

Until technologies like these mature, we’re stuck with steam as a way to convert heat to electricity. That’s not so bad—steam works.

When you see a steam locomotive rattle past, you might think it’s a quaint technology of the past. But our civilization still relies very heavily on steam. If fusion power arrives, steam will help power the future too. The steam age never really ended.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Siemens Pressebild via Wikimedia Commons

An AI-Designed Drug Is Moving Toward Approval at an Impressive Clip


For the first time, an AI-designed drug is in the second phase of clinical trials. Recently, the team behind the drug published a paper outlining how they developed it so fast.

Made by Insilico Medicine, a biotechnology company based in New York and Hong Kong, the drug candidate targets idiopathic pulmonary fibrosis, a deadly disease that causes the lungs to harden and scar over time. The damage is irreversible, making it increasingly difficult to breathe. The disease doesn’t have known triggers. Scientists have struggled to find proteins or molecules that may be behind the disease as potential targets for treatment.

For medicinal chemists, developing a cure for the disease is a nightmare. For Dr. Alex Zhavoronkov, founder and CEO of Insilico Medicine, the challenge represents a potential proof of concept that could transform the drug discovery process using AI—and provide hope to millions of people struggling with the deadly disease.

The drug, dubbed ISM018_055, had AI infused throughout its entire development process. With Pharma.AI, the company’s drug design platform, the team used multiple AI methods to find a potential target for the disease and then generated promising drug candidates.

ISM018_055 stood out for its ability to reduce scarring in cells and in animal models. Last year, the drug completed a Phase I clinical trial in 126 healthy volunteers in New Zealand and China to test its safety and passed with flying colors. The team has now described their entire platform and released their data in Nature Biotechnology.

The timeline for drug discovery, from finding a target to completion of Phase I clinical trials, is around seven years. With AI, Insilico completed these steps in roughly half that time.

“Early on I saw the potential to use AI to speed and improve the drug discovery process from end to end,” Zhavoronkov told Singularity Hub. The concept was initially met with skepticism from the drug discovery community. With ISM018_055, the team is putting their AI platform “to the ultimate test—discover a novel target, design a new molecule from scratch to inhibit that target, test it, and bring it all the way into clinical trials with patients.”

The AI-designed drug has mountains to climb before it reaches drugstores. For now, it’s only shown to be safe in healthy volunteers. The company launched Phase II clinical trials last summer, which will further investigate the drug’s safety and begin to test its efficacy in people with the disease.

“Lots of companies are working on AI to improve different steps in drug discovery,” said Dr. Michael Levitt, a Nobel laureate in chemistry, who was not involved in the work. “Insilico…not only identified a novel target, but also accelerated the whole early drug discovery process, and they’ve quite successfully validated their AI methods.”

The work is “so exciting to me,” he said.

The Long Game

The first stages of drug discovery are a bit like high-stakes gambling.

Scientists pick a target in the body that likely causes a disease and then painstakingly design chemicals to interfere with the target. The candidates are then scrutinized for a myriad of preferable properties. For example, can it be absorbed as a pill or with an inhaler rather than an injection? Can the drug reach the target at high enough levels to block scarring? Can it be easily broken down and eliminated by the kidneys? Ultimately, is it safe?
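
In software terms, this vetting stage is a filter applied to every candidate. The sketch below is purely illustrative (the property names and thresholds are hypothetical, not Insilico’s actual criteria):

```python
# Hypothetical screening filter: keep only candidates that pass every check.
candidates = [
    {"name": "cmpd_A", "oral_bioavailability": 0.62, "target_engagement": 0.91,
     "renal_clearance_ok": True, "tox_flags": 0},
    {"name": "cmpd_B", "oral_bioavailability": 0.08, "target_engagement": 0.95,
     "renal_clearance_ok": True, "tox_flags": 2},
]

def passes_screen(c):
    return (c["oral_bioavailability"] > 0.3    # absorbable as a pill?
            and c["target_engagement"] > 0.8   # reaches the target at useful levels?
            and c["renal_clearance_ok"]        # broken down and eliminated cleanly?
            and c["tox_flags"] == 0)           # no safety red flags so far?

print([c["name"] for c in candidates if passes_screen(c)])  # ['cmpd_A']
```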

The entire validation process, from discovery to approval, can take more than a decade and billions of dollars. Most of the time, the gamble doesn’t pay off. Roughly 90 percent of initially promising drug candidates fail in clinical trials. Even more candidates don’t make it that far.

The first stage—finding the target for a potential drug—is essential. But the process is especially hard for diseases without a known cause or for complex health problems such as cancer and age-related disorders. With AI, Zhavoronkov wondered if it was possible to speed up the journey. In the past decade, the team built several “AI scientists” to help their human collaborators.

The first, PandaOmics, uses multiple algorithms to zero in on potential targets in large datasets—for example, genetic or protein maps and data from clinical trials. For idiopathic pulmonary fibrosis, the team trained the tool on data from tissue samples of patients with the disease and added text from a universe of online scientific publications and grants in the field.

In other words, PandaOmics behaved like a scientist. It “read” and synthesized existing knowledge as background and incorporated clinical trial data to generate a list of potential targets for the disease with a focus on novelty.

A protein called TNIK emerged as the best candidate. Although not previously linked to idiopathic pulmonary fibrosis, TNIK had been associated with multiple “hallmarks of aging”—the myriad genetic and molecular processes that break down as we get older.

With a potential target in hand, another AI engine, called Chemistry42, used generative algorithms to find chemicals that could latch onto TNIK. This type of AI generates text responses in popular programs like ChatGPT, but it can also dream up new medicines.

“Generative AI as a technology has been around since 2020, but now we are in a pivotal moment of both broad commercial awareness and breakthrough achievements,” said Zhavoronkov.

With expert input from human medicinal chemists, the team eventually found their drug candidate: ISM018_055. The drug was safe and effective at reducing scarring in the lungs in animal models. Surprisingly, it also protected the skin and kidneys from fibrosis, which often occurs during aging.

In late 2021, the team launched a clinical trial in Australia testing the drug’s safety. Others soon followed in New Zealand and China. The results in healthy volunteers were promising. The AI-designed drug was readily absorbed by the lungs when taken as a pill and then broken down and eliminated from the body without notable side effects.

It’s a proof of concept for AI-based drug discovery. “We are able to demonstrate beyond a doubt that this method of finding and developing new treatments works,” said Zhavoronkov.

First in Class

The AI-designed drug moved on to the next stage of clinical trials, Phase II, in both the US and China last summer. The drug is being tested in people with the disease using the gold standard of clinical trials: randomized, double-blind, and with a placebo.

“Many people say they are doing AI for drug discovery,” said Dr. Alán Aspuru-Guzik at the University of Toronto, who was not involved in the new study. “This, to my knowledge, is the first AI-generated drug in stage II clinical trials. A true milestone for the community and for Insilico.”

The drug’s success still isn’t a given. Drug candidates often fail during clinical trials. But if successful, it could potentially have a wider reach. Fibrosis readily occurs in multiple organs as we age, eventually grinding normal organ functions to a halt.

“We wanted to identify a target that was highly implicated in both disease and aging, and fibrosis…is a major hallmark of aging,” said Zhavoronkov. The AI platform found one of the most promising “dual-purpose targets related to anti-fibrosis and aging,” which may not only save lives in people with idiopathic pulmonary fibrosis but also potentially slow aging for us all.

To Dr. Christoph Kuppe at RWTH Aachen, who was not involved in the work, the study is a “landmark” that could reshape the trajectory of drug discovery.

With ISM018_055 currently undergoing Phase II trials, Zhavoronkov is envisioning a future where AI and scientists collaborate to speed up new treatments. “We hope this [work] will drive more confidence, and more partnerships, and serve to convince any remaining skeptics of the value of AI-driven drug discovery,” he said.

Image Credit: Insilico

This Week’s Awesome Tech Stories From Around the Web (Through March 16)

ARTIFICIAL INTELLIGENCE

Cognition Emerges From Stealth to Launch AI Software Engineer Devin
Shubham Sharma | VentureBeat
“The human user simply types a natural language prompt into Devin’s chatbot style interface, and the AI software engineer takes it from there, developing a detailed, step-by-step plan to tackle the problem. It then begins the project using its developer tools, just like how a human would use them, writing its own code, fixing issues, testing and reporting on its progress in real-time, allowing the user to keep an eye on everything as it works.”

ROBOTICS

Covariant Announces a Universal AI Platform for Robots
Evan Ackerman | IEEE Spectrum
“[On Monday, Covariant announced] RFM-1, which the company describes as a robotics foundation model that gives robots the ‘human-like ability to reason.’ That’s from the press release, and while I wouldn’t necessarily read too much into ‘human-like’ or ‘reason,’ what Covariant has going on here is pretty cool. …’Our existing system is already good enough to do very fast, very variable pick and place,’ says Covariant co-founder Pieter Abbeel. ‘But we’re now taking it quite a bit further. Any task, any embodiment—that’s the long-term vision. Robotics foundation models powering billions of robots across the world.'”

COMPUTING

Cerebras Unveils Its Next Waferscale AI Chip
Samuel K. Moore | IEEE Spectrum
“Cerebras says its next generation of waferscale AI chips can do double the performance of the previous generation while consuming the same amount of power. The Wafer Scale Engine 3 (WSE-3) contains 4 trillion transistors, a more than 50 percent increase over the previous generation thanks to the use of newer chipmaking technology. The company says it will use the WSE-3 in a new generation of AI computers, which are now being installed in a datacenter in Dallas to form a supercomputer capable of 8 exaflops (8 billion billion floating point operations per second).”

SPACE

SpaceX Celebrates Major Progress on the Third Flight of Starship
Stephen Clark | Ars Technica
“SpaceX’s new-generation Starship rocket, the most powerful and largest launcher ever built, flew halfway around the world following liftoff from South Texas on Thursday, accomplishing a key demonstration of its ability to carry heavyweight payloads into low-Earth orbit. The successful launch builds on two Starship test flights last year that achieved some, but not all, of their objectives and appears to put the privately funded rocket program on course to begin launching satellites, allowing SpaceX to ramp up the already-blistering pace of Starlink deployments.”

AUTOMATION

This Self-Driving Startup Is Using Generative AI to Predict Traffic
James O’Donnell | MIT Technology Review
“The new system, called Copilot4D, was trained on troves of data from lidar sensors, which use light to sense how far away objects are. If you prompt the model with a situation, like a driver recklessly merging onto a highway at high speed, it predicts how the surrounding vehicles will move, then generates a lidar representation of 5 to 10 seconds into the future (showing a pileup, perhaps).”

TRANSPORTATION

Electric Cars Are Still Not Good Enough
Andrew Moseman | The Atlantic
“The next phase, when electric cars leap from early adoption to mass adoption, depends on the people [David] Rapson calls ‘the pragmatists’: Americans who will buy whichever car they deem best and who are waiting for their worries about price, range, and charging to be allayed before they go electric. The current slate of EVs isn’t winning them over.”

SPACE

Mining Helium-3 on the Moon Has Been Talked About Forever—Now a Company Will Try
Eric Berger | Ars Technica
“Two of Blue Origin’s earliest employees, former President Rob Meyerson and Chief Architect Gary Lai, have started a company that seeks to extract helium-3 from the lunar surface, return it to Earth, and sell it for applications here. …The present lunar rush is rather like a California gold rush without the gold. By harvesting helium-3, which is rare and limited in supply on Earth, Interlune could help change that calculus by deriving value from resources on the moon. But many questions about the approach remain.”

ARTIFICIAL INTELLIGENCE

What Happens When ChatGPT Tries to Solve 50,000 Trolley Problems?
Fintan Burke | Ars Technica
“Autonomous driving startups are now experimenting with AI chatbot assistants, including one self-driving system that will use one to explain its driving decisions. Beyond announcing red lights and turn signals, the large language models (LLMs) powering these chatbots may ultimately need to make moral decisions, like prioritizing passengers’ or pedestrians’ safety. But is the tech ready? Kazuhiro Takemoto, a researcher at the Kyushu Institute of Technology in Japan, wanted to check if chatbots could make the same moral decisions when driving as humans.”

FUTURE OF FOOD

States Are Lining Up to Outlaw Lab-Grown Meat
Matt Reynolds | Wired
“As well as the Florida bill, there is also proposed legislation to ban cultivated meat in Alabama, Arizona, Kentucky, and Tennessee. If all of those bills pass—an admittedly unlikely prospect—then some 46 million Americans will be cut off from accessing a form of meat that many hope will be significantly kinder to the planet and animals.”

COMPUTING

Physicists Finally Find a Problem Only Quantum Computers Can Do
Lakshmi Chandrasekaran | Quanta
“Quantum computers are poised to become computational superpowers, but researchers have long sought a viable problem that confers a quantum advantage—something only a quantum computer can solve. Only then, they argue, will the technology finally be seen as essential. They’ve been looking for decades. …Now, a team of physicists including [John] Preskill may have found the best candidate yet for quantum advantage.”

Image Credit: SpaceX

This Gene Increases the Risk of Alzheimer’s. Scientists Finally Know Why


At the turn of the 20th century, Dr. Alois Alzheimer noticed peculiar changes in a freshly removed brain. The brain had belonged to a 50-year-old woman who gradually lost her memory and struggled with sleep, increased aggression, and eventually paranoia.

Under the microscope, her brain was littered with tangles of protein clumps. Curiously, shiny bubbles of fat had also accumulated inside brain cells. But they weren’t in neurons—the brain cells that spark with electricity and underlie our thoughts and memories. Instead, the fatty pouches built up in supporting brain cells called glia.

Scientists have long thought toxic protein clusters lead to or exacerbate Alzheimer’s disease. Decades of work aimed at breaking down these clumps have mostly failed—earning the endeavor the nickname “graveyard of dreams.” There has been a recent win, however. In early 2023, the US Food and Drug Administration approved an Alzheimer’s drug that slightly slowed cognitive decline by inhibiting protein clumps, although amid much controversy over its safety.

A growing number of experts are exploring other ways to battle the mind-eating disorder. Stanford’s Dr. Tony Wyss-Coray thinks an answer may come from the original source: Alois Alzheimer’s first descriptions of fatty bubbles inside glial cells—but with a modern genetic twist.

In a new study, the team targeted fatty bubbles as a potential driver of Alzheimer’s disease. Using donated brain tissue from people with the disorder, they pinpointed one cell type that’s especially vulnerable to the fatty deposits—microglia, the brain’s main immune cells.

Not all people with Alzheimer’s had overly fatty microglia. Those who did harbored a specific variant of a gene, called APOE4. Scientists have long known that APOE4 increases the risk of Alzheimer’s, but the reason why has remained a mystery.

The fatty bubbles may be the answer. Lab-made microglia cells from people with APOE4 rapidly accumulated bubbles and spewed them onto neighboring cells. When treated with liquids containing the bubbles, healthy neurons developed classical signs of Alzheimer’s disease.

The results uncover a new link between genetic risk factors for Alzheimer’s and fatty bubbles in the brain’s immune cells, the team wrote in their paper.

“This opens up a new avenue for therapeutic development,” the University of Pennsylvania’s Dr. Michal Haney, who was not involved in the study, told New Scientist.

The Forgetting Gene

Two types of proteins have been at the heart of Alzheimer’s research.

One is beta-amyloid. These proteins start as wispy strands, but gradually they grasp each other and form large clumps that gunk up the outside of neurons. Another culprit is tau. Normally innocuous, tau eventually forms tangles inside neurons that can’t be easily broken down.

Together, the proteins inhibit normal neuron functions. Dissolving or blocking these clumps should, in theory, restore neuronal health, but most treatments have shown minimal or no improvement to memory or cognition in clinical trials.

Meanwhile, genome-wide studies have found a gene called APOE is a genetic regulator of the disease. It comes in multiple variants: APOE2 is protective, whereas APOE4 increases disease risk up to 12-fold—earning its nickname the “forgetting gene.” Studies are underway to genetically deliver protective variants that wipe out the negative consequences of APOE4. Researchers hope this approach can halt memory or cognitive deficits before they occur.

But why are some APOE variants protective, while others are not? Fatty bubbles may be to blame.

Cellular Gastronomy

Most cells contain little bubbles of fat. Dubbed “lipid droplets,” they’re an essential energy source. The bubbles interact with other cellular components to control a cell’s metabolism.

Each bubble has a core of intricately arranged fats surrounded by a flexible molecular “cling wrap.” Lipid droplets can rapidly grow or shrink in size to buffer toxic levels of fatty molecules in the cell and direct immune responses against infections in the brain.

APOE is a major gene regulating these lipid droplets. The new study asked if fatty deposits are the reason APOE4 increases the risk of Alzheimer’s disease.

The team first mapped all proteins in different types of cells in brain tissues donated from people with Alzheimer’s. Some had the dangerous APOE4 variant; others had APOE3, which doesn’t increase disease risk. In all, the team analyzed roughly 100,000 cells—including neurons and myriad other brain cell types, such as the immune cell microglia.

Comparing results from the two genetic variants, the team found a stark difference. People with APOE4 had far higher levels of an enzyme that generates lipid droplets, but only in microglia. The droplets collected around the nucleus—which houses our genetic material—similar to Alois Alzheimer’s first description of fatty deposits.

The lipid droplets also increased the levels of dangerous proteins in Alzheimer’s disease, including amyloid and tau. In a standard cognitive test in mice, more lipid droplets correlated with worse performance. Like humans, mice with the APOE4 variant had far more fatty microglia than those with the “neutral” APOE3, and the immune cells had higher levels of inflammation.

Although the droplets accumulated inside microglia, they also readily harmed nearby neurons.

In a test, the team transformed skin cells from people with APOE4 into a stem cell-like state. With a specific dose of chemicals, they nudged the cells to develop into neurons with the APOE4 genotype.

They then gathered secretions from microglia with either high or low levels of lipid droplets and treated the engineered neurons with the liquids. Secretions with low levels of fatty bubbles didn’t harm the cells. But neurons given doses high in lipid droplets rapidly changed tau—a classic Alzheimer’s protein—into its disease-causing form. Eventually, these neurons died off.

This isn’t the first time fatty bubbles have been linked to Alzheimer’s disease, but we now have a clearer understanding of why. Lipid droplets accumulate in microglia with APOE4, transforming these cells into an inflammatory state that harms nearby neurons—potentially leading to their death. The study adds to recent work highlighting irregular immune responses in the brain as a major driver of Alzheimer’s and other neurodegenerative diseases.

It’s yet unclear whether lowering lipid droplet levels can relieve Alzheimer’s symptoms in people with APOE4, but the team is eager to try.

One route is to genetically inhibit the enzyme that creates the lipid droplets in APOE4 microglia. Another option is to use drugs to activate the cell’s built-in disposal system—basically, a bubble full of acid—to break down the fatty bubbles. It’s a well-known strategy that’s previously been used to destroy toxic protein clumps, but it could be reworked to clear out lipid droplets.

“Our findings suggest a link between genetic risk factors for Alzheimer’s disease with microglial lipid droplet accumulation…potentially providing therapeutic strategies for Alzheimer’s disease,” wrote the team in their paper.

As a next step, they’re exploring whether the protective APOE2 variant can thwart lipid droplet accumulation in microglia, and perhaps, eventually save the brain’s memory and cognition.

Image Credit: Richard Watts, PhD, University of Vermont and Fair Neuroimaging Lab, Oregon Health and Science University

Watch an AI Robot Dog Rock an Agility Course It’s Never Seen Before


Robots doing feats of acrobatics might be a great marketing trick, but typically these displays are highly choreographed and painstakingly programmed. Now researchers have trained a four-legged AI robot to tackle complex, previously unseen obstacle courses in real-world conditions.

Creating agile robots is challenging due to the inherent complexity of the real world, the limited amount of data robots can collect about it, and the speed at which decisions need to be made to carry out dynamic movements.

Companies like Boston Dynamics have regularly released videos of their robots doing everything from parkour to dance routines. But as impressive as these feats are, they typically involve humans painstakingly programming every step or training on the same highly controlled environments over and over.

This process seriously limits the ability to transfer skills to the real world. But now, researchers from ETH Zurich in Switzerland have used machine learning to teach their robot dog ANYmal a suite of basic locomotive skills that it can then string together to tackle a wide variety of challenging obstacle courses, both indoors and outdoors, at speeds of up to 4.5 miles per hour.

“The proposed approach allows the robot to move with unprecedented agility,” write the authors of a new paper on the research in Science Robotics. “It can now evolve in complex scenes where it must climb and jump on large obstacles while selecting a non-trivial path toward its target location.”

To create a flexible yet capable system, the researchers broke the problem down into three parts and assigned a neural network to each. First, they created a perception module that takes input from cameras and lidar and uses them to build a picture of the terrain and any obstacles in it.

They combined this with a locomotion module that had learned a catalog of skills designed to help it traverse different kinds of obstacles, including jumping, climbing up, climbing down, and crouching. Finally, they merged these modules with a navigation module that could chart a course through a series of obstacles and decide which skills to invoke to clear them.
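
In outline, the three modules form a perceive-plan-act pipeline. Here is a hedged sketch of that control loop (all class and method names are invented for illustration; the real system is described in the Science Robotics paper):

```python
# Illustrative control loop for a perception -> navigation -> locomotion stack.
# Names are hypothetical; this mirrors the paper's decomposition, not its code.
class AgilityController:
    def __init__(self, perception, navigation, locomotion):
        self.perception = perception   # cameras + lidar -> terrain map
        self.navigation = navigation   # terrain map -> path and skill choice
        self.locomotion = locomotion   # chosen skill + robot state -> joint commands

    def step(self, camera_frames, lidar_scan, robot_state):
        terrain = self.perception.build_map(camera_frames, lidar_scan)
        skill, waypoint = self.navigation.plan(terrain, robot_state)   # e.g., "jump"
        return self.locomotion.execute(skill, waypoint, robot_state)   # motor targets
```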

“We replace the standard software of most robots with neural networks,” Nikita Rudin, one of the paper’s authors, an engineer at Nvidia, and a PhD student at ETH Zurich, told New Scientist. “This allows the robot to achieve behaviors that were not possible otherwise.”

One of the most impressive aspects of the research is that the robot was trained entirely in simulation. A major bottleneck in robotics is gathering enough real-world data for robots to learn from. Simulations can help gather data much more quickly by putting many virtual robots through trials in parallel and at much greater speed than is possible with physical robots.

But translating skills learned in simulation to the real world is tricky due to the inevitable gap between simple virtual worlds and the hugely complex physical world. Training a robotic system that can operate autonomously in unseen environments both indoors and outdoors is a major achievement.

The training process relied purely on reinforcement learning—effectively trial and error—rather than human demonstrations, which allowed the researchers to train the AI model on a very large number of randomized scenarios rather than having to label each manually.
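
For readers unfamiliar with the technique, reinforcement learning improves a control policy purely from reward signals gathered over many randomized trials. A generic sketch of that loop (the standard pattern, not ETH Zurich’s actual training code):

```python
def train(policy, env, episodes=10_000):
    """Generic reinforcement learning loop: act, observe reward, update."""
    for _ in range(episodes):
        env.randomize()    # a freshly randomized obstacle course each episode
        state = env.reset()
        done = False
        while not done:
            action = policy.act(state)
            next_state, reward, done = env.step(action)
            policy.update(state, action, reward, next_state)  # learn by trial and error
            state = next_state
```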

Another impressive feature is that everything runs on chips installed in the robot, rather than relying on external computers. And as well as being able to tackle a variety of different scenarios, the researchers showed ANYmal could recover from falls or slips to complete the obstacle course.

The researchers say the system’s speed and adaptability suggest robots trained in this way could one day be used for search and rescue missions in unpredictable, hard-to-navigate environments like rubble and collapsed buildings.

The approach does have limitations though. The system was trained to deal with specific kinds of obstacles, even if they varied in size and configuration. Getting it to work in more unstructured environments would require much more training in more diverse scenarios to develop a broader palette of skills. And that training is both complicated and time-consuming.

But the research is nonetheless an indication that robots are becoming increasingly capable of operating in complex, real-world environments. That suggests they could soon be a much more visible presence all around us.

Image Credit: ETH Zurich

What Is a GPU? The Chips Powering the AI Boom, and Why They’re Worth Trillions


As the world rushes to make use of the latest wave of AI technologies, one piece of high-tech hardware has become a surprisingly hot commodity: the graphics processing unit, or GPU.

A top-of-the-line GPU can sell for tens of thousands of dollars, and leading manufacturer Nvidia has seen its market valuation soar past $2 trillion as demand for its products surges.

GPUs aren’t found only in high-end AI systems, either. There are less powerful GPUs in phones, laptops, and gaming consoles, too.

By now you’re probably wondering: What is a GPU, really? And what makes them so special?

What Is a GPU?

GPUs were originally designed primarily to quickly generate and display complex 3D scenes and objects, such as those involved in video games and computer-aided design software. Modern GPUs also handle tasks such as decompressing video streams.

The “brain” of most computers is a chip called a central processing unit (CPU). CPUs can be used to generate graphical scenes and decompress videos, but they are typically far slower and less efficient at these tasks compared to GPUs. CPUs are better suited for general computation tasks, such as word processing and browsing web pages.

How Are GPUs Different From CPUs?

A typical modern CPU is made up of between 8 and 16 “cores,” each of which can process complex tasks in a sequential manner.

GPUs, on the other hand, have thousands of relatively small cores, which are designed to all work at the same time (“in parallel”) to achieve fast overall processing. This makes them well-suited for tasks that require a large number of simple operations which can be done at the same time, rather than one after another.
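
The difference is easy to see in code. A sequential loop handles one element at a time, while a vectorized expression describes the whole computation at once so parallel hardware can spread it across many cores (NumPy stands in for the idea here; real GPU code would use CUDA or a similar framework):

```python
import numpy as np

x = np.arange(100_000, dtype=np.float32)

# CPU-style sequential thinking: one element after another.
out_loop = np.empty_like(x)
for i in range(x.size):
    out_loop[i] = x[i] * 2.0 + 1.0

# GPU-style parallel thinking: one operation over all elements at once.
out_vec = x * 2.0 + 1.0

assert np.allclose(out_loop, out_vec)
```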

Traditional GPUs come in two main flavors.

First, there are standalone chips, which often come in add-on cards for large desktop computers. Second are GPUs combined with a CPU in the same chip package, which are often found in laptops and game consoles such as the PlayStation 5. In both cases, the CPU controls what the GPU does.

Why Are GPUs So Useful for AI?

It turns out GPUs can be repurposed to do more than generate graphical scenes.

Many of the machine learning techniques behind artificial intelligence, such as deep neural networks, rely heavily on various forms of matrix multiplication.

This is a mathematical operation where very large sets of numbers are multiplied and summed together. These operations are well-suited to parallel processing and hence can be performed very quickly by GPUs.
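
Concretely, “multiplied and summed together” is the heart of matrix multiplication: each output element is an independent multiply-and-add over one row and one column, which is why thousands of GPU cores can each compute one in parallel. A minimal demonstration:

```python
import numpy as np

a = np.random.rand(4, 3)
b = np.random.rand(3, 5)

# Explicit definition: each output element is a sum of elementwise products.
c = np.zeros((4, 5))
for i in range(4):
    for j in range(5):   # every (i, j) entry is independent -> parallelizable
        c[i, j] = sum(a[i, k] * b[k, j] for k in range(3))

assert np.allclose(c, a @ b)   # matches the optimized library routine
```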

What’s Next for GPUs?

The number-crunching prowess of GPUs is steadily increasing due to the rise in the number of cores and their operating speeds. These improvements are primarily driven by improvements in chip manufacturing by companies such as TSMC in Taiwan.

The size of individual transistors—the basic components of any computer chip—is decreasing, allowing more transistors to be placed in the same amount of physical space.

However, that is not the entire story. While traditional GPUs are useful for AI-related computation tasks, they are not optimal.

Just as GPUs were originally designed to accelerate computers by providing specialized processing for graphics, there are accelerators that are designed to speed up machine learning tasks. These accelerators are often referred to as data center GPUs.

Some of the most popular accelerators, made by companies such as AMD and Nvidia, started out as traditional GPUs. Over time, their designs evolved to better handle various machine learning tasks, for example by supporting the more efficient “brain float” number format.
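
“Brain float” (bfloat16) keeps float32’s sign bit and all 8 exponent bits but only 7 of its 23 mantissa bits, preserving numeric range while halving memory. A minimal illustration that emulates it by bit masking (simple truncation, ignoring proper rounding):

```python
import numpy as np

def to_bfloat16(values):
    # bfloat16 is effectively the top 16 bits of a float32: same sign and
    # exponent, mantissa cut from 23 bits to 7. Zeroing the low bits emulates it.
    bits = np.asarray(values, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

print(to_bfloat16([3.14159265]))   # ~3.140625: coarser precision, same range as float32
```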

Other accelerators, such as Google’s tensor processing units and Tenstorrent’s Tensix cores, were designed from the ground up to speed up deep neural networks.

Data center GPUs and other AI accelerators typically come with significantly more memory than traditional GPU add-on cards, which is crucial for training large AI models. The larger the AI model, the more capable and accurate it is.
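
The memory requirement is simple arithmetic: parameter count times bytes per parameter. A rough estimate (the model sizes below are illustrative, not taken from the article):

```python
def model_memory_gb(n_params, bytes_per_param=2):   # 2 bytes per weight in bfloat16
    return n_params * bytes_per_param / 1e9

for n in (7e9, 70e9, 175e9):   # illustrative model sizes
    print(f"{n/1e9:.0f}B parameters -> ~{model_memory_gb(n):.0f} GB just for weights")
# 7B -> 14 GB, 70B -> 140 GB, 175B -> 350 GB: quickly outgrowing a single add-on card
```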

To further speed up training and handle even larger AI models, such as ChatGPT, many data center GPUs can be pooled together to form a supercomputer. This requires more complex software to properly harness the available number crunching power. Another approach is to create a single very large accelerator, such as the “wafer-scale processor” produced by Cerebras.

Are Specialized Chips the Future?

CPUs have not been standing still either. Recent CPUs from AMD and Intel have built-in low-level instructions that speed up the number-crunching required by deep neural networks. This additional functionality mainly helps with “inference” tasks—that is, using AI models that have already been developed elsewhere.

To train the AI models in the first place, large GPU-like accelerators are still needed.

It is possible to create ever more specialized accelerators for specific machine learning algorithms. Recently, for example, a company called Groq has produced a “language processing unit” (LPU) specifically designed for running large language models along the lines of ChatGPT.

However, creating these specialized processors takes considerable engineering resources. History shows the usage and popularity of any given machine learning algorithm tends to peak and then wane—so expensive specialized hardware may become quickly outdated.

For the average consumer, however, that’s unlikely to be a problem. The GPUs and other chips in the products you use are likely to keep quietly getting faster.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Nvidia

Colossal Creates Elephant Stem Cells for the First Time in Quest to Revive the Woolly Mammoth


The last woolly mammoths roamed the vast Arctic tundra 4,000 years ago. Their genes still live on in a majestic animal today—the Asian elephant.

With 99.6 percent similarity in their genetic makeup, Asian elephants are the perfect starting point for a bold plan to bring the mammoth—or something close to it—back from extinction. The project, launched by biotechnology company Colossal in 2021, raised eyebrows for its moonshot goal.

The overall playbook sounds straightforward.

The first step is to sequence and compare the genomes of mammoth and elephant. Next, scientists will identify the genes behind the physical traits—long hair, fatty deposits—that allowed mammoths to thrive in freezing temperatures and then insert them into elephant cells using gene editing. Finally, the team will transfer the nucleus—which houses DNA—from the edited cells into an elephant egg and implant the embryo into a surrogate.

The problem? Asian elephants are endangered, and their cells—especially eggs—are hard to come by.

Last week, the company reported a major workaround. For the first time, they transformed elephant skin cells into stem cells, each with the potential to become any cell or tissue in the body.

The advance makes it easier to validate gene editing results in the lab before committing to a potential pregnancy—which lasts up to 22 months for elephants. Scientists could, for example, coax the engineered elephant stem cells to become hair cells and test for gene edits that give the mammoth its iconic thick, warm coat.

These induced pluripotent stem cells, or iPSCs, have been especially hard to make from elephant cells. The animals “are a very special species and we have only just begun to scratch the surface of their fundamental biology,” said Dr. Eriona Hysolli, who heads up biosciences at Colossal, in a press release.

Because the approach only needs a skin sample from an Asian elephant, it goes a long way to protecting the endangered species. The technology could also support conservation for living elephants by providing breeding programs with artificial eggs made from skin cells.

“Elephants might get the ‘hardest to reprogram’ prize,” said Dr. George Church, a Harvard geneticist and Colossal cofounder, “but learning how to do it anyway will help many other studies, especially on endangered species.”

Turn Back the Clock

Nearly two decades ago, Japanese biologist Dr. Shinya Yamanaka revolutionized biology by restoring mature cells to a stem cell-like state.

First demonstrated in mice, the Nobel Prize-winning technique requires only four proteins, together called the Yamanaka factors. The reprogrammed cells, often derived from skin cells, can develop into a range of tissues with further chemical guidance.

Induced pluripotent stem cells (iPSCs), as they’re called, have transformed biology. They’re critical to the process of building brain organoids—miniature balls of neurons that spark with activity—and can be coaxed into egg cells or models of early human embryos.

The technology is well-established for mice and humans. Not so for elephants. “In the past, a multitude of attempts to generate elephant iPSCs have not been fruitful,” said Hysolli.

Most elephant cells died when treated with the standard recipe. Others turned into “zombie” senescent cells—living but unable to perform their usual biological functions—or had little change from their original identity.

Further sleuthing found the culprit: a protein called TP53. Known for its ability to fight off cancer, the protein is often dubbed the genetic gatekeeper. When the gene for TP53 is turned on, the protein urges pre-cancerous cells to self-destruct without harming their neighbors.

Unfortunately, TP53 also hinders iPSC reprogramming. Some of the Yamanaka factors mimic the first stages of cancer growth, which could cause edited cells to self-destruct. Elephants have a hefty 29 copies of the “protector” gene. Together, they could easily squash cells with mutated DNA, including those that have had their genes edited.

“We knew p53 was going to be a big deal,” Church told the New York Times.

To get around the gatekeeper, the team devised a chemical cocktail to inhibit TP53 production. With a subsequent dose of the reprogramming factors, they were able to make the first elephant iPSCs out of skin cells.

A series of tests showed the transformed cells looked and behaved as expected. They had genes and protein markers often seen in stem cells. When allowed to further develop into a cluster of cells, they formed a three-layered structure critical for early embryo development.

“We’ve been really waiting for these things desperately,” Church told Nature. The team published their results, which have not yet been peer-reviewed, on the preprint server bioRxiv.

Long Road Ahead

The company’s current playbook for bringing back the mammoth relies on cloning technologies, not iPSCs.

But the cells are valuable as proxies for elephant egg cells or even embryos, allowing the scientists to continue their work without harming endangered animals.

They may, for example, transform the new stem cells into egg or sperm cells—a feat so far only achieved in mice—for further genetic editing. Another idea is to directly transform them into embryo-like structures equipped with mammoth genes.

The company is also looking into developing artificial wombs to help nurture any edited embryos and potentially bring them to term. In 2017, an artificial womb kept a premature lamb alive and developing for weeks, and such systems are now moving toward human trials. These systems would lessen the need for elephant surrogates and avoid putting the animals’ natural reproductive cycles at risk.

As the study is a preprint, its results haven’t yet been vetted by other experts in the field. Many questions remain. For example, do the reprogrammed cells maintain their stem cell status? Can they be transformed into multiple tissue types on demand?

Reviving the mammoth is Colossal’s ultimate goal. But Dr. Vincent Lynch at the University of Buffalo, who has long tried to make iPSCs from elephants, thinks the results could have a broader reach.

Elephants are remarkably resistant to cancer. No one knows why. Because the study’s iPSCs are stripped of TP53, a cancer-protective gene, they could help scientists identify the genetic code that allows elephants to fight tumors and potentially inspire new treatments for us as well.

Next, the team hopes to recreate mammoth traits—such as long hair and fatty deposits—in cell and animal models made from gene-edited elephant cells. If all goes well, they’ll employ a technique like the one used to clone Dolly the sheep to birth the first calves.

Whether these animals can be called mammoths is still up for debate. Their genome won’t exactly match the extinct species. Further, animal biology and behavior strongly depend on interactions with the environment. Our climate has changed dramatically since mammoths went extinct 4,000 years ago. The Arctic tundra—their old home—is rapidly melting. Can the resurrected animals adjust to an environment they weren’t adapted to roam?

Animals also learn from each other. Without a living mammoth to show a calf how to be a mammoth in its natural habitat, it may adopt a completely different set of behaviors.

Colossal has a general plan to tackle these difficult questions. In the meantime, the work will help the project make headway without putting elephants at risk, according to Church.

“This is a momentous step,” said Ben Lamm, cofounder and CEO of Colossal. “Each step brings us closer to our long-term goals of bringing back this iconic species.”

Image Credit: Colossal Biosciences

Russia and China Want to Build a Nuclear Power Plant on the Moon


Supporting any future settlement on the moon would require considerable amounts of energy. Russia and China think a nuclear power plant is the best option, and they have plans to build one by the mid-2030s.

Lunar exploration is back in fashion these days, with a host of national space agencies as well as private companies launching missions to our nearest astronomical neighbor and announcing plans to build everything from human settlements to water mining operations and telescopes on its surface.

These ambitious plans face a major challenge though—how to power all this equipment. The go-to energy source in space is solar power, but lunar nights last 14 days, so unless we want to haul huge numbers of batteries along for the ride, it won’t suffice for more permanent installations.

That’s why Russia and China are currently working on a plan to develop a nuclear power plant that could support the pair’s ambitious joint exploration program, Yuri Borisov, the head of Russia’s space agency Roscosmos, said during a recent public event.

“Today we are seriously considering a project—somewhere at the turn of 2033-2035—to deliver and install a power unit on the lunar surface together with our Chinese colleagues,” he said, according to Reuters.

Borisov provided few details other than saying that one of Russia’s main contributions to the countries’ lunar plans was its expertise in “nuclear space energy.” He added that they were also developing a nuclear-powered spaceship designed to ferry cargo around in orbit.

“We are indeed working on a space tugboat,” he said. “This huge, cyclopean structure that would be able, thanks to a nuclear reactor and high-power turbines…to transport large cargoes from one orbit to another, collect space debris, and engage in many other applications.”

Whether these plans will ever come to fruition remains unclear though, considering the increasingly dilapidated state of Russia’s space industry. Last year, the country’s Luna-25 mission, its first attempt to revisit the moon in decades, smashed into the lunar surface after experiencing problems in orbit.

Russia and China are supposed to be working together to build the so-called International Lunar Research Station at the moon’s south pole, with each country sending half a dozen spacecraft to complete the facility. But in a recent presentation on the project by senior Chinese space scientists, there was no mention of Russia’s missions, according to the South China Morning Post.

The idea of launching nuclear material into space may sound like an outlandish plan, but Russia and China are far from alone. In 2022, NASA awarded three companies $5 million contracts to investigate the feasibility of a small nuclear reactor that could support the agency’s moon missions. In January, it announced it was extending the contracts, targeting a working reactor ready for launch by the early 2030s.

“The lunar night is challenging from a technical perspective, so having a source of power such as this nuclear reactor, which operates independent of the sun, is an enabling option for long-term exploration and science efforts on the moon,” NASA’s Trudy Kortes said in a statement.

NASA has given the companies plenty of leeway to design their reactors, as long as they weigh under six metric tons and can produce 40 kilowatts of electricity, enough to power 33 homes back on Earth. Crucially, they must be able to run for a decade without any human intervention.
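
Those specifications pencil out in a few lines: average power per home, plus the energy a solar-plus-battery system would have to store to ride through the two-week lunar night (illustrative arithmetic, not NASA’s figures):

```python
power_kw = 40             # reactor output required by NASA's contracts
homes = 33
print(power_kw / homes)   # ~1.2 kW average draw per home

night_hours = 14 * 24     # a lunar night lasts about 14 Earth days
print(power_kw * night_hours)   # 13,440 kWh of storage a solar-only outpost would need
```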

The UK Space Agency has also given engineering giant Rolls-Royce £2.9 million ($3.7 million) to research how nuclear power could help future manned moon bases. The company unveiled a concept model of a micro nuclear reactor at the UK Space Conference last November and says it hopes to have a working version ready to send to the moon by the early 2030s.

While nuclear power’s environmental impacts and high costs are causing its popularity to fade back on Earth, it seems like it may have a promising future further out in the solar system.

Image Credit: LRO recreation of Apollo 8 Earthrise / NASA

This Week’s Awesome Tech Stories From Around the Web (Through March 9)

TECH

These Companies Have a Plan to Kill Apps
Julian Chokkattu | Wired
“Everyone wants to kill the app. There’s a wave of companies building so-called app-less phones and gadgets, leveraging artificial intelligence advancements to create smarter virtual assistants that can handle all kinds of tasks through one portal, bypassing the need for specific apps for a particular function. We might be witnessing the early stages of the first major smartphone evolution since the introduction of the iPhone—or an AI-hype-fueled gimmick.”

ARTIFICIAL INTELLIGENCE

Anthropic Sets a New Gold Standard: Your Move, OpenAI
Maxwell Zeff | Gizmodo
“Claude 3 most notably outperforms ChatGPT and Gemini in coding, one of AI’s most popular early use cases. Claude Opus scores an 85% success rate in zero-shot coding, compared to GPT-4’s 67% and Gemini’s 74%. Claude also outperforms the competition when it comes to reasoning, math problem-solving, and basic knowledge (MMLU). However, [Claude] Sonnet and [Claude] Haiku, which are cheaper and faster, are competitive with OpenAI and Google’s most advanced models as well.”

ARTIFICIAL INTELLIGENCE

Why Most AI Benchmarks Tell Us So Little
Kyle Wiggers | TechCrunch
“On Tuesday, startup Anthropic released a family of generative AI models that it claims achieve best-in-class performance. …But what metrics are they talking about? When a vendor says a model achieves state-of-the-art performance or quality, what’s that mean, exactly? Perhaps more to the point: Will a model that technically ‘performs’ better than some other model actually feel improved in a tangible way? On that last question, not likely.”

FUTURE OF WORK

AI Prompt Engineering Is Dead
Dina Genkina | IEEE Spectrum
“‘Every business is trying to use it for virtually every use case that they can imagine,’ [Austin] Henley says. To do so, they’ve enlisted the help of prompt engineers professionally. However, new research suggests that prompt engineering is best done by the model itself, and not by a human engineer. This has cast doubt on prompt engineering’s future—and increased suspicions that a fair portion of prompt-engineering jobs may be a passing fad, at least as the field is currently imagined.”

COMPUTING

D-Wave Says Its Quantum Computers Can Solve Otherwise Impossible Tasks
Matthew Sparkes | New Scientist
“Quantum computing firm D-Wave says its machines are the first to achieve ‘computational supremacy’ by solving a practically useful problem that would otherwise take millions of years on an ordinary supercomputer. …However, outside observers are more cautious.”

TRANSPORTATION

California Gives Waymo the Green Light to Expand Robotaxi Operations
Wes Davis | The Verge
“Waymo is now allowed to operate its self-driving robotaxis on highways in parts of Los Angeles and in the Bay Area following a California regulator’s approval of its expansion plans on Friday. This means the company’s cars will now be allowed to drive at up to 65mph on local roads and highways in approved areas.”

SPACE

Voyager 1, First Craft in Interstellar Space, May Have Gone Dark
Orlando Mayorquin | The New York Times
“Voyager 1 discovered active volcanoes, moons and planetary rings, proving along the way that Earth and all of humanity could be squished into a single pixel in a photograph, a ‘pale blue dot,’ as the astronomer Carl Sagan called it. It stretched a four-year mission into the present day, embarking on the deepest journey ever into space. Now, it may have bid its final farewell to that faraway dot.”

ENVIRONMENT

Pulling Gold Out of E-Waste Suddenly Becomes Super-Profitable
Paul McClure | New Atlas
“A new method for recovering high-purity gold from discarded electronics is paying back $50 for every dollar spent, according to researchers—who found the key gold-filtering substance in cheesemaking, of all places. …’The fact I love the most is that we’re using a food industry byproduct to obtain gold from electronic waste,’ said Raffaele Mezzenga, the study’s corresponding author. ‘You can’t get more sustainable than that!'”

ETHICS

5 Years After San Francisco Banned Face Recognition, Voters Ask for More Surveillance
Lauren Goode and Tom Simonite | Wired
“San Francisco made history in 2019 when its Board of Supervisors voted to ban city agencies including the police department from using face recognition. About two dozen other US cities have since followed suit. But on Tuesday, San Francisco voters appeared to turn against the idea of restricting police technology, backing a ballot proposition that will make it easier for city police to deploy drones and other surveillance tools.”

DIGITAL MEDIA

Researchers Tested Leading AI Models for Copyright Infringement Using Popular Books, and GPT-4 Performed Worst
Hayden Field | CNBC
“The four models it tested were OpenAI’s GPT-4, Anthropic’s Claude 2, Meta’s Llama 2 and Mistral AI’s Mixtral. ‘We pretty much found copyrighted content across the board, across all models that we evaluated, whether it’s open source or closed source,’ Rebecca Qian, Patronus AI’s cofounder and CTO, who previously worked on responsible AI research at Meta, told CNBC in an interview.”

SPACE

SpaceX Just Showed Us What Every Day Could Be Like in Spaceflight
Stephen Clark | Ars Technica
“Between Sunday night and Monday night, SpaceX teams in Texas, Florida, and California supervised three Falcon 9 rocket launches and completed a full dress rehearsal ahead of the next flight of the company’s giant Starship launch vehicle. This was a remarkable sequence of events, even for SpaceX, which has launched a mission at an average rate of once every three days since the start of the year. We’ve reported on this before, but it’s worth reinforcing that no launch provider, commercial or government, has ever operated at this cadence.”

AUTOMATION

AI Losing Its Grip on Fast Food Drive-Thru Lanes
Angela L. Pagán | The Takeout
“Presto’s technology does use AI voice recognition to take down orders in the drive-thru lane, but a significant portion of the process still requires an actual employee’s involvement as well. The bot takes down the order from the customer, but it is still the responsibility of the employees to input the order and ensure its accuracy. The voice assistant technology has gone through multiple iterations, but even its most advanced version is still only completing 30% of orders without the help of a human being.”

Image Credit: Pawel Czerwinski / Unsplash

This AI Can Design the Machinery of Life With Atomic Precision


Proteins are social creatures. They’re also chameleons. Depending on a cell’s needs, they rapidly transform in structure and grab onto other biomolecules in an intricate dance.

It’s not molecular dinner theater. Rather, these partnerships are the heart of biological processes. Some turn genes on or off. Others nudge aging “zombie” cells to self-destruct or keep our cognition and memory in tip-top shape by reshaping brain networks.

These connections have already inspired a wide range of therapies—and new therapies could be accelerated by AI that can model and design biomolecules. But previous AI tools solely focused on proteins and their interactions, casting their non-protein partners aside.

This week, a study in Science expanded AI’s ability to model a wide variety of other biomolecules that physically grab onto proteins, including the iron-containing small molecules that form the center of oxygen carriers.

Led by Dr. David Baker at the University of Washington, the new AI broadens the scope of biomolecular design. Dubbed RoseTTAFold All-Atom, it builds upon a previous protein-only system to incorporate a myriad of other biomolecules, such as DNA and RNA. It also adds small molecules—for example, iron—that are integral to certain protein functions.

The AI learned only from the sequences and chemical makeup of the components, without being given their 3D structures, yet it can map out complex molecular machines at the atomic level.

In the study, when paired with generative AI, RoseTTAFold All-Atom created proteins that easily grabbed onto a heart disease medication. The algorithm also generated proteins that regulate heme, an iron-rich molecule that helps blood carry oxygen, and bilin, a chemical in plants and bacteria that absorbs light for their metabolism.

These examples are just proofs of concept. The team is releasing RoseTTAFold All-Atom publicly so scientists can design assemblies of multiple interacting bio-components with far more complexity than protein-only complexes. In turn, the creations could lead to new therapies.

“Our goal here was to build an AI tool that could generate more sophisticated therapies and other useful molecules,” said study author Woody Ahern in a press release.

Dream On

In 2020, Google DeepMind’s AlphaFold and Baker Lab’s RoseTTAFold solved the protein structure prediction problem that had baffled scientists for half a century and ushered in a new era of protein research. Updated versions of these algorithms mapped all protein structures both known and unknown to science.

Next, generative AI—the technology behind OpenAI’s ChatGPT and Google’s Gemini—sparked a creative frenzy of designer proteins with an impressive range of activity. Some newly generated proteins regulated a hormone that kept calcium levels in check. Others led to artificial enzymes or proteins that could readily change their shape like transistors in electronic circuits.

By hallucinating a new world of protein structures, generative AI has the potential to dream up a generation of synthetic proteins to regulate our biology and health.

But there’s a problem. Designer protein AI models have tunnel vision: They are too focused on proteins.

When we envision life’s molecular components, proteins, DNA, and fatty acids come to mind. But inside a cell, these structures are often held together by small molecules that mesh with surrounding components, together forming a functional bio-assembly.

One example is heme, a ring-like molecule that incorporates iron. Heme is the basis of hemoglobin in red blood cells, which shuttles oxygen throughout the body and grabs onto surrounding protein “hooks” using a variety of chemical bonds.

Unlike proteins or DNA, which can be modeled as a string of molecular “letters,” small molecules and their interactions are hard to capture. But they’re critical to biology’s complex molecular machines and can dramatically alter their functions.

Which is why, in their new study, the researchers aimed to broaden AI’s scope beyond proteins.

“We set out to develop a structure prediction method capable of generating 3D coordinates for all atoms” for a biological molecule, including proteins, DNA, and other modifications, the authors wrote in their paper.

Tag Team

The team began by modifying a previous protein modeling AI to incorporate other molecules.

The AI works on three levels: The first analyzes a protein’s one-dimensional “letter” sequence, like words on a page. Next, a 2D map tracks how far each protein “word” is from another. Finally, 3D coordinates—a bit like GPS—map the overall structure of the protein.

Then comes the upgrade. To incorporate small molecule information into the model, the team added data about atomic sites and chemical connections into the first two layers.

In the third, they focused on chirality: whether a chemical’s structure is left- or right-handed. Like our hands, chemicals can have mirrored structures with vastly different biological consequences, and only the correct “handedness” of a chemical can fit a given bio-assembly “glove.”
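
To make those three layers concrete, here is a minimal sketch of how inputs to such a model might be organized. This is illustrative Python only, not code from the study, and every name in it is invented.

```python
from dataclasses import dataclass, field

# Illustrative only: the real model consumes tensors, and these field names
# are invented to mirror the three levels described in the text.

@dataclass
class BioAssemblyInput:
    # Level 1: the 1D "letter" track -- protein residues plus, after the
    # upgrade, per-atom element symbols for any small molecule.
    protein_sequence: str                    # e.g., "MKTAYIAKQR"
    ligand_atoms: list[str]                  # e.g., ["Fe", "N", "N", "N", "N"]
    # Level 2 upgrade: the ligand's chemical connectivity (its bond graph).
    ligand_bonds: list[tuple[int, int]]      # pairs of atom indices
    # Level 3 extra: chirality flags, since mirror-image forms can behave
    # very differently in a binding pocket.
    chiral_centers: dict[int, str] = field(default_factory=dict)  # {atom: "R" or "S"}

    def n_tokens(self) -> int:
        return len(self.protein_sequence) + len(self.ligand_atoms)

# A heme-like input: a short protein plus an iron center held by nitrogens.
assembly = BioAssemblyInput(
    protein_sequence="MKTAYIAKQR",
    ligand_atoms=["Fe", "N", "N", "N", "N"],
    ligand_bonds=[(0, 1), (0, 2), (0, 3), (0, 4)],
)

# Level 2: a pairwise map over all tokens (residues and atoms alike), which
# the network refines before emitting 3D coordinates for every atom.
n = assembly.n_tokens()
pair_map = [[0.0] * n for _ in range(n)]
```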

RoseTTAFold All-Atom was then trained on multiple datasets with hundreds of thousands of datapoints describing proteins, small molecules, and their interactions. Eventually, it learned general properties of small molecules useful for building plausible protein assemblies. As a sanity check, the team also added a “confidence gauge” to identify high-quality predictions—those that lead to stable and functional bio-assemblies.

Unlike previous protein-only AI models, RoseTTAFold All-Atom “can model full biomolecular systems,” wrote the team.

In a series of tests, the upgraded model outperformed previous methods when learning to “dock” small molecules onto a given protein—a key component of drug discovery—by rapidly predicting interactions between proteins and non-protein molecules.

Brave New World

Incorporating small molecules opens a whole new level of custom protein design.

As a proof of concept, the team meshed RoseTTAFold All-Atom with a generative AI model they had previously developed and designed protein partners for three different small molecules.

The first was digoxigenin, which is used to treat heart diseases but can have side effects. A protein that grabs onto it reduces toxicity. Even without prior knowledge of the molecule, the AI designed several protein binders that tempered digoxigenin levels when tested in cultured cells.

The AI also designed proteins that bind to heme, a small molecule critical for oxygen transfer in red blood cells, and bilin, which helps a variety of creatures absorb light.

Unlike previous methods, the team explained, the AI can “readily generate novel proteins” that grab onto small molecules without any expert knowledge.

It can also make highly accurate predictions about the strength of connections between proteins and small molecules at the atomic level, making it possible to rationally build a whole new universe of complex biomolecular structures.

“By empowering scientists everywhere to generate biomolecules with unprecedented precision, we’re opening the door to groundbreaking discoveries and practical applications that will shape the future of medicine, materials science, and beyond,” said Baker.

Image Credit: Ian C. Haydon

A Google AI Watched 30,000 Hours of Video Games—Now It Makes Its Own

0

AI continues to generate plenty of light and heat. The best models in text and images—now commanding subscriptions and being woven into consumer products—are competing for inches. OpenAI, Google, and Anthropic are all, more or less, neck and neck.

It’s no surprise then that AI researchers are looking to push generative models into new territory. As AI requires prodigious amounts of data, one way to forecast where things are going next is to look at what data is widely available online, but still largely untapped.

Video, of which there is plenty, is an obvious next step. Indeed, last month, OpenAI previewed a new text-to-video AI called Sora that stunned onlookers.

But what about video…games?

Ask and Receive

It turns out there are quite a few gamer videos online. Google DeepMind says it trained a new AI, Genie, on 30,000 hours of curated video footage showing gamers playing simple platformers—think early Nintendo games—and now it can create examples of its own.

Genie turns a simple image, photo, or sketch into an interactive video game.

Given a prompt, say a drawing of a character and its surroundings, the AI can then take input from a player to move the character through its world. In a blog post, DeepMind showed Genie’s creations navigating 2D landscapes, walking around or jumping between platforms. Like a snake eating its tail, some of these worlds were even sourced from AI-generated images.

In contrast to traditional video games, Genie generates these interactive worlds frame by frame. Given a prompt and command to move, it predicts the most likely next frames and creates them on the fly. It even learned to include a sense of parallax, a common feature in platformers where the foreground moves faster than the background.

Notably, the AI’s training didn’t include labels. Rather, Genie learned to correlate input commands—like, go left, right, or jump—with in-game movements simply by observing examples in its training. That is, when a character in a video moved left, there was no label linking the command to the motion. Genie figured that part out by itself. That means, potentially, future versions could be trained on as much applicable video as there is online.
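
To make the frame-by-frame idea concrete, here is a toy sketch of the generation loop. It is emphatically not DeepMind’s model: the “world model” below just moves a marker around a character grid, but the control flow mirrors the autoregressive process described in the text, with a small, hand-picked action set standing in for Genie’s learned latent actions.

```python
# Toy sketch of an autoregressive, action-conditioned frame loop. Real Genie
# predicts each new image frame from all prior frames plus a latent action
# token; here a hand-written rule stands in for the neural network.

LEFT, RIGHT, JUMP = 0, 1, 2  # stand-ins for Genie's learned latent actions

def predict_next_frame(frames: list[list[str]], action: int) -> list[str]:
    last = frames[-1]
    row, col = next((r, c) for r, line in enumerate(last)
                    for c, ch in enumerate(line) if ch == "@")
    grid = [list(line) for line in last]
    grid[row][col] = "."
    if action == LEFT and col > 0:
        col -= 1
    elif action == RIGHT and col < len(grid[0]) - 1:
        col += 1
    elif action == JUMP and row > 0:
        row -= 1
    grid[row][col] = "@"
    return ["".join(line) for line in grid]

frames = [["....", ".@..", "####"]]      # the prompt: a sketch of a tiny world
for action in [RIGHT, RIGHT, JUMP]:      # the player's inputs
    frames.append(predict_next_frame(frames, action))
for frame in frames:
    print("\n".join(frame), "\n")
```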

The AI is an impressive proof of concept, but it’s still very early in development, and DeepMind isn’t planning to make the model public yet.

The games themselves are pixelated worlds streaming by at a plodding one frame per second. By comparison, contemporary video games can hit 60 or 120 frames per second. Also, like all generative algorithms, Genie generates strange or inconsistent visual artifacts. And it’s prone to hallucinating “unrealistic futures,” the team wrote in their paper describing the AI.

That said, there are a few reasons to believe Genie will improve from here.

Whipping Up Worlds

Because the AI can learn from unlabeled online videos and is still a modest size—just 11 billion parameters—there’s ample opportunity to scale up. Bigger models trained on more information tend to improve dramatically. And with a growing industry focused on inference—the process by which a trained AI performs tasks, like generating images or text—it’s likely to get faster.

DeepMind says Genie could help people, like professional developers, make video games. But like OpenAI—which believes Sora is about more than videos—the team is thinking bigger. The approach could go well beyond video games.

One example: AI that can control robots. The team trained a separate model on video of robotic arms completing various tasks. The model learned to manipulate the robots and handle a variety of objects.

DeepMind also said Genie-generated video game environments could be used to train AI agents. It’s not a new strategy. In a 2021 paper, another DeepMind team outlined a video game called XLand that was populated by AI agents and an AI overlord generating tasks and games to challenge them. The idea that the next big step in AI will require algorithms that can train one another or generate synthetic training data is gaining traction.

All this is the latest salvo in an intense competition between OpenAI and Google to show progress in AI. While others in the field, like Anthropic, are advancing multimodal models akin to GPT-4, Google and OpenAI also seem focused on algorithms that simulate the world. Such algorithms may be better at planning and interaction. Both will be crucial skills for the AI agents the organizations seem intent on producing.

“Genie can be prompted with images it has never seen before, such as real world photographs or sketches, enabling people to interact with their imagined virtual worlds—essentially acting as a foundation world model,” the researchers wrote in the Genie blog post. “We focus on videos of 2D platformer games and robotics but our method is general and should work for any type of domain, and is scalable to ever larger internet datasets.”

Similarly, when OpenAI previewed Sora last month, researchers suggested it might herald something more foundational: a world simulator. That is, both teams seem to view the enormous cache of online video as a way to train AI to generate its own video, yes, but also to more effectively understand and operate out in the world, online or off.

Whether this pays dividends, or is sustainable long term, is an open question. The human brain operates on a light bulb’s worth of power; generative AI uses up whole data centers. But it’s best not to underestimate the forces at play right now—in terms of talent, tech, brains, and cash—aiming to not only improve AI but make it more efficient.

We’ve seen impressive progress in text, images, audio, and all three together. Videos are the next ingredient being thrown in the pot, and they may make for an even more potent brew.

Image Credit: Google DeepMind

CRISPRed Pork May Be Coming to a Supermarket Near You

0

Many of us appreciate a juicy pork chop or a slab of brown sugar ham. Pork is the third most consumed meat in the US, with a buzzing industry to meet demand.

But for over three decades, pig farmers have been plagued by a pesky virus that causes porcine reproductive and respiratory syndrome (PRRS). Also known as blue ear—for its most notable symptom—the virus spreads through the air like SARS-CoV-2, the bug behind Covid-19.

Infected young pigs spike a high fever with persistent coughing and are unable to gain weight. In pregnant sows, the virus often causes miscarriage or the birth of dead or stunted piglets.

According to one estimate, blue ear costs pork producers in North America more than $600 million annually. While a vaccine is available, it’s not always effective at stopping viral spread.

What if pigs couldn’t be infected in the first place?

This month, a team at Genus, a British biotechnology company focused on animal genetics, introduced a new generation of CRISPR-edited pigs resistant to the PRRS virus. In early embryos, the team disabled a protein the virus exploits to invade cells. The edited piglets were completely immune to the virus, even when housed with infected peers.

Here’s the kicker. Rather than using lab-bred pigs, the team edited four genetically diverse lines of commercial pigs bred for consumption. This isn’t just a lab experiment. “It’s actually doing it in the real world,” Dr. Rodolphe Barrangou at North Carolina State University, who was not involved in the work, told Science.

With PRRS virus being a massive headache, there’s high incentive for farmers to breed virus-resistant pigs at a commercial scale. Dr. Raymond Rowland at the University of Illinois, who helped establish the first PRRS-resistant pigs in the lab, said gene editing is a way “to create a more perfect life” for animals and farmers—and ultimately, to benefit consumers too.

“The pig never gets the virus. You don’t need vaccines; you don’t need a diagnostic test. It takes everything off the table,” he told MIT Technology Review.

Genus is seeking approval for widespread distribution from the US Food and Drug Administration (FDA), which it hopes will come by the end of the year.

An Achilles Heel

The push towards marketable CRISPR pork builds on pioneering results from almost a decade ago.

The PRRS virus silently emerged in the late 1980s, and its impact was almost immediate. Like Covid-19, the virus was completely new to science and pigs, resulting in massive die-offs and birth defects. Farmers quickly set up protocols to control its spread. These will likely sound familiar: Farmers began disinfecting everything, showering and changing into clean clothes, and quarantining any potentially infected pigs.

But the virus still slipped through these preventative measures and spread like wildfire. The only solution was to cull infected animals, costing their keepers profit and heartache. Scientists eventually developed multiple vaccines and drugs to control the virus, but these are costly and burdensome and none are completely effective.

In 2016, Dr. Randall Prather at the University of Missouri asked: What if we change the pig itself? With some molecular sleuthing, his team found the entryway for the virus—a protein called CD163 that dots the surface of a type of immune cell in the lung.

Using the gene editing tool CRISPR-Cas9, the team tried multiple ways to destroy the protein—inserting genetic letters, deleting some, or swapping out chunks of the gene behind CD163. Eventually, they discovered a way to disable it without otherwise harming the pigs.

When challenged with a hefty dose of the PRRS virus—roughly 100,000 infectious viral particles—non-edited pigs developed severe diarrhea and their muscles wasted away, even when given extra dietary supplements. In contrast, CRISPRed pigs showed no signs of infection, and their lungs maintained a healthy, normal structure. They also readily fought off the virus when housed in close quarters with infected peers.

While promising, the results were a laboratory proof of concept. Genus has now translated this work into the real world.

Trotting On

The team started with four genetic lines of pigs used in the commercial production of pork. Veterinarians carefully extracted eggs from females under anesthesia and fertilized them in an on-site in vitro fertilization (IVF) lab. They added CRISPR into the mix at the same time, with the goal of precisely snipping out a part of CD163 that directly interacts with the virus.

Two days later, the edited embryos were implanted into surrogates that gave birth to healthy gene-edited offspring. Not all piglets had the edited gene. The team next bred those that did have the edit and eventually established a line of pigs with both copies of the CD163 gene disabled. Although CRISPR-Cas9 can have off-target effects, the piglets seemed normal. They happily chomped away at food and gained weight at a steady pace.
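
The breeding step relies on textbook Mendelian arithmetic: crossing two single-copy carriers yields roughly one fully edited piglet in four. Here is a toy simulation of that cross (illustrative only, not the Genus protocol):

```python
import random

# "-" = disabled CD163 allele, "+" = unedited. Each parent passes one
# randomly chosen allele to each offspring, as in a Punnett square.

def offspring(parent_a: str, parent_b: str) -> str:
    return random.choice(parent_a) + random.choice(parent_b)

random.seed(0)
litter = [offspring("+-", "+-") for _ in range(10_000)]  # carrier x carrier
fully_edited = sum(piglet == "--" for piglet in litter) / len(litter)
print(f"both copies disabled: {fully_edited:.1%}")  # expect about 25%
```

That quarter share is why establishing a line with both gene copies disabled takes a generation or two of selective breeding.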

The edited gene persisted through generations, meaning farmers who breed the pigs can expect it to last. The company’s experimental stations already house 435 gene-edited, PRRS-resistant pigs, a population that could rapidly expand to thousands.

To reach supermarkets, however, Genus has regulatory hoops to jump through.

So far, the FDA has approved meat from two genetically modified animals. One is the AquAdvantage salmon, which carries a gene from another fish species that makes it grow faster. The other is the GalSafe pig, which is less likely to trigger allergic responses.

The agency is also tentatively considering other gene-edited farm animals under investigational food use authorization. In 2022, it declared that CRISPR-edited beef cattle—which have shorter fur coats—don’t pose a risk “to people, animals, the food supply and the environment.” But getting full approval will be a multi-year process with a hefty price tag.

“We have to go through the full, complete review system at FDA. There are no shortcuts for us,” said Clint Nesbitt, who oversees regulatory affairs at the company. Meanwhile, Genus is also eyeing pork-loving Colombia and China as potential markets.

Once cleared, Genus hopes to widely distribute their pigs to the livestock industry. An easy way is to ship semen from gene-edited males to breed with natural females, which would produce PRRS-resistant piglets after a few generations—basically, selective breeding on the fast track.

In the end, consumers will have the final say. Genetically modified foods have historically been polarizing. But because CRISPRed pork mimics a gene mutation that could potentially occur naturally—even though it hasn’t been documented in the animals—the public may be more open to the new meat.

As the method heads towards approval, the team is considering a similar strategy for tackling other viral diseases in livestock, such as the flu (yes, pigs get it too).

“Applying CRISPR-Cas to eliminate a viral disease represents a major step toward improving animal health,” wrote the team.

Image Credit: Pascal Debrunner / Unsplash

Gravity Experiments on the Kitchen Table: Why a Tiny, Tiny Measurement May Be a Big Leap Forward for Physics

0

Just over a week ago, European physicists announced they had measured the strength of gravity on the smallest scale ever.

In a clever tabletop experiment, researchers at Leiden University in the Netherlands, the University of Southampton in the UK, and the Institute for Photonics and Nanotechnologies in Italy measured a force of around 30 attonewtons on a particle with just under half a milligram of mass. An attonewton is a billionth of a billionth of a newton, the standard unit of force.
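
To get a feel for how small that is, here is a back-of-envelope calculation, taking the mass as roughly 0.43 milligrams (an assumed figure, consistent with “just under half a milligram”):

```latex
F = 30\ \mathrm{aN} = 3\times10^{-17}\ \mathrm{N}, \qquad
m \approx 0.43\ \mathrm{mg} = 4.3\times10^{-7}\ \mathrm{kg}
\qquad\Rightarrow\qquad
\frac{mg}{F} \approx \frac{(4.3\times10^{-7})(9.8)}{3\times10^{-17}} \approx 1.4\times10^{11}
```

The measured gravitational tug is about eleven orders of magnitude weaker than the particle’s own weight, which is why shielding the particle from every other influence is the heart of the experiment.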

The researchers say the work could “unlock more secrets about the universe’s very fabric” and may be an important step toward the next big revolution in physics.

But why is that? It’s not just the result: it’s the method, and what it says about a path forward for a branch of science critics say may be trapped in a loop of rising costs and diminishing returns.

Gravity

From a physicist’s point of view, gravity is an extremely weak force. This might seem like an odd thing to say. It doesn’t feel weak when you’re trying to get out of bed in the morning!

Still, compared with the other forces that we know about—such as the electromagnetic force that is responsible for binding atoms together and for generating light, and the strong nuclear force that binds the cores of atoms—gravity exerts a relatively weak attraction between objects.

And on smaller scales, the effects of gravity get weaker and weaker.

It’s easy to see the effects of gravity for objects the size of a star or planet, but it is much harder to detect gravitational effects for small, light objects.

The Need to Test Gravity

Despite the difficulty, physicists really want to test gravity at small scales. This is because it could help resolve a century-old mystery in current physics.

Physics is dominated by two extremely successful theories.

The first is general relativity, which describes gravity and spacetime at large scales. The second is quantum mechanics, which is a theory of particles and fields—the basic building blocks of matter—at small scales.

These two theories are in some ways contradictory, and physicists don’t understand what happens in situations where both should apply. One goal of modern physics is to combine general relativity and quantum mechanics into a theory of “quantum gravity.”

One example of a situation where quantum gravity is needed is to fully understand black holes. These are predicted by general relativity—and we have observed huge ones in space—but tiny black holes may also arise at the quantum scale.

At present, however, we don’t know how to bring general relativity and quantum mechanics together to give an account of how gravity, and thus black holes, work in the quantum realm.

New Theories and New Data

A number of approaches to a potential theory of quantum gravity have been developed, including string theory, loop quantum gravity, and causal set theory.

However, these approaches are entirely theoretical. We currently don’t have any way to test them via experiments.

To empirically test these theories, we’d need a way to measure gravity at very small scales where quantum effects dominate.

Until recently, performing such tests was out of reach. It seemed we would need very large pieces of equipment: even bigger than the world’s largest particle accelerator, the Large Hadron Collider, which sends high-energy particles zooming around a 27-kilometer loop before smashing them together.

Tabletop Experiments

This is why the recent small-scale measurement of gravity is so important.

The experiment conducted jointly between the Netherlands and the UK is a “tabletop” experiment. It didn’t require massive machinery.

The experiment works by floating a particle in a magnetic field and then swinging a weight past it to see how it “wiggles” in response.

This is analogous to the way one planet “wiggles” when it swings past another.

By levitating the particle with magnets, it can be isolated from many of the influences that make detecting weak gravitational influences so hard.

The beauty of tabletop experiments like this is they don’t cost billions of dollars, which removes one of the main barriers to conducting small-scale gravity experiments, and potentially to making progress in physics. (The latest proposal for a bigger successor to the Large Hadron Collider would cost $17 billion.)

Work to Do

Tabletop experiments are very promising, but there is still work to do.

The recent experiment comes close to the quantum domain, but doesn’t quite get there. The masses and forces involved will need to be even smaller to find out how gravity acts at this scale.

We also need to be prepared for the possibility that it may not be possible to push tabletop experiments this far.

There may yet be some technological limitation that prevents us from conducting experiments of gravity at quantum scales, pushing us back toward building bigger colliders.

Back to the Theories

It’s also worth noting some of the theories of quantum gravity that might be tested using tabletop experiments are very radical.

Some theories, such as loop quantum gravity, suggest space and time may disappear at very small scales or high energies. If that’s right, it may not be possible to carry out experiments at these scales.

After all, experiments as we know them are the kinds of things that happen at a particular place, across a particular interval of time. If theories like this are correct, we may need to rethink the very nature of experimentation so we can make sense of it in situations where space and time are absent.

On the other hand, the very fact we can perform straightforward experiments involving gravity at small scales may suggest that space and time are present after all.

Which will prove true? The best way to find out is to keep going with tabletop experiments, and to push them as far as they can go.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Garik Barseghyan / Pixabay

This Week’s Awesome Tech Stories From Around the Web (Through March 2)

ARTIFICIAL INTELLIGENCE

Google DeepMind’s New Generative Model Makes Super Mario-Like Games From Scratch
Will Douglas Heaven | MIT Technology Review
“OpenAI’s recent reveal of its stunning generative model Sora pushed the envelope of what’s possible with text-to-video. Now Google DeepMind brings us text-to-video games. The new model, called Genie, can take a short description, a hand-drawn sketch, or a photo and turn it into a playable video game in the style of classic 2D platformers like Super Mario Bros.”

ROBOTICS

Figure Rides the Humanoid Robot Hype Wave to $2.6B Valuation
Brian Heater | TechCrunch
“[On Thursday] Figure confirmed long-standing rumors that it’s been raising more money than God. The Bay Area-based robotics firm announced a $675 million Series B round that values the startup at $2.6 billion post-money. The lineup of investors is equally impressive. It includes Microsoft, OpenAI Startup Fund, Nvidia, Amazon Industrial Innovation Fund, Jeff Bezos (through Bezos Expeditions), Parkway Venture Capital, Intel Capital, Align Ventures and ARK Invest. It’s a mind-boggling sum of money for what remains a still-young startup, with an 80-person headcount. That last bit will almost certainly change with this round.”

SCIENCE

How First Contact With Whale Civilization Could Unfold
Ross Andersen | The Atlantic
“One night last winter, over drinks in downtown Los Angeles, the biologist David Gruber told me that human beings might someday talk to sperm whales. …Gruber said that they hope to record billions of the animals’ clicking sounds with floating hydrophones, and then to decipher the sounds’ meaning using neural networks. I was immediately intrigued. For years, I had been toiling away on a book about the search for cosmic civilizations with whom we might communicate. This one was right here on Earth.”

TRANSPORTATION

RIP Apple Car. This Is Why It Died
Aarian Marshall | Wired
“After a decade of rumors, secretive developments, executive entrances and exits, and pivots, Apple reportedly told employees yesterday that its car project, internally called ‘Project Titan,’ is no more. …’Prototypes are easy, volume production is hard, positive cash flow is excruciating,’ Tesla CEO Elon Musk tweeted a few years back. It’s a lesson that would-be car companies—as well as Tesla—seem to learn again and again. Even after a decade of work, Apple never quite got to the first step.”

TECH

Apple Revolutionized the Auto Industry Without Selling a Single Car
Matteo Wong | The Atlantic
“Apple is so big, and its devices so pervasive, that it didn’t need to sell a single vehicle in order to transform the automobile industry—not through batteries and engines, but through software. The ability to link your smartphone to your car’s touch screen, which Apple pioneered 10 years ago, is now standard. Virtually every leading car company has taken an Apple-inspired approach to technology, to such a degree that ‘smartphone on wheels’ has become an industry cliché. The Apple Car already exists, and you’ve almost certainly ridden in one.”

CRYPTOCURRENCY

Bitcoin Surges Toward All-Time High as Everyone Forgets What Happened Last Time
Matt Novak | Gizmodo
“Bitcoin’s price surged past $63,000 and then receded just a bit under on Wednesday, reaching a level the crypto coin hasn’t seen since November 2021. While it still has a little way to climb to reach an all-time high of $68,000, that level feels comfortably within reach. And if you’re feeling uneasy about the rally, given what happened two years ago, you’re not alone.”

ROBOTICS

High-Speed Humanoid Feels Like a Step Change in Robotics
Loz Blain | New Atlas
“You’ve seen a ton of videos of humanoid robots—but this one feels different. It’s Sanctuary’s Phoenix bot, with ‘the world’s best robot hands,’ working totally autonomously at near-human speeds—much faster than Tesla’s or Figure’s robots.”

COMPUTING

The Mindblowing Experience of a Chatbot That Answers Instantly
Steven Levy | Wired
“Groq makes chips optimized to speed up the large language models that have captured our imaginations and stoked our fears in the past year. …The experience of using a chatbot that doesn’t need even a few seconds to generate a response is shocking. I typed in a straightforward request, as you do with LLMs these days: Write a musical about AI and dentistry. I had hardly stopped typing before my screen was filled with a detailed blueprint for the two-act Mysteries of the Mouth.”

SECURITY

Here Come the AI Worms
Matt Burgess | Wired
“In a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. ‘It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,’ says Ben Nassi, a Cornell Tech researcher behind the research.”

Image Credit: Diego PH / Unsplash

Has the Lunar Gold Rush Begun? Why the First Private Moon Landing Matters

0

People have long dreamed of a bustling space economy stretching across the solar system. That vision came a step closer last week after a private spacecraft landed on the moon for the first time.

Since the start of the space race in the second half of last century, exploring beyond Earth’s orbit has been the domain of national space agencies. While private companies like SpaceX have revolutionized the launch industry, their customers are almost exclusively satellite operators seeking to provide imaging and communications services back on Earth.

But in recent years, a growing number of companies have started looking further afield, encouraged by NASA. The US space agency is eager to foster a commercial space exploration industry to help it lower the cost of upcoming missions.

And now the program has started paying dividends: A NASA-funded mission from startup Intuitive Machines saw its Nova-C lander, named Odysseus, become the first privately developed spacecraft to successfully complete a soft landing on the moon’s surface.

“We’ve fundamentally changed the economics of landing on the moon,” CEO and cofounder Steve Altemus said at a news conference following the landing. “And we’ve kicked open the door for a robust, thriving cislunar economy in the future.”

Despite the momentous nature of the achievement, the touchdown wasn’t as smooth as the company may have hoped. Odysseus came in much faster than expected and missed its intended landing spot, which resulted in the spacecraft toppling over on one side. That meant some of its antennae ended up pointing at the ground, limiting the vehicle’s ability to communicate.

It turned out this was because engineers had forgotten to flick a safety switch before launch, leaving the spacecraft’s range-finding lasers disabled. As a result, the team had to jury-rig a new landing system relying on optical cameras while the mission was already underway. The company acknowledged to Reuters that a pre-flight check of the lasers would have caught the problem, but the check was skipped because it would have been time-consuming and costly.

In hindsight, that might seem like an easily avoidable hiccup, but this kind of cost-consciousness is exactly why NASA is backing smaller private firms. The mission received $118 million from the agency via its Commercial Lunar Payload Services (CLPS) program, which is paying various private space firms to ferry cargo to the moon for its upcoming, manned Artemis missions.

The Intuitive Machines mission cost around $200 million, significantly less than a NASA-led mission would have. But it’s not just bargain prices the agency is after; it also wants providers that can launch more quickly, plus the redundancy that comes from having multiple options.

Other companies involved include Astrobotic, which nearly clinched the title of first private company on the moon before propulsion problems scuppered its January mission, and Firefly Aerospace, which is due to launch its first cargo mission later this year.

NASA leaning on private companies to help complete its missions is nothing new. But both the agency and the companies themselves see this as something more than simple one-off launch contracts.

“The goal here is for us to investigate the moon in preparation for Artemis, and really to do business differently for NASA,” Sue Lederer, CLPS project scientist, said during a recent press conference, according to Space.com. “One of our main goals is to make sure that we develop a lunar economy.”

What that economy would look like is still unclear. Alongside NASA instruments, Odysseus was carrying six commercial payloads, including sculptures made by artist Jeff Koons, a “secure lunar repository” of humanity’s knowledge, and an insulating material called Omni-Heat Infinity made by Columbia Sportswear.

Writing for The Conversation, David Flannery, a planetary scientist at Queensland University of Technology in Australia, suggests that once the novelty wears off, more publicity-focused payloads may prove to be an unreliable source of income. Government contracts will likely make up the bulk of these companies’ revenue, but for a true lunar economy to get into gear, that won’t be enough.

Another possibility that’s often touted is mining for local resources. Candidates include water ice, which can be used to support astronauts or create hydrogen fuel for rockets, or helium-3, a material that can be used to create ultra-cold cryogenic refrigerators or potentially be used as fuel in putative future fusion reactors.

Whether that ever turns out to be practical remains to be seen, but Altemus says the rapid progress we’ve seen since the US declared the moon a strategic interest in 2018 makes him optimistic.

“Today, over a dozen companies are building landers,” he told the BBC. “In turn, we’ve seen an increase in payloads, science instruments, and engineering systems being built for the moon. We are seeing that economy start to catch up because the prospect of landing on the moon exists.”

Image Credit: NASA JPL

Gene Silencing Slashes Cholesterol in Mice—No Gene Edits Required

0

With just one shot, scientists have slashed cholesterol levels in mice. The treatment lasted for at least half their lives.

The shot may sound like gene editing, but it’s not. Instead, it relies on an up-and-coming method to control genetic activity—without directly changing DNA letters. Called “epigenetic editing,” the technology targets the molecular machinery that switches genes on or off.

Rather than rewriting genetic letters, which can cause unintended DNA swaps, epigenetic editing could potentially be safer as it leaves the cell’s original DNA sequences intact. Scientists have long eyed the method as an alternative to CRISPR-based editing to control genetic activity. But so far, it has only been proven to work in cells grown in petri dishes.

The new study, published this week in Nature, is a first proof of concept that the strategy also works inside the body. With just a single dose of the epigenetic editor infused into the bloodstream, the mice’s cholesterol levels rapidly dropped, and stayed low for nearly a year without notable side effects.

High cholesterol is a major risk factor for heart attacks, strokes, and blood vessel diseases. Millions of people rely on daily medication to keep its levels in check, often for years or even decades. A simple, long-lasting shot could be a potential life-changer.

“The advantage here is that it’s a one-and-done treatment, instead of taking pills every day,” study author Dr. Angelo Lombardo at the San Raffaele Scientific Institute told Nature.

Beyond cholesterol, the results showcase the potential of epigenetic editing as a powerful emerging tool to tackle a wide range of diseases, including cancer.

To Dr. Henriette O’Geen at the University of California, Davis, it’s “the beginning of an era of getting away from cutting DNA” but still silencing genes that cause disease, paving the way for a new family of cures.

Leveling Up

Gene editing is revolutionizing biomedical science, with CRISPR-Cas9 leading the charge. In the last few months, the United Kingdom and the US have both given the green light for a CRISPR-based gene editing therapy for sickle cell disease and beta thalassemia.

These therapies work by replacing a dysfunctional gene with a healthy version. While effective, this requires cutting through DNA strands, which could lead to unexpected snips elsewhere in the genome. Some have even dubbed CRISPR-Cas9 a type of “genomic vandalism.”

Editing the epigenome sidesteps these problems.

Literally meaning “above” the genome, epigenetics is the process by which cells control gene expression. It’s how cells form different identities—becoming, for example, brain, liver, or heart cells—during early development, even though all cells harbor the same genetic blueprint. Epigenetics also connects environmental factors—such as diet—with gene expression by flexibly controlling gene activity.

All this relies on myriad chemical “tags” that mark our genes. Each tag has a specific function. Methylation, for example, shuts a gene down. Like sticky notes, the tags can be easily added or removed with the help of designated proteins, without mutating DNA sequences, making them an intriguing way to manipulate gene expression.

Unfortunately, the epigenome’s flexibility could also be its downfall for designing a long-term treatment.

When cells divide, they hold onto all their DNA—including any edited changes. However, epigenetic tags are often wiped out, allowing new cells to start with a clean slate. It’s not so problematic in cells that normally don’t divide once mature—for example, neurons. But for cells that constantly renew, such as liver cells, any epigenetic edits could rapidly dwindle.

Researchers have long debated whether epigenetic editing is durable enough to work as a drug. The new study took the concern head on by targeting a gene highly expressed in the liver.

Teamwork

Meet PCSK9, a protein that regulates levels of low-density lipoprotein (LDL), or “bad cholesterol.” Its gene has long been in the crosshairs of both pharmaceutical and gene editing studies aiming to lower cholesterol, making it a perfect target for epigenetic control.

“It’s a well-known gene that needs to be shut off to decrease the level of cholesterol in the blood,” said Lombardo.

The end goal is to artificially methylate the gene and thus silence it. The team first turned to a family of designer molecules called zinc-finger proteins. Before the advent of CRISPR-based tools, these were a favorite for manipulating genetic activity.

Zinc-finger proteins can be designed to specifically home in on genetic sequences like a bloodhound. After screening many possibilities, the team found an efficient candidate that specifically targets PCSK9 in liver cells. They then linked this “carrier” to three protein fragments that collaborate to methylate DNA.

The fragments were inspired by a group of natural epigenetic editors that spring to life during early embryo development. Our genome is dotted with viral sequences, relics of past infections passed down through generations. Methylation silences this viral genetic “junk,” with effects often lasting an entire lifetime. In other words, nature has already come up with a long-lasting epigenetic editor, and the team tapped into its genius solution.

To deliver the editor, the researchers encoded the protein sequences into a single designer mRNA sequence—which the cells can use to produce new copies of the proteins, like in mRNA vaccines—and encapsulated it in a custom nanoparticle. Once injected into mice, the nanoparticles made their way into the liver and released their payloads. Liver cells rapidly adjusted to the new command and made the proteins that shut down PCSK9 expression.
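
As a rough sketch of that design in code (illustrative Python, not the study’s constructs; the specific effector names are an assumption based on commonly published methylation-editor designs):

```python
from dataclasses import dataclass

# Illustrative sketch of a one-shot epigenetic editor. The effector names
# below are assumptions drawn from common published designs, standing in
# for the paper's "three protein fragments."

@dataclass(frozen=True)
class EpigeneticEditor:
    dna_binder: str              # zinc-finger array homing in on PCSK9
    effectors: tuple[str, ...]   # fragments that cooperate to methylate DNA

editor = EpigeneticEditor(
    dna_binder="zinc-finger array targeting PCSK9",
    effectors=("DNMT3A fragment", "DNMT3L fragment", "KRAB repressor"),
)

# Delivery is "hit and run": the whole fusion is encoded as a single mRNA,
# wrapped in a lipid nanoparticle, and injected. Liver cells translate the
# editor only transiently, but the methyl marks -- and the silencing -- persist.
payload = {"cargo": f"mRNA encoding {editor.dna_binder} + effectors",
           "vehicle": "lipid nanoparticle", "route": "intravenous"}
print(payload["cargo"])
```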

In just two months, the mice’s PCSK9 protein levels dropped by 75 percent. The animals’ cholesterol also rapidly decreased and stayed low until the end of the study nearly a year later. The actual duration could be far longer.

Unlike gene editing, the strategy is hit-and-run, explained Lombardo. The epigenetic editors didn’t stay around inside the cell, but their therapeutic effects lingered.

As a stress test, the team performed a surgical procedure that spurs liver cells to divide, something that could potentially wipe out the edit. But the silencing lasted through multiple generations of cells, suggesting the edited cells form a heritable “memory” of sorts.

Whether these long-lasting results would translate to humans is unknown. We have far longer lifespans compared to mice and may require multiple shots. Specific aspects of the epigenetic editor also need to be reworked to better tailor them for human genes.

Meanwhile, other attempts at slashing high cholesterol levels using base editing—a type of gene editing—have already shown promise in a small clinical trial.

But the study adds to the burgeoning field of epigenetic editors. About a dozen startups are focusing on the strategy to develop therapies for a wide range of diseases, with one already in clinical trials to combat stubborn cancers.

The scientists believe it’s the first time anyone has shown that a one-shot approach can lead to long-lasting epigenetic effects in living animals. “It opens up the possibility of using the platform more broadly,” Lombardo said.

Image Credit: Google DeepMind / Unsplash

Amazon’s Billion-Dollar Investment Arm Targets Generative AI in Robotics

0

Last year, Amazon announced the next step for its growing robotic workforce. A new system, dubbed Sequoia, linked robots from across a warehouse into a single automated team that the company said significantly increased the efficiency of its operations.

The tech giant is now looking to fund a newer, smarter generation of robots. In an interview with The Financial Times, Amazon’s Franziska Bossart said the company’s billion-dollar industrial innovation fund will accelerate investments in startups combining AI and robotics.

“Generative AI holds a lot of promise for robotics and automation,” said Bossart, who heads up the fund. “[It’s an area] we are going to focus on this year.”

Generative Anything

Generative AI is, of course, still hot.

Google, Microsoft, Meta and others are battling for an early lead in the tech popularized by OpenAI’s ChatGPT. The algorithms are well-known for generating text, images, and video. But researchers believe their potential is greater. Anything with sufficiently large amounts of data is fair game. This could be the molecular structures of proteins—as we’ve seen—or the mechanical positioning data that helps robots complete real-world tasks.

Recent experiments combining generative AI and robots have already begun to yield some interesting results.

At its simplest, this has involved giving an existing robot a chatbot interface. Thanks to an internet’s worth of training data, the robot is now able to recognize nearby objects and understand nuanced commands. In a Boston Dynamics demo last year, one of the company’s robots became a tour guide thanks to ChatGPT. The bot could assume different personalities and make surprising connections it wasn’t explicitly coded for, like suggesting they consult the IT desk for a question it couldn’t answer.

Other potential applications in robotics include generating complex and varied simulations that teach robots how to move in the physical world. Similarly, generative algorithms might also make their way into the systems controlling a robot’s movement. Early examples include Dobb-E, a system that teaches home robots new tasks using video collected with an iPhone.

Of course, AI for images, text, and video has a clear advantage: Humanity has been stocking the internet with examples for years. Data for robots? Not so much. But that may not be the case much longer. Google and UC Berkeley’s RT-X project is assembling data from 32 robotics labs to build a GPT-4-like foundation model for robotics.

All this has begun to stir up interest from researchers and investors. And it seems Amazon, with its long track record developing and employing robots, is no exception.

Amazon End Effector

A billion dollars ain’t what it used to be. As of today, there are six technology companies valued at over a trillion dollars. AI startups are attracting investments in the billions. Indeed, Amazon has separately committed up to $4 billion to OpenAI competitor Anthropic.

Still, that Amazon plans to direct significant funds into AI and robotics startups is notable. For young companies, tens of millions of dollars can be make-or-break. This is especially true given the slowdown in venture capital investment across tech over the last year.

Amazon’s industrial innovation fund, announced in 2022, has already invested in robotics startups, including Agility Robotics. The company, whose Digit robots are being tested in Amazon warehouses, opened a factory to mass-produce the robots last year. It also released a video showing how it might sprinkle in some generative AI magic.

Though there’s no official number on how much cash the Amazon fund still has at the ready, a report in The Wall Street Journal last year suggests there’s a good bit of room to run.

Bossart didn’t mention companies of interest or what kinds of tasks robots using generative AI might accomplish for Amazon. She said the fund would go after startups that help Amazon’s broad goals of increasing efficiency, safety, and delivery speed. Investments will also include a focus on “last mile” deliveries. (Agility’s Digit robot made early headlines for its potential to deliver packages to doorsteps.)

Amazon isn’t alone in its efforts to combine AI and robotics. Google, OpenAI, and others are likewise investing in the area. But of the big tech companies, Amazon has the most obvious practical need for robotics in its operations, which may shape its investments and even provide a ready market for new products in its warehouses or delivery vans.

Even as AI chatbots and image- and video-generating algorithms continue to drive the flashiest headlines, it’s worth keeping an eye on AI in robotics too.

Image Credit: Agility

Could Shipwrecked Tardigrades Have Colonized the Moon?

0

Just over five years ago, on February 22, 2019, an unmanned space probe was placed in orbit around the moon. Named Beresheet and built by SpaceIL and Israel Aerospace Industries, it was intended to be the first private spacecraft to perform a soft landing on the moon. Among the probe’s payload were tardigrades, renowned for their ability to survive even the harshest conditions.

The mission ran into trouble from the start, with the failure of “star tracker” cameras intended to determine the spacecraft’s orientation and thus properly control its motors. Budgetary limitations had imposed a pared-down design, and while the command center was able to work around some problems, things got even trickier on April 11, the day of the landing.

On the way to the moon the spacecraft had been traveling at high speed, and it needed to slow way down to make a soft landing. Unfortunately, during the braking maneuver a gyroscope failed, blocking the primary engine. At an altitude of 150 meters, Beresheet was still moving at 500 kilometers per hour, far too fast to stop in time. The impact was violent: The probe shattered, and its remains were scattered over a distance of around a hundred meters. We know this because the site was photographed by NASA’s Lunar Reconnaissance Orbiter (LRO) satellite on April 22.
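
A quick kinematic check shows why the situation was unrecoverable; assuming, roughly, constant speed over the final stretch:

```latex
v = 500\ \mathrm{km/h} \approx 139\ \mathrm{m/s}, \qquad
t \approx \frac{150\ \mathrm{m}}{139\ \mathrm{m/s}} \approx 1.1\ \mathrm{s}
```

Roughly one second from 150 meters up left no time to restart the main engine and shed that much speed.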

Before and after images taken by NASA’s Lunar Reconnaissance Orbiter (LRO) of the Beresheet crash site. Image Credit: NASA/GSFC/Arizona State University

Animals That Can Withstand (Almost) Anything

So, what happened to the tardigrades that were traveling on the probe? Given their remarkable abilities to survive situations that would kill pretty much any other animal, could they have contaminated the moon? Worse, might they be able to reproduce and colonize it?

Tardigrades are microscopic animals less than a millimeter in length. All have neurons, a mouth opening at the end of a retractable proboscis, an intestine containing a microbiota, and four pairs of non-articulated legs ending in claws; most also have two eyes. As small as they are, they share a common ancestor with arthropods such as insects and arachnids.

Most tardigrades live in aquatic environments, but they can be found in any environment, even urban ones. Emmanuelle Delagoutte, a researcher at the French National Center for Scientific Research (CNRS), collects them in the mosses and lichens of the Jardin des Plantes in Paris. Tardigrades need to be surrounded by a film of water to be active: to feed on microalgae such as chlorella, and to move, grow, and reproduce. They reproduce sexually, or asexually via parthenogenesis (from an unfertilized egg) or even hermaphroditism, in which an individual possessing both male and female gametes self-fertilizes. Once the egg has hatched, the active life of a tardigrade lasts from 3 to 30 months. A total of 1,265 species have been described, including two fossils.

Tardigrades are famous for their resistance to conditions that exist neither on Earth nor on the moon. They can shut down their metabolism by losing up to 95 percent of their body water. Some species synthesize a sugar, trehalose, that acts as an antifreeze, while others synthesize proteins that are thought to incorporate cellular constituents into an amorphous “glassy” network that offers resistance and protection to each cell.

During dehydration, a tardigrade’s body can shrink to half its normal size. The legs disappear, with only the claws still visible. This state, known as cryptobiosis, persists until conditions for active life become favorable again.

Depending on the species, individuals need more or less time to dehydrate, and not all specimens of a given species manage to return to active life. Dehydrated adults survive for a few minutes at temperatures as low as -272°C or as high as 150°C and, over the long term, can withstand gamma ray doses of 1,000 or even 4,400 gray (Gy). By way of comparison, a dose of 10 Gy is fatal for humans, and 40,000 to 50,000 Gy sterilizes all types of material. However, whatever the dose, radiation kills tardigrade eggs. What’s more, the protection afforded by cryptobiosis is not always clear-cut: in Milnesium tardigradum, for instance, radiation affects active and dehydrated animals in the same way.

Image of the species Milnesium tardigradum in its active state. Image Credit: Schokraie E, Warnken U, Hotz-Wagenblatt A, Grohme MA, Hengherr S, et al. (2012), CC BY

Lunar Life?

So, what happened to the tardigrades after they crashed on the moon? Are any of them still viable, buried under the moon’s regolith, the dust that varies in depth from a few meters to several dozen meters?

First of all, they have to have survived the impact. Laboratory tests have shown that frozen specimens of the Hypsibius dujardini species traveling at 3,000 kilometers per hour in a vacuum were fatally damaged when they smashed into sand. However, they survived impacts of 2,600 kilometers per hour or less—and their “hard landing” on the moon, though unwanted, was far slower.

The moon’s surface is not protected from solar particles and cosmic rays, particularly gamma rays, but here too, the tardigrades would be able to resist. In fact, Robert Wimmer-Schweingruber, professor at the University of Kiel in Germany, and his team have shown that the doses of gamma rays hitting the lunar surface are permanent but low compared with the doses mentioned above—10 years’ exposure to gamma rays would correspond to a total dose of around 1 Gy.
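
Putting those figures together, a back-of-envelope estimate using the tolerances cited earlier:

```latex
\text{dose rate} \approx \frac{1\ \mathrm{Gy}}{10\ \mathrm{yr}} = 0.1\ \mathrm{Gy/yr}
\qquad\Rightarrow\qquad
t_{1000\ \mathrm{Gy}} \approx \frac{1000\ \mathrm{Gy}}{0.1\ \mathrm{Gy/yr}} = 10{,}000\ \mathrm{yr}
```

Even the lower of the doses dehydrated tardigrades have survived in the lab would take on the order of ten thousand years to accumulate on the lunar surface.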

Finally, the tardigrades would have to withstand a lack of water as well as temperatures ranging from -170 to -190°C during the lunar night and 100 to 120°C during the day. A lunar day or night lasts a long time, just under 15 Earth days. The probe itself wasn’t designed to withstand such extremes, and even if it hadn’t crashed, it would have ceased all activity after just a few Earth days.

Unfortunately for the tardigrades, they can’t overcome the lack of liquid water, oxygen, and microalgae—they would never be able to reactivate, much less reproduce. Their colonizing the moon is thus impossible. Still, inactive specimens are on lunar soil and their presence raises ethical questions, as Matthew Silk, an ecologist at the University of Edinburgh, points out. Moreover, at a time when space exploration is taking off in all directions, contaminating other planets could mean we would lose the opportunity to detect extraterrestrial life.

The author thanks Emmanuelle Delagoutte and Cédric Hubas of the Muséum de Paris, and Robert Wimmer-Schweingruber of the University of Kiel, for their critical reading of the text and their advice.

This article is republished from The Conversation under a Creative Commons license. Read the original article in English here, or as originally published in French here.

Image Credit: Schokraie E, Warnken U, Hotz-Wagenblatt A, Grohme MA, Hengherr S, et al. (2012), CC BY

This Week’s Awesome Tech Stories From Around the Web (Through February 24)

0

COMPUTING

Nvidia Hardware Is Eating the World
Lauren Goode | Wired
“Talking to Jensen Huang should come with a warning label. The Nvidia CEO is so invested in where AI is headed that, after nearly 90 minutes of spirited conversation, I came away convinced the future will be a neural net nirvana. I could see it all: a robot renaissance, medical godsends, self-driving cars, chatbots that remember.”

SPACE

The Odysseus Lunar Landing Brings NASA One Step Closer to Putting Boots on the Moon
Jeffrey Kluger | Time
“The networks made much of that 52-year gulf in cosmic history, but Odysseus was significant for two other, more substantive reasons: it marked the first time a spacecraft built by a private company, not by a governmental space program, had managed a lunar landing, and it was the first time any ship had visited a spot so far in the moon’s south, down in a region where ice is preserved in permanently shadowed craters.”

BIOTECH

First Gene-Edited Meat Will Come From Disease-Proof CRISPR Pigs
Michael Le Page | New Scientist
“Pigs that are immune to a disease estimated to cost farmers $2.7 billion a year globally look set to become the first genetically modified farm animals to be used for large-scale meat production. ‘We could very well be the first,’ says Clint Nesbitt of international breeding company Genus, which has created hundreds of the CRISPR-edited pigs in preparation for a commercial launch.”

TECH

Artificial Investment
Elizabeth Lopatto | The Verge
“The AI marketing hype, arguably kicked off by OpenAI’s ChatGPT, has reached a fever pitch: investors and executives have stratospheric expectations for the technology. But the higher the expectations, the easier it is to disappoint. The stage is set for 2024 to be a year of reckoning for AI, as business leaders home in on what AI can actually do right now.”

ENERGY

Scientists Claim AI Breakthrough to Generate Boundless Clean Fusion Energy
Mirjam Guesgen | Vice
“There are many stumbling blocks on the racetrack to nuclear fusion, the reaction at the core of the sun that combines atoms to make energy: Generating more energy than it takes to power the reactors, developing reactor-proof building materials, keeping the reactor free from impurities, and restraining that fuel within it, to name a few. Now, researchers from Princeton University and its Princeton Plasma Physics Laboratory have developed an AI model that could solve that last problem.”

ARTIFICIAL INTELLIGENCE

Google’s AI Boss Says Scale Only Gets You So Far
Will Knight | Wired
“‘My belief is, to get to AGI, you’re going to need probably several more innovations as well as the maximum scale,’ Google DeepMind CEO Demis Hassabis said. ‘There’s no let up in the scaling, we’re not seeing an asymptote or anything. There are still gains to be made. So my view is you’ve got to push the existing techniques to see how far they go, but you’re not going to get new capabilities like planning or tool use or agent-like behavior just by scaling existing techniques. It’s not magically going to happen.'”

COMPUTING

The Quest for a DNA Data Drive
Rob Carlson | IEEE Spectrum
“Data is piling up exponentially, and the rate of information production is increasing faster than the storage density of tape, which will only be able to keep up with the deluge of data for a few more years. …Fortunately, we have access to an information storage technology that is cheap, readily available, and stable at room temperature for millennia: DNA, the material of genes. In a few years your hard drive may be full of such squishy stuff.”

SECURITY

GPT-4 Developer Tool Can Hack Websites Without Human Help
Jeremy Hsu | New Scientist
“That suggests individuals or organizations without hacking expertise could unleash AI agents to carry out cyber attacks. ‘You literally don’t need to understand anything—you can just let the agent go hack the website by itself,’ says Daniel Kang at the University of Illinois Urbana-Champaign. ‘We think this really reduces the expertise needed to use these large language models in malicious ways.'”

TECH

It’s the End of the Web as We Know It
Christopher Mims | The Wall Street Journal
“For decades, seeking knowledge online has meant googling it and clicking on the links the search engine offered up. …But AI is changing all of that, and fast. A new generation of AI-powered ‘answer engines’ could make finding information easier, by simply giving us the answers to our questions rather than forcing us to wade through pages of links. Meanwhile, the web is filling up with AI-generated content of dubious quality. It’s polluting search results, and making traditional search less useful.”

ENERGY

Is This New 50-Year Battery for Real?
Rhett Allain | Wired
“Wouldn’t it be cool if you never had to charge your cell phone? I’m sure that’s what a lot of people were thinking recently, when a company called BetaVolt said it had developed a coin-sized ‘nuclear battery’ that would last for 50 years. Is it for real? Yes it is. Will you be able to buy one of these forever phones anytime soon? Probably not, unfortunately, because—well, physics. Let’s see why.”

Image Credit: Luke Stackpoole / Unsplash

Elon Musk Says First Neuralink Patient Can Move Computer Cursor With Mind

Neural interfaces could present an entirely new way for humans to connect with technology. Elon Musk says the first human user of his startup Neuralink’s brain implant can now move a mouse cursor using their mind alone.

While brain-machine interfaces have been around for decades, they have primarily been research tools that are far too complicated and cumbersome for everyday use. But in recent years, a number of startups have cropped up promising to develop more capable and convenient devices that could help treat a host of conditions.

Neuralink is one of the firms leading that charge. Last September, the company announced it had started recruiting for the first clinical trial of its device after receiving clearance from the US Food and Drug Administration earlier in the year. And in a discussion on his social media platform X last week, Musk announced the company’s first patient was already able to control a cursor roughly a month after implantation.

“Progress is good, patient seems to have made a full recovery…and is able to control the mouse, move the mouse around the screen just by thinking,” Musk said, according to CNN. “We’re trying to get as many button presses as possible from thinking, so that’s what we’re currently working on.”

Controlling a cursor with a brain implant is nothing new—an academic team achieved the same feat as far back as 2006. And competitor Synchron, which makes a BMI that is implanted through the brain’s blood vessels, has been running a trial since 2021 in which volunteers have been able to control computers and smartphones using their minds alone.

Musk’s announcement nonetheless represents rapid progress for a company that only unveiled its first prototype in 2019. And while the company’s technology works on similar principles to previous devices, it promises far higher precision and ease of use.

That’s because each chip features 1,024 electrodes split among 64 threads, each thinner than a human hair, that are inserted into the brain by a “sewing machine-like” robot. That is far more electrodes per unit volume than any previous BMI, which means the device should be capable of recording from many individual neurons at once.

And while most previous BMIs required patients be wired to bulky external computers, the company’s N1 implant is wireless and features a rechargeable battery. That makes it possible to record brain activity during everyday activities, greatly expanding the research potential and prospects for using it as a medical device.

Recording from individual neurons is a capability that has mainly been restricted to animal studies so far, Wael Asaad, a professor of neurosurgery and neuroscience at Brown University, told The Brown Daily Herald, so being able to do the same in humans would be a significant advance.

“For the most part, when we work with humans, we record from what are called local field potentials—which are larger scale recordings—and we’re not actually listening to individual neurons,” he said. “Higher resolution brain interfaces that are fully wireless and allow two-way communication with the brain are going to have a lot of potential uses.”

In the initial clinical trial, the device’s electrodes will be implanted in a brain region associated with motor control. But Musk has espoused much more ambitious goals for the technology, such as treating psychiatric disorders like depression, allowing people to control advanced prosthetic limbs, or even making it possible to eventually merge our minds with computers.

There’s probably a long way to go before that’s in the cards though, Justin Sanchez, from nonprofit research organization Battelle, told Wired. Decoding anything more complicated than basic motor signals or speech will likely require recording from many more neurons in different regions, most likely using multiple implants.

“There’s a huge gap between what is being done today in a very small subset of neurons versus understanding complex thoughts and more sophisticated cognitive kinds of things,” Sanchez said.

So, as impressive as the company’s progress has been so far, it’s likely to be some time before the technology is employed for anything other than a narrow set of medical applications, particularly given its invasiveness. That means most of us will be stuck with our touchscreens for the foreseeable future.

Image Credit: Neuralink

Like a Child, This Brain-Inspired AI Can Explain Its Reasoning

Children are natural scientists. They observe the world, form hypotheses, and test them out. Eventually, they learn to explain their (sometimes endearingly hilarious) reasoning.

AI, not so much. There’s no doubt that deep learning—a type of machine learning loosely based on the brain—is dramatically changing technology. From predicting extreme weather patterns to designing new medications or diagnosing deadly cancers, AI is increasingly being integrated at the frontiers of science.

But deep learning has a massive drawback: The algorithms can’t justify their answers. Often called the “black box” problem, this opacity stymies their use in high-risk situations, such as in medicine. Patients want an explanation when diagnosed with a life-changing disease. For now, deep learning-based algorithms—even if they have high diagnostic accuracy—can’t provide that information.

To open the black box, a team from the University of Texas Southwestern Medical Center tapped the human mind for inspiration. In a study in Nature Computational Science, they combined principles from the study of brain networks with a more traditional AI approach that relies on explainable building blocks.

The resulting AI acts a bit like a child. It condenses different types of information into “hubs.” Each hub is then transcribed into coding guidelines for humans to read—CliffsNotes for programmers that explain, in plain English, the algorithm’s conclusions about the patterns it found in the data. It can also generate fully executable programming code to try out.

Dubbed “deep distilling,” the AI works like a scientist when challenged with a variety of tasks, such as difficult math problems and image recognition. By rummaging through the data, the AI distills it into step-by-step algorithms that can outperform human-designed ones.

“Deep distilling is able to discover generalizable principles complementary to human expertise,” wrote the team in their paper.

Paper Thin

AI sometimes blunders in the real world. Take robotaxis. Last year, some repeatedly got stuck in a San Francisco neighborhood—a nuisance to locals, even if it drew a few chuckles. More seriously, self-driving vehicles have blocked traffic and ambulances and, in one case, severely injured a pedestrian.

In healthcare and scientific research, the dangers can be high too.

When it comes to these high-risk domains, algorithms “require a low tolerance for error,” the American University of Beirut’s Dr. Joseph Bakarji, who was not involved in the study, wrote in a companion piece about the work.

The barrier for most deep learning algorithms is their inexplicability. They’re structured as multi-layered networks. By taking in tons of raw information and receiving countless rounds of feedback, the network adjusts its connections to eventually produce accurate answers.

This process is at the heart of deep learning. But it struggles when there isn’t enough data or if the task is too complex.

Back in 2021, the team developed an AI that took a different approach, based on “symbolic” reasoning: the neural network encodes explicit rules and experiences by observing the data.

Compared to deep learning, symbolic models are easier for people to interpret. Think of the AI as a set of Lego blocks, each representing an object or concept. They can fit together in creative ways, but the connections follow a clear set of rules.

By itself, the AI is powerful but brittle. It heavily relies on previous knowledge to find building blocks. When challenged with a new situation without prior experience, it can’t think out of the box—and it breaks.

Here’s where neuroscience comes in. The team was inspired by connectomes, which are models of how different brain regions work together. By meshing this connectivity with symbolic reasoning, they made an AI that has solid, explainable foundations, but can also flexibly adapt when faced with new problems.

In several tests, the “neurocognitive” model beat other deep neural networks on tasks that required reasoning.

But can it make sense of data and engineer algorithms to explain it?

A Human Touch

One of the hardest parts of scientific discovery is observing noisy data and distilling a conclusion. This process is what leads to new materials and medications, deeper understanding of biology, and insights about our physical world. Often, it’s a repetitive process that takes years.

AI may be able to speed things up and potentially find patterns that have escaped the human mind. For example, deep learning has been especially useful in the prediction of protein structures, but its reasoning for predicting those structures is tricky to understand.

“Can we design learning algorithms that distill observations into simple, comprehensive rules as humans typically do?” wrote Bakarji.

The new study took the team’s existing neurocognitive model and gave it an additional talent: The ability to write code.

Called deep distilling, the AI groups similar concepts together, with each artificial neuron encoding a specific concept and its connection to others. For example, one neuron might learn the concept of a cat and know it’s different than a dog. Another type handles variability when challenged with a new picture—say, a tiger—to determine if it’s more like a cat or a dog.

These artificial neurons are then stacked into a hierarchy. With each layer, the system increasingly differentiates concepts and eventually finds a solution.

Instead of having the AI crunch as much data as possible, the training is step-by-step—almost like teaching a toddler. This makes it possible to evaluate the AI’s reasoning as it gradually solves new problems.
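
Deep distilling itself is far more elaborate, but the basic contrast with black-box networks can be illustrated with a deliberately simple stand-in: a model whose learned logic prints out as rules a person can audit. The sketch below uses scikit-learn’s decision tree and its `export_text` helper; it is an analogy for “readable reasoning,” not the paper’s method.

```python
# A toy illustration of "explainable by construction" learning. A decision
# tree stands in for deep distilling here (the paper's method is far more
# capable); the point is that the fitted logic reads out as plain rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=2).fit(data.data, data.target)

# Unlike a deep network's weight matrices, the trained model transcribes
# directly into step-by-step guidelines a human can read and check.
print(export_text(model, feature_names=list(data.feature_names)))
```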

Compared to standard neural network training, the self-explanatory aspect is built into the AI, explained Bakarji.

In a test, the team challenged the AI with a classic computer simulation—Conway’s Game of Life. First developed in 1970, the game grows digital cells into various patterns according to a simple set of rules (try it yourself here). Trained on simulated game-play data, the AI was able to predict potential outcomes and transform its reasoning into human-readable guidelines or computer programming code.
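
For readers who want to see exactly what the AI had to learn, here is a minimal Python sketch of the game itself (not the team’s model): every cell counts its eight neighbors each step, survives with two or three, and is born with exactly three.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One update of Conway's Game of Life on a 2D array of 0s and 1s."""
    # Count each cell's eight neighbors by summing shifted copies of the
    # grid (edges wrap around, which keeps the example short).
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Survival with 2 or 3 neighbors; birth with exactly 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A "glider," the classic pattern that crawls diagonally across the grid.
grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):  # after 4 steps the glider has shifted one cell diagonally
    grid = life_step(grid)
print(grid)
```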

The AI also worked well in a variety of other tasks, such as detecting lines in images and solving difficult math problems. In some cases, it generated creative computer code that outperformed established methods—and was able to explain why.

Deep distilling could be a boost for physical and biological sciences, where simple parts give rise to extremely complex systems. One potential application for the method is as a co-scientist for researchers decoding DNA functions. Much of our DNA is “dark matter,” in that we don’t know what—if any—role it has. An explainable AI could potentially crunch genetic sequences and help geneticists identify rare mutations that cause devastating inherited diseases.

Outside of research, the team is excited at the prospect of stronger AI-human collaboration.

“Neurosymbolic approaches could potentially allow for more human-like machine learning capabilities,” wrote the team.

Bakarji agrees. The new study goes “beyond technical advancements, touching on ethical and societal challenges we are facing today.” Explainability could work as a guardrail, helping AI systems sync with human values as they’re trained. For high-risk applications, such as medical care, it could build trust.

For now, the algorithm works best when solving problems that can be broken down into concepts. It can’t deal with continuous data, such as video streams.

That’s the next step in deep distilling, wrote Bakarji. It “would open new possibilities in scientific computing and theoretical research.”

Image Credit: 7AV 7AV / Unsplash 

Google Just Released Two Open AI Models That Can Run on Laptops

Last year, Google united its AI units in Google DeepMind and said it planned to speed up product development in an effort to catch up to the likes of Microsoft and OpenAI. The stream of releases in the last few weeks follows through on that promise.

Two weeks ago, Google announced the release of its most powerful AI to date, Gemini Ultra, and reorganized its AI offerings, including its Bard chatbot, under the Gemini brand. A week later, it introduced Gemini Pro 1.5, an updated Pro model that largely matches Gemini Ultra’s performance and also includes an enormous context window—the amount of data you can prompt it with—for text, images, and audio.

Today, the company announced two new models. Going by the name Gemma, the models are much smaller than Gemini Ultra, weighing in at 2 and 7 billion parameters respectively. Google said the models are strictly text-based—as opposed to multimodal models that are trained on a variety of data, including text, images, and audio—outperform similarly sized models, and can be run on a laptop, desktop, or in the cloud. Before training, Google stripped datasets of sensitive data like personal information. They also fine-tuned and stress-tested the trained models pre-release to minimize unwanted behavior.

The models were built and trained with the same technology used in Gemini, Google said, but unlike Gemini, they’re being released under an open license.

That doesn’t mean they’re open-source. Rather, the company is making the model weights available so developers can customize and fine-tune them. They’re also releasing developer tools to help keep applications safe and make them compatible with major AI frameworks and platforms. Google says the models can be employed for responsible commercial usage and distribution—as defined in the terms of use—for organizations of any size.
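
In practice, “open weights” means a developer can download the model and run it locally. If the weights are published through the usual channels, the workflow might look like this sketch using the Hugging Face Transformers library; treat the `google/gemma-2b` model ID as an assumption based on the announcement rather than a confirmed detail.

```python
# A minimal sketch of running an open-weights model on a local machine with
# the Hugging Face Transformers library. The model ID below is an assumption;
# access may also require accepting Google's terms of use first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # the smaller, laptop-friendly variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Open-weights models are useful because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```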

If Gemini is aimed at OpenAI and Microsoft, Gemma likely has Meta in mind. Meta is championing a more open model for AI releases, most notably for its Llama 2 large language model. Though sometimes confused for an open-source model, Meta has not released the dataset or code used to train Llama 2. Other more open models, like the Allen Institute for AI’s (AI2) recent OLMo models, do include training data and code. Google’s Gemma release is more akin to Llama 2 than OLMo.

“[Open models have] become pretty pervasive now in the industry,” Google’s Jeanine Banks said in a press briefing. “And it often refers to open weights models, where there is wide access for developers and researchers to customize and fine-tune models but, at the same time, the terms of use—things like redistribution, as well as ownership of those variants that are developed—vary based on the model’s own specific terms of use. And so we see some difference between what we would traditionally refer to as open source and we decided that it made the most sense to refer to our Gemma models as open models.”

Still, Llama 2 has been influential in the developer community, and open models from the likes of French startup Mistral and others are pushing performance toward state-of-the-art closed models, like OpenAI’s GPT-4. Open models may make more sense in enterprise contexts, where developers can better customize them. They’re also invaluable for AI researchers working on a budget. Google wants to support such research with Google Cloud credits. Researchers can apply for up to $500,000 in credits toward larger projects.

Just how open AI should be is still a matter of debate in the industry.

Proponents of a more open ecosystem believe the benefits outweigh the risks. An open community, they say, can not only innovate at scale, but also better understand, reveal, and solve problems as they emerge. OpenAI and others have argued for a more closed approach, contending the more powerful the model, the more dangerous it could be out in the wild. A middle road might allow an open AI ecosystem but more tightly regulate it.

What’s clear is both closed and open AI are moving at a quick pace. We can expect more innovation from big companies and open communities as the year progresses.

Image Credit: Google

This Week’s Awesome Tech Stories From Around the Web (Through February 17)

ARTIFICIAL INTELLIGENCE

OpenAI Teases an Amazing New Generative Video Model Called Sora
Will Douglas Heaven | MIT Technology Review
“OpenAI has built a striking new generative video model called Sora that can take a short text description and turn it into a detailed, high-definition film clip up to a minute long. …The sample videos from OpenAI’s Sora are high-definition and full of detail. OpenAI also says it can generate videos up to a minute long. One video of a Tokyo street scene shows that Sora has learned how objects fit together in 3D: the camera swoops into the scene to follow a couple as they walk past a row of shops.”

ARTIFICIAL INTELLIGENCE

Google’s Flagship AI Model Gets a Mighty Fast Upgrade
Will Knight | Wired
“Google says Gemini Pro 1.5 can ingest and make sense of an hour of video, 11 hours of audio, 700,000 words, or 30,000 lines of code at once—several times more than other AI models, including OpenAI’s GPT-4, which powers ChatGPT. …Gemini Pro 1.5 is also more capable—at least for its size—as measured by the model’s score on several popular benchmarks. The new model exploits a technique previously invented by Google researchers to squeeze out more performance without requiring more computing power.”

ROBOTICS

Surgery in Space: Tiny Remotely Operated Robot Completes First Simulated Procedure at the Space Station
Taylor Nicioli and Kristin Fisher | CNN
“The robot, known as spaceMIRA—which stands for Miniaturized In Vivo Robotic Assistant—performed several operations on simulated tissue at the orbiting laboratory while remotely operated by surgeons from approximately 250 miles (400 kilometers) below in Lincoln, Nebraska. The milestone is a step forward in developing technology that could have implications not just for successful long-term human space travel, where surgical emergencies could happen, but also for establishing access to medical care in remote areas on Earth.”

VIRTUAL REALITY

Our Unbiased Take on Mark Zuckerberg’s Biased Apple Vision Pro Review
Kyle Orland | Ars Technica
“Zuckerberg’s Instagram-posted thoughts on the Vision Pro can’t be considered an impartial take on the device’s pros and cons. Still, Zuckerberg’s short review included its fair share of fair points, alongside some careful turns of phrase that obscure the Quest’s relative deficiencies. To figure out which is which, we thought we’d consider each of the points made by Zuckerberg in his review. In doing so, we get a good viewpoint on the very different angles from which Meta and Apple are approaching mixed-reality headset design.”

FUTURE

Things Get Strange When AI Starts Training Itself
Matteo Wong | The Atlantic
“Over the past few months, Google DeepMind, Microsoft, Amazon, Meta, Apple, OpenAI, and various academic labs have all published research that uses an AI model to improve another AI model, or even itself, in many cases leading to notable improvements. Numerous tech executives have heralded this approach as the technology’s future.”

BIOTECH

Single-Dose Gene Therapy May Stop Deadly Brain Disorders in Their Tracks
Paul McClure | New Atlas
“Researchers have developed a single-dose genetic therapy that can clear protein blockages that cause motor neurone disease, also called amyotrophic lateral sclerosis, and frontotemporal dementia, two incurable neurodegenerative diseases that eventually lead to death. …The researchers found that, in mice, a single dose of CTx1000 targeted only the ‘bad’ [version of the protein] TDP-43, leaving the healthy version of it alone. Not only was it safe, it was effective even when symptoms were present at the time of treatment.”

SCIENCE FICTION

Spike Jonze’s Her Holds Up a Decade Later
Sheon Han | The Verge
“Spike Jonze’s sci-fi love story is still a better depiction of AI than many of its contemporaries. …Upon rewatching it, I noticed that this pre-AlphaGo film holds up beautifully and still offers a wealth of insight. It also doesn’t shy away from the murky and inevitably complicated feelings we’ll have toward AI, and Jonze first expressed those over a decade ago.”

TECH

OpenAI Wants to Eat Google Search’s Lunch
Maxwell Zeff | Gizmodo
“OpenAI is reportedly developing a search app that would directly compete with Google Search, according to The Information on Wednesday. The AI search engine could be a new feature for ChatGPT, or a potentially separate app altogether. Microsoft Bing would allegedly power the service from Sam Altman, which could be the most serious threat Google Search has ever faced.”

SPACE

Here’s What a Solar Eclipse Looks Like on Mars
Isaac Schultz | Gizmodo
“Typically, the Perseverance rover is looking down, scouring the Martian terrain for rocks that may reveal aspects of the planet’s ancient past. But over the last several weeks, the intrepid robot looked up and caught two remarkable views: solar eclipses on the Red Planet, as the moons Phobos and Deimos passed in front of the sun.”

Image Credit: Neeqolah Creative Works / Unsplash

Why the New York Times’ AI Copyright Lawsuit Will Be Tricky to Defend

The New York Times’ (NYT) legal proceedings against OpenAI and Microsoft have opened a new frontier in the ongoing legal challenges brought on by the use of copyrighted data to “train” or improve generative AI.

There are already a variety of lawsuits against AI companies, including one brought by Getty Images against Stability AI, which makes the Stable Diffusion online text-to-image generator. Authors George R.R. Martin and John Grisham have also brought legal cases against ChatGPT owner OpenAI over copyright claims. But the NYT case is not “more of the same” because it throws interesting new arguments into the mix.

The legal action focuses on the value of the training data and a new question relating to reputational damage. It is a potent mix of trademark and copyright claims, one that may test the fair use defenses typically relied upon.

It will, no doubt, be watched closely by media organizations looking to challenge the usual “let’s ask for forgiveness, not permission” approach to training data. Training data is used to improve the performance of AI systems and generally consists of real-world information, often drawn from the internet.

The lawsuit also presents a novel argument—not advanced by other, similar cases—that’s related to something called “hallucinations,” where AI systems generate false or misleading information but present it as fact. This argument could in fact be one of the most potent in the case.

The NYT case in particular raises three interesting takes on the usual approach. First, that due to its reputation for trustworthy news and information, NYT content has enhanced value and desirability as training data for use in AI.

Second, that due to the NYT’s paywall, the reproduction of articles on request is commercially damaging. Third, that ChatGPT hallucinations are causing reputational damage to the New York Times through, effectively, false attribution.

This is not just another generative AI copyright dispute. The first argument presented by the NYT is that the training data used by OpenAI is protected by copyright, and so they claim the training phase of ChatGPT infringed copyright. We have seen this type of argument run before in other disputes.

Fair Use?

The challenge for this type of attack is the fair-use shield. In the US, fair use is a doctrine in law that permits the use of copyrighted material under certain circumstances, such as in news reporting, academic work, and commentary.

OpenAI’s response so far has been very cautious, but a key tenet in a statement released by the company is that their use of online data does indeed fall under the principle of “fair use.”

Anticipating some of the difficulties such a fair-use defense could cause, the NYT has adopted a slightly different angle. In particular, it seeks to differentiate its data from standard data, pointing to what it claims is the accuracy, trustworthiness, and prestige of its reporting. It claims this creates a particularly desirable dataset.

It argues that as a reputable and trusted source, its articles have additional weight and reliability in training generative AI and are part of a data subset that is given additional weighting in that training.

It argues that by largely reproducing articles upon prompting, ChatGPT is able to deny the NYT, which is paywalled, visitors and revenue it would otherwise receive. This introduction of some aspect of commercial competition and commercial advantage seems intended to head off the usual fair-use defense common to these claims.

It will be interesting to see whether the assertion of special weighting in the training data has an impact. If it does, it sets a path for other media organizations to challenge the use of their reporting in the training data without permission.

The final element of the NYT’s claim presents a novel angle to the challenge. It suggests that damage is being done to the NYT brand through the material that ChatGPT produces. While almost presented as an afterthought in the complaint, it may yet be the claim that causes OpenAI the most difficulty.

This is the argument related to AI hallucinations. The NYT argues the harm is compounded because ChatGPT presents the fabricated information as having come from the NYT.

The newspaper further suggests that consumers may act based on the summary given by ChatGPT, thinking the information comes from the NYT and is to be trusted. The reputational damage is caused because the newspaper has no control over what ChatGPT produces.

This is an interesting challenge to conclude with. Hallucination is a recognized issue with AI-generated responses, and the NYT is arguing that the resulting reputational harm may not be easy to rectify.

The NYT claim opens a number of lines of novel attack which move the focus from copyright on to how the copyrighted data is presented to users by ChatGPT and the value of that data to the newspaper. This is much trickier for OpenAI to defend.

This case will be watched closely by other media publishers, especially those behind paywalls, and with particular regard to how it interacts with the usual fair-use defense.

If the NYT dataset is recognized as having the “enhanced value” it claims to, it may pave the way for monetization of that dataset in training AI rather than the “forgiveness, not permission” approach prevalent today.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: AbsolutVision / Unsplash 

Scientists Say New Hybrid Beef Rice Could Cost Just a Dollar per Pound

Here’s a type of fusion food you don’t see every day: fluffy, steamed grains of rice, chock-full of beef cells.

It sounds like something out of Frankenstein. But the hybrid plant-animal concoction didn’t require any genetic engineering—just a hefty dose of creativity. Devised by Korean scientists, the avant-garde grains are like lab-grown meat with a dose of carbohydrates.

The hybrid rice includes grains grown with beef muscle cells and fatty tissue. Steamed together, the resulting bowl has a light pink hue and notes of cream, butter, coconut oil, and a rich beefy umami.

The rice also packs a nutritional punch, with more carbohydrates, protein, and fat than normal rice. It’s like eating rice with a small bite of beef brisket. Compared to lab-grown meat, the hybrid rice is relatively easy to grow, taking less than a week to make a small batch.

It is also surprisingly affordable. One analysis showed that, at full production, hybrid rice would cost roughly a dollar per pound. All ingredients are edible and meet food safety guidelines in Korea.

Rice is a staple food in much of the world. Protein, however, isn’t. Hybrid rice could supply a dose of much-needed protein without raising more livestock.

“Imagine obtaining all the nutrients we need from cell-cultured protein rice,” said study author Sohyeon Park at Yonsei University in a press release.

The study is the latest entry into a burgeoning field of “future foods”—with lab-grown meat being a headliner—that seek to cut down carbon dioxide emissions while meeting soaring global demand for nutritious food.

“There has been a surge of interest over the past five years in developing alternatives to conventional meat with lower environmental impacts,” said Dr. Neil Ward, an agri-food and climate specialist at the University of East Anglia who was not involved in the study. “This line of research holds promise for the development of healthier and more climate-friendly diets in future.”

Future Food

Many of us share a love for a juicy steak or a glistening burger.

But raising livestock puts enormous pressure on the environment. Their digestion and manure produce significant greenhouse gas emissions, contributing to climate change. They consume copious amounts of resources and land. With standards of living rising across many countries and an ever-increasing global population, demand for protein is rapidly growing.

How can we balance the need to feed a growing world with long-term sustainability? Here’s where “future foods” come in. Scientists have been cooking up all sorts of new-age recipes. Algae, cricket-derived proteins, and 3D-printed food are heading to a futuristic cookbook near you. Lab-grown chicken has already graced menus in upscale restaurants in Washington DC and San Francisco. Meat grown inside soy beans and other nuts has been approved in Singapore.

The problem with nut-based scaffolds, explained the team in their paper, is that they can trigger allergies. Rice, in contrast, has very few allergens. The grain grows rapidly and is a culinary staple for much of the world. While often viewed as a carbohydrate, rice also contains fats, proteins, and minerals such as calcium and magnesium.

“Rice already has a high nutrient level,” said Park. But better yet, it has a structure that can accommodate other cells—including those from animals.

Rice, Rice, Baby

The structure of a single grain of rice is like an urban highway system inside a dome. “Roads” crisscross the grain, intersecting at points but also leaving an abundance of empty space.

This structure provides lots of surface area and room for beef cells to grow, wrote the team. Like a 3D scaffold, the “roads” nudge cells in a certain direction, eventually populating most of the rice grain.

Animal cells and rice proteins don’t normally mix well. To get beef cells to stick to the rice scaffold, the team added a layer of glue made of fish gelatin, a neutral-tasting ingredient commonly used as a thickener in many Asian cuisines. The coating linked starchy molecules inside the rice grains to the beef cells and melted away when the grains were steamed.

The study used muscle and fat cells. For seven days, the cells rested at the bottom of the rice, mingling with the grains. They thrived, growing twice as fast as they would in a petri dish.

“I didn’t expect the cells to grow so well in the rice,” said Park in the press release.

Rice can rapidly go soft and mushy in liquid. But the fish gelatin coating withstood the nutrient bath and supported the rice’s internal scaffolds, allowing the beef cells—either muscle or fat—to grow.

Beefy Rice

Future foods need to be tasty to catch on. This includes texture.

Like variations of pasta, different types of rice have a different bite. The hybrid rice expanded after cooking, but with more chew. When boiled or steamed, it was a bit harder and more brittle than normal rice, but with a nutty, slightly sweet and savory taste.

Compared to normal supermarket rice, the hybrid rice packed a nutritious punch. Its carbohydrate, protein, and fat levels all increased, with protein getting the biggest boost.

Eating 100 grams (3.5 ounces) of the hybrid rice is like eating the same amount of plain rice with a bite of lean beef, the authors wrote in the paper.

For all future foods, cost is the elephant in the room. The team did their homework. Their hybrid rice could have a production cycle of just three months, perhaps even shorter with optimized growing procedures. It’s also cost-effective. Rice is far more affordable than beef, and if commercialized, they estimate the price could be around a dollar a pound.

Although the scientists used beef cells in this study, a similar strategy could be used to grow chicken, shrimp, or other proteins inside rice.

Future foods offer a path toward sustainability (although some researchers have questioned the climate impact of lab-grown meat). The new study suggests engineered food can reduce the environmental impact of raising livestock. Even with lab procedures, the carbon footprint of growing hybrid rice is a fraction of that of livestock farming.

While beef-scented rice may not be for everyone, the team is already envisioning “microbeef sushi” using the beef-rice hybrid or producing the grain as a “complete meal.” Because the ingredients are food safe, hybrid rice may easily navigate food regulations on its way to a supermarket near you.

“Now I see a world of possibilities for this grain-based hybrid food. It could one day serve as food relief for famine, military ration, or even space food,” said Park.

Image Credit: Dr. Jinkee Hong / Yonsei University

These Glow-in-the-Dark Flowers Will Make Your Garden Look Like Avatar

The sci-fi dream that gardens and parks would one day glow like Pandora, the alien moon in Avatar, is decades old. Early attempts to splice genes into plants to make them glow date back to the 1980s, but the resulting plants emitted little light and required special food.

Then in 2020, scientists made a breakthrough. Adding genes from luminous mushrooms yielded brightly glowing specimens that needed no special care. The team has refined the approach—writing last month they’ve increased their plants’ luminescence as much as 100-fold—and spun out a startup called Light Bio to sell them.

Light Bio received USDA approval in September and this month announced the first continuously glowing plant, named the firefly petunia, is officially available for purchase in the US. The petunias look and grow like their ordinary cousins—green leaves, white flowers—but after sunset, they glow a gentle green. The company is selling the plants for $29 on its website and says a crop of 50,000 will ship in April.

“This is an incredible achievement for synthetic biology. Light Bio is bringing us leaps and bounds closer to our solarpunk dream of living in Avatar’s Pandora,” Jason Kelly, CEO and co-founder of Ginkgo Bioworks, a Light Bio partner, said in a statement.

Glow Up

In synthetic biology, glowing plants and animals have been a staple for years. Scientists will often insert a gene to make an organism glow as visual proof that some intended biological process has taken effect. Keith Wood, Light Bio cofounder and CEO, was a pioneer of the approach in plants. In 1986, he gave tobacco plants the firefly gene for luciferase, the enzyme behind the bugs’ signature glow. Those plants glowed weakly, and only when given special plant food containing luciferin, the fuel for the light-making reaction. Later work tried genes from bioluminescent bacteria instead, but the plants were similarly dim.

Then in 2020, a team including Light Bio cofounders Karen Sarkisyan and Ilia Yampolsky turned to the luminous mushroom, Neonothopanus nambi. The mushroom runs a chemical reaction involving caffeic acid—a molecule also commonly found in plants—to produce luciferin and light. The scientists spliced the associated genes into tobacco plants and found the plants glowed too, no extra ingredients needed.

They later tried the genes in petunias, found the effect was even more pronounced, and began refining their work. In a paper published in Nature Methods in January, the team added genes from other mushrooms and employed directed evolution to further enhance the luminescence. After experimentation with a few collections of genes, they landed on a combination that worked in multiple species and significantly upped the brightness.

From here, they hope to further increase the luminescence by as much as 10-fold, add different colors to the lineup, and expand their work into different plant varieties.

Lab to Living Room

The plants are a scientific achievement, but the creation and approval of a commercial product is also noteworthy. Prior attempts to offer people glowing plants, including a popular 2013 Kickstarter, failed to materialize.

Last fall, the USDA gave Light Bio the go-ahead to sell their firefly petunias to the general public. The approval concluded the plants as described didn’t pose new risks to agriculture compared to naturally occurring petunias.

Jennifer Kuzma, codirector of the Genetic Engineering and Society Center at North Carolina State University, told Wired last year she would have liked the USDA to do a more thorough review. But scientists recently contacted by Nature did not voice major concerns. The plants are largely grown indoors or in gardens and aren’t considered invasive, lowering the risk the new genes would make their way into other species. Though, as Kuzma noted, that risk may depend on how many are grown and where they take root.

Beyond household appeal, the system at work here could also find its way into agricultural applications. Diego Orzáez, a plant biologist in Spain, is extending the luciferase system to other plants. He envisions such plants beginning to glow only when they’re in trouble, allowing farmers to take quick visual stock of crop health with drones or satellites.

Other new genetically modified plants are headed our way soon too. As of this month, gardeners can buy seeds for bioengineered purple tomatoes high in antioxidants. Another startup is developing a genetically engineered houseplant to filter harmful chemicals from the air. And Pairwise is using CRISPR to make softer kale, seedless berries, and pitless cherries.

“People’s reactions to genetically modified plants are complicated,” Steven Burgess, a plant biologist at the University of Illinois Urbana–Champaign, told Nature. That’s due, in part, to the association with controversial corporations and worry about what we put in our bodies. The new glow-in-the-dark petunias are neither the product of a big company—indeed, Sarkisyan said Light Bio doesn’t plan to be overly combative when it comes to people sharing plant cuttings—nor are they food. But they are compelling.

“They invite people to experience biotechnology from a position of wonder,” Drew Endy told Wired. Apart from conjuring popular sci-fi, perhaps such examples can introduce a wider audience to the possibilities and risks of synthetic biology, kickstart thoughtful conversations, and help people decide for themselves where to draw lines.

Image Credit: Light Bio

AI Is Everywhere—Including Countless Applications You’ve Likely Never Heard Of

Artificial intelligence is seemingly everywhere. Right now, generative AI in particular—tools like Midjourney, ChatGPT, Gemini (previously Bard), and others—is at the peak of hype.

But as an academic discipline, AI has been around for much longer than just the last couple of years. When it comes to real-world applications, many have stayed hidden or relatively unknown. These AI tools are much less glossy than fantasy-image generators—yet they are also ubiquitous.

As various AI technologies continue to progress, we’ll only see an increase in AI use across industries. This includes healthcare and consumer tech, but also more concerning domains, such as warfare. Here’s a rundown of some of the wide-ranging AI applications you may be less familiar with.

AI in Healthcare

Various AI systems are already being used in the health field, both to improve patient outcomes and to advance health research.

One of the strengths of computer programs powered by artificial intelligence is their ability to sift through and analyze truly enormous data sets in a fraction of the time it would take a human—or even a team of humans—to accomplish.

For example, AI is helping researchers comb through vast genetic data libraries. By analyzing large data sets, geneticists can home in on genes that could contribute to various diseases, which in turn will help develop new diagnostic tests.

AI is also helping to speed up the search for medical treatments. Selecting and testing treatments for a particular disease can take ages, so leveraging AI’s ability to comb through data can be helpful here, too.

For example, United States-based non-profit Every Cure is using AI algorithms to search through medical databases to match up existing medications with illnesses they might potentially work for. This approach promises to save significant time and resources.

The Hidden AIs

Outside medical research, other fields not directly related to computer science are also benefiting from AI.

At CERN, home of the Large Hadron Collider, a recently developed advanced AI algorithm is helping physicists tackle some of the most challenging aspects of analyzing the particle data generated in their experiments.

Last year, astronomers used an AI algorithm for the first time to identify a “potentially hazardous” asteroid—a space rock that might one day collide with Earth. This algorithm will be a core part of the operations of the Vera C. Rubin Observatory currently under construction in Chile.

One major area of our lives that uses largely “hidden” AI is transportation. Millions of flights and train trips are coordinated by AI all over the world. These AI systems are meant to optimize schedules to reduce costs and maximize efficiency.

Artificial intelligence can also manage real-time road traffic by analyzing traffic patterns, volume and other factors, and then adjusting traffic lights and signals accordingly. Navigation apps like Google Maps also use AI optimization algorithms to find the best path in their navigation systems.
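
At its core, “find the best path” is a shortest-path search over a weighted road graph. The sketch below shows Dijkstra’s algorithm, the textbook ancestor of what navigation systems build on; production routing adds live traffic estimates, heuristics such as A*, and constant re-planning.

```python
import heapq

def dijkstra(graph: dict, start: str, goal: str) -> tuple[float, list]:
    """Shortest path in a weighted graph shaped like {node: [(neighbor, cost), ...]}."""
    queue = [(0.0, start, [start])]  # (cost so far, current node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Edge weights stand in for travel times; a live system would keep updating
# them from traffic data and re-route as conditions change.
roads = {
    "home": [("highway", 10), ("backstreet", 4)],
    "backstreet": [("highway", 3), ("office", 12)],
    "highway": [("office", 5)],
}
print(dijkstra(roads, "home", "office"))  # (12.0, ['home', 'backstreet', 'highway', 'office'])
```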

AI is also present in various everyday items. Robot vacuum cleaners use AI software to process all their sensor inputs and deftly navigate our homes.

The most cutting-edge cars use AI in their suspension systems so passengers can enjoy a smooth ride.

Of course, there is also no shortage of more quirky AI applications. A few years ago, UK-based brewery startup IntelligentX used AI to make custom beers for its customers. Other breweries are also using AI to help them optimize beer production.

And Meet the Ganimals is a “collaborative social experiment” from MIT Media Lab, which uses generative AI technologies to come up with new species that have never existed before.

AI Can Also Be Weaponized

On a less lighthearted note, AI also has many applications in defense. In the wrong hands, some of these uses can be terrifying.

For example, some experts have warned AI can aid the creation of bioweapons. This could happen through gene sequencing, helping non-experts easily produce risky pathogens such as novel viruses.

Where active warfare is taking place, military powers can design warfare scenarios and plans using AI. If a power uses such tools without applying ethical considerations or even deploys autonomous AI-powered weapons, it could have catastrophic consequences.

AI has been used in missile guidance systems to maximize the effectiveness of a military’s operations. It can also be used to detect covertly operating submarines.

In addition, AI can be used to predict and identify the activities and movements of terrorist groups, helping intelligence agencies devise preventive measures. Because these AI systems have complex structures, they require significant processing power to deliver real-time insights.

Much has also been said about how generative AI is supercharging people’s abilities to produce fake news and disinformation. This has the potential to affect the democratic process and sway the outcomes of elections.

AI is present in our lives in so many ways, it is nearly impossible to keep track. Its myriad applications will affect us all.

This is why ethical and responsible use of AI, along with well-designed regulation, is more important than ever. This way we can reap the many benefits of AI while making sure we stay ahead of the risks.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Michael Dziedzic / Unsplash

An Antibiotic You Inhale Can Deliver Medication Deep Into the Lungs

We’ve all been more aware of lung health since Covid-19.

However, for people with asthma and chronic obstructive pulmonary disease (COPD), dealing with lung problems is a lifelong struggle. Those with COPD suffer from highly inflamed lung tissue that swells and obstructs airways, making it hard to breathe. The disease is common, with more than three million annual cases in the US alone.

Although manageable, there is no cure. One problem is that lungs with COPD pump out tons of viscous mucus, which forms a barrier preventing treatments from reaching lung cells. The slimy substance—when not coughed out—also attracts bacteria, further aggravating the condition.

A new study in Science Advances describes a potential solution. Scientists have developed a nanocarrier to shuttle antibiotics into the lungs. Like a biological spaceship, the carrier has “doors” that open and release antibiotics inside the mucus layer to fight infections.

The “doors” themselves are also deadly. Made from a small protein, they rip apart bacterial membranes and clean up their DNA to rid lung cells of chronic infection.

The team engineered an inhalable version of an antibiotic using the nanocarrier. In a mouse model of COPD, the treatment revived their lung cells in just three days. Their blood oxygen levels returned to normal, and previous signs of lung damage slowly healed.

“This immunoantibacterial strategy may shift the current paradigm of COPD management,” the team wrote in the article.

Breathe Me

Lungs are extremely delicate. Picture thin but flexible layers of cells separated into lobes to help coordinate oxygen flow into the body. Once air flows through the windpipe, it rapidly disperses among a complex network of branches, filling thousands of air sacs that supply the body with oxygen while ridding it of carbon dioxide.

These structures are easily damaged, and smoking is a common trigger. Cigarette smoke causes surrounding cells to pump out a slimy substance that obstructs the airway and coats air sacs, making it difficult for them to function normally.

In time, the mucus builds a sort of “glue” that attracts bacteria and condenses into a biofilm. The barrier further blocks oxygen exchange and changes the lung’s environment into one favorable for bacteria growth.

One way to stop the downward spiral is to obliterate the bacteria. Broad-spectrum antibiotics are the most widely used treatment. But because of the slimy protective layer, they can’t easily reach bacteria deep inside lung tissues. Even worse, long-term treatment increases the chance of antibiotic resistance, making it even more difficult to wipe out stubborn bacteria.

But the protective layer has a weakness: It’s just a little bit too sour. Literally.

Open-Door Policy

Like a lemon, the slimy layer is slightly more acidic compared to healthy lung tissue. This quirk gave the team an idea for an ideal antibiotic carrier that would only release its payload in an acidic environment.

The team made hollow nanoparticles out of silica—a flexible biomaterial—filled them with a common antibiotic, and added “doors” to release the drugs.

These openings are controlled by additional short protein sequences that work like “locks.” In normal airway and lung environments, they fold up at the door, essentially sequestering the antibiotics inside the bubble.

Released in lungs with COPD, the local acidity changes the structure of the lock protein, so the doors open and release antibiotics directly into the mucus and biofilm—essentially breaking through the bacterial defenses and targeting them on their home turf.

In one test, the concoction penetrated a lab-grown biofilm in a petri dish. It was far more effective than a previous type of nanoparticle, largely because the carrier’s doors opened once inside the biofilm—in other nanoparticles, the antibiotics remained trapped.

The carriers could also dig deeper into infected areas. Cells have electrical charges. The carrier and mucus both have negative charges, which—like similarly charged ends of two magnets—push the carriers deeper into and through the mucus and biofilm layers.

Along the way, the acidity of the mucus slowly changes the carrier’s charge to positive, so that once past the biofilm, the “lock” mechanism opens and releases medication.
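
To make the logic of the mechanism concrete, here is a toy model of a pH-gated carrier. The threshold and payload numbers are invented for illustration; in reality the “decision” is a conformational change in the lock protein, not an if-statement.

```python
# A toy model of the pH-gated "doors": illustrative numbers only, not the
# study's chemistry. Healthy airways sit near neutral pH; COPD mucus and
# biofilm are slightly more acidic.
from dataclasses import dataclass

@dataclass
class Nanocarrier:
    payload_mg: float
    trigger_ph: float = 6.5  # assumed threshold below which the doors open

    def release(self, local_ph: float) -> float:
        """Return the drug released at this pH; doors stay shut in neutral tissue."""
        if local_ph < self.trigger_ph:
            released, self.payload_mg = self.payload_mg, 0.0
            return released
        return 0.0

carrier = Nanocarrier(payload_mg=1.0)
print(carrier.release(7.0))  # healthy airway: 0.0, payload stays locked in
print(carrier.release(6.0))  # acidic biofilm: 1.0, full payload released
```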

The team also tested the nanoparticle’s ability to obliterate bacteria. In a dish, they wiped out multiple common types of infectious bacteria and destroyed their biofilms. The treatment appeared relatively safe. Tests in human fetal lung cells in a dish found minimal signs of toxicity.

Surprisingly, the carrier itself could also destroy bacteria. Inside an acidic environment, its positive charge broke down bacterial membranes. Like popped balloons, the bugs released genetic material into their surroundings, which the carrier swept up.

Damping the Fire

Bacterial infections in the lungs attract overactive immune cells, which leads to swelling. Blood vessels surrounding air sacs also become permeable, making it easier for dangerous molecules to get through. These changes cause inflammation, making it hard to breathe.

In a mouse model of COPD, the inhalable nanoparticle treatment quieted the overactive immune system. Multiple types of immune cells returned to a healthy level of activation—allowing the mice to switch from a highly inflammatory profile to one that combats infections and inflammation.

Mice treated with the inhalable nanoparticle had about 98 percent less bacteria in their lungs, compared to those given the same antibiotic without the carrier.

Wiping out bacteria gave the mice a sigh of relief. They breathed easier. Their blood oxygen levels went up, and blood acidity—a sign of dangerously low oxygen—returned to normal.

Under the microscope, treated lungs regained their normal structure, with sturdier air sacs that slowly recovered from COPD damage. The treated mice also had less swelling in their lungs from the fluid buildup commonly seen in lung injuries.

The results, while promising, are only for a smoking-related COPD model in mice. There’s still much we don’t know about the treatment’s long-term consequences.

Although there were no signs of side effects so far, it’s possible the nanoparticles could accumulate inside the lungs over time, eventually causing damage. And though the carrier itself damages bacterial membranes, the therapy mostly relies on the encapsulated antibiotic. With antibiotic resistance on the rise, some drugs are already losing their effectiveness against the infections seen in COPD.

Then there’s the chance of mechanical damage over time. Repeatedly inhaling silica-based nanoparticles could cause lung scarring in the long term. So, while nanoparticles could shift strategies for COPD management, it’s clear we need follow-up studies, the team wrote.

Image Credit: crystal light / Shutterstock.com

This Week’s Awesome Tech Stories From Around the Web (Through February 10)

COMPUTING

Sam Altman Seeks Trillions of Dollars to Reshape Business of Chips and AI
Keach Hagey | The Wall Street Journal
“The OpenAI chief executive officer is in talks with investors including the United Arab Emirates government to raise funds for a wildly ambitious tech initiative that would boost the world’s chip-building capacity, expand its ability to power AI, among other things, and cost several trillion dollars, according to people familiar with the matter. The project could require raising as much as $5 trillion to $7 trillion, one of the people said.”

AUTOMATION

AI Is Rewiring Coders’ Brains. Yours May Be Next
Will Knight | Wired
“GitHub’s owner, Microsoft, said in its latest quarterly earnings that there are now 1.3 million paid Copilot accounts—a 30 percent increase over the previous quarter—and noted that 50,000 different companies use the software. Dohmke says the latest usage data from Copilot shows that almost half of all the code produced by users is AI-generated. At the same time, he claims there is little sign that these AI programs can operate without human oversight.”

TECH

Google Prepares for a Future Where Search Isn’t King
Lauren Goode | Wired
“[Sundar] Pichai is…experimenting with a new vision for what Google offers—not replacing search, not yet, but building an alternative to see what sticks. ‘This is how we’ve always approached search, in the sense that as search evolved, as mobile came in and user interactions changed, we adapted to it,’ Pichai says, speaking with Wired ahead of the Gemini launch. ‘In some cases we’re leading users, as we are with multimodal AI. But I want to be flexible about the future, because otherwise we’ll get it wrong.'”

BIOTECH

Turbocharged CAR-T Cells Melt Tumors in Mice—Using a Trick From Cancer Cells
Asher Mullard | Nature
“The team treated mice carrying blood and solid cancers with several T-cell therapies boosted with CARD11–PIK3R3, and watched the animals’ tumors melt away. Researchers typically use around one million cells to treat these mice, says Choi, but even 20,000 of the cancer-mutation-boosted T cells were enough to wipe out tumors. ‘That’s an impressively small number of cells,’ says Nick Restifo, a cell-therapy researcher and chief scientist of the rejuvenation start-up company Marble Therapeutics in Boston, Massachusetts.”

COMPUTING

OpenAI Wants to Control Your Computer
Maxwell Zeff | Gizmodo
“OpenAI is reportedly developing ‘agent software,’ that will effectively take over your device and complete complex tasks on your behalf, according to The Information. OpenAI’s agent would work between multiple apps on your computer, performing clicks, cursor movements, and text typing. It’s really a new type of operating system, and it could change the way you interact with your computer altogether.”

TRANSPORTATION

The New Car Batteries That Could Power the Electric Vehicle Revolution
Nicola Jones | Nature
“Researchers are experimenting with different designs that could lower costs, extend vehicle ranges and offer other improvements. …Chinese manufacturers have announced budget cars for 2024 featuring batteries based not on the lithium that powers today’s best electric vehicles (EVs), but on cheap sodium—one of the most abundant elements in Earth’s crust. And a US laboratory has surprised the world with a dream cell that runs in part on air and could pack enough energy to power airplanes.”

SECURITY

I Stopped Using Passwords. It’s Great—and a Total Mess
Matt Burgess | Wired
“For the past month, I’ve been converting as many of my accounts as possible—around a dozen for now—to use passkeys and start the move away from the password for good. Spoiler: When passkeys work seamlessly, it’s a glimpse of a more secure future for millions, if not billions, of people, and a reinvention of how we sign in to websites and services. But getting there for every account across the internet is still likely to prove a minefield and take some time.”

ENERGY

Momentary Fusion Breakthroughs Face Hard Reality
Edd Gent | IEEE Spectrum
“The dream of fusion power inched closer to reality in December 2022, when researchers at Lawrence Livermore National Laboratory (LLNL) revealed that a fusion reaction had produced more energy than what was required to kick-start it. According to new research, the momentary fusion feat required exquisite choreography and extensive preparations, whose high degree of difficulty reveals a long road ahead before anyone dares hope a practicable power source could be at hand.”

ARTIFICIAL INTELLIGENCE

Meet ‘Smaug-72B’: The New King of Open-Source AI
Michael Nuñez | VentureBeat
“What’s most noteworthy about today’s release is that Smaug-72B outperforms GPT-3.5 and Mistral Medium, two of the most advanced proprietary large language models developed by OpenAI and Mistral, respectively, in several of the most popular benchmarks. While the model still falls short of the 90-100 point average indicative of human-level performance, its birth signals that open-source AI may soon rival Big Tech’s capabilities, which have long been shrouded in secrecy.”

ETHICS

AI-Generated Voices in Robocalls Can Deceive Voters. The FCC Just Made Them Illegal
Ali Swenson | Associated Press
“The [FCC] on Thursday outlawed robocalls that contain voices generated by artificial intelligence, a decision that sends a clear message that exploiting the technology to scam people and mislead voters won’t be tolerated. …The agency’s chairwoman, Jessica Rosenworcel, said bad actors have been using AI-generated voices in robocalls to misinform voters, impersonate celebrities, and extort family members. ‘It seems like something from the far-off future, but this threat is already here,’ Rosenworcel told The AP on Wednesday as the commission was considering the regulations.”

Image Credit: NASA Hubble Space Telescope / Unsplash

It Will Take Only a Single SpaceX Starship to Launch a Space Station

0

SpaceX’s forthcoming Starship rocket will make it possible to lift unprecedented amounts of material into orbit. One of its first customers will be a commercial space station, which will be launched fully assembled in a single mission.

Measuring 400 feet tall and capable of lifting 150 tons to low-Earth orbit, Starship will be the largest and most powerful rocket ever built. But with its first two test launches ending in “rapid unscheduled disassembly”—SpaceX’s euphemism for an explosion—the spacecraft is still a long way from commercial readiness.

That hasn’t stopped customers from signing up for launches. Now, a joint venture between Airbus and Voyager Space that’s building a private space station called Starlab has inked a contract with SpaceX to get it into orbit. The venture plans to put the impressive capabilities of the new rocket to full use by launching the entire 26-foot-diameter space station in one go.

“Starlab’s single-launch solution continues to demonstrate not only what is possible, but how the future of commercial space is happening now,” SpaceX’s Tom Ochinero said in a statement. “The SpaceX team is excited for Starship to launch Starlab to support humanity’s continued presence in low-Earth orbit on our way to making life multiplanetary.”

Starlab is one of several private space stations currently under development as NASA looks to find a replacement for the International Space Station, which is due to be retired in 2030. In 2021, the agency awarded $415 million in funding for new orbital facilities to Voyager Space, Northrop Grumman, and Jeff Bezos’ company Blue Origin. Axiom Space also has a contract with NASA to build a commercial module that will be attached to the ISS in 2026 and then be expanded to become an independent space station around the time its host is decommissioned.

Northrop Grumman and Voyager have since joined forces and brought Airbus on board to develop Starlab together. The space station will have only two modules—a service module that provides power from its solar panels and propulsion, and a habitation module with quarters for a crew of four and a laboratory. That compares to the 16 modules that make up the ISS. But at roughly twice the diameter of its predecessor’s modules, those two modules will still provide half the total volume of the ISS.
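That arithmetic works because a cylindrical module’s volume scales with the square of its diameter: doubling the width quadruples the volume per unit length. Here is a quick sanity check in Python, using assumed, illustrative dimensions rather than official specs:

```python
import math

def cylinder_volume(diameter_m, length_m):
    # Pressurized volume of an idealized cylindrical module.
    return math.pi * (diameter_m / 2) ** 2 * length_m

# Assumed, illustrative dimensions; not official specs for either station.
iss_module = cylinder_volume(diameter_m=4.2, length_m=8.0)      # ~111 m^3
starlab_module = cylinder_volume(diameter_m=8.0, length_m=8.0)  # ~402 m^3

print(f"One wide module holds ~{starlab_module / iss_module:.1f}x as much")
```

Two extra-wide modules standing in for roughly seven conventional ones is how a two-module station can end up at about half the volume of the 16-module ISS.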

The station is designed to provide an orbital base for space agencies like NASA as well as private customers and other researchers. The fact that Hilton is helping design the crew quarters suggests they will be catering to space tourists too.

Typically, space stations are launched in parts and assembled in space, but Starlab will instead be fully assembled on the ground. This not only means it will be habitable almost immediately after launch, but it also greatly simplifies the manufacturing process, Voyager CEO Dylan Taylor told TechCrunch recently.

“Let’s say you have a station that requires multiple launches, and then you’re taking the hardware and you’re assembling it [on orbit],” he said. “Not only is that very costly, but there’s a lot of execution risk around that as well. That’s what we were trying to avoid and we’re convinced that that’s the best way to go.”

As Starship is the only rocket big enough to carry such a large payload in one go, it’s not surprising Voyager has chosen SpaceX, even though the vehicle they’re supposed to fly is still under development. The companies didn’t give a timeline for the launch.

If they pull it off, it would be a major feat of space engineering. But it’s still unclear how economically viable this new generation of private space stations will be. Ars Technica points out that it cost NASA more than $100 billion to build the ISS and another $3 billion a year to operate it.

The whole point of NASA encouraging private space stations is to slash that bill, so it’s unlikely to offer anywhere near that much cash. The commercial applications for space stations are fuzzy at best, so whether space tourists and researchers will provide enough money to make up the difference remains to be seen.

But spaceflight is much cheaper these days thanks to SpaceX driving down launch costs, and the ability to launch pre-assembled space stations could further slash the overall bill. So, Starlab may well prove the doubters wrong and usher in a new era of commercial space flight.

Image Credit: Voyager Space

Partially Synthetic Moss Paves the Way for Plants With Designer Genomes

0

Synthetic biology is already rewriting life.

In late 2023, scientists revealed yeast cells with half their genetic blueprint replaced by artificial DNA. It was a “watershed” moment in an 18-year-long project to design alternate versions of every yeast chromosome. Despite having seven and a half synthetic chromosomes, the cells reproduced and thrived.

A new study moves us up the evolutionary ladder to designer plants.

For a project called SynMoss, a team in China redesigned part of a single chromosome in a type of moss. The resulting part-synthetic plant grew normally and produced spores, making it one of the first living things with multiple cells to carry a partially artificial chromosome.

The custom changes in the plant’s chromosomes are relatively small compared to the synthetic yeast. But it’s a step towards completely redesigning genomes in higher-level organisms.

In an interview with Science, synthetic biologist Dr. Tom Ellis of Imperial College London said it’s a “wake-up call to people who think that synthetic genomes are only for microbes.”

Upgrading Life

Efforts to rewrite life aren’t just to satisfy scientific curiosity.

Tinkering with DNA can help us decipher evolutionary history and pinpoint critical stretches of DNA that keep chromosomes stable or cause disease. The experiments could also help us better understand DNA’s “dark matter.” Littered across the genome, mysterious sequences that don’t encode proteins have long baffled scientists: Are they useful or just remnants of evolution?

Synthetic organisms also make it easier to engineer living things. Bacteria and yeast, for example, are already used to brew beer and pump out life-saving medications such as insulin. By adding, switching, or deleting parts of the genome, it’s possible to give these cells new capabilities.

In one recent study, for example, researchers reprogrammed bacteria to synthesize proteins using amino acid building blocks not seen in nature. In another study, a team turned bacteria into plastic-chomping Terminators that recycle plastic waste into useful materials.

While impressive, bacteria are made of cells unlike ours—their genetic material floats around, making them potentially easier to rewire.

The Synthetic Yeast Project was a breakthrough. Unlike bacteria, yeast is a eukaryotic cell. Plants, animals, and humans all fall into this category. Our DNA is protected inside a nut-like bubble called a nucleus, making it more challenging for synthetic biologists to tweak.

And as far as eukaryotes go, plants are harder to manipulate than yeast—a single-cell organism—as they contain multiple cell types that coordinate growth and reproduction. Chromosomal changes can play out differently depending on how each cell functions and, in turn, affect the health of the plant.

“Genome synthesis in multicellular organisms remains uncharted territory,” the team wrote in their paper.

Slow and Steady

Rather than building a whole new genome from scratch, the team tinkered with the existing moss genome.

This green fuzz has been extensively studied in the lab. An early analysis of the moss genome found it has 35,000 potential genes—strikingly complex for a plant. All 26 of its chromosomes have been completely sequenced.

For this reason, the plant is a “broadly used model in evolutionary developmental and cell biological studies,” wrote the team.

Moss genes readily adapt to environmental changes, especially those that repair DNA damage from sunlight. Compared to other plants—such as thale cress, another model biologists favor—moss has the built-in ability to tolerate large DNA changes and regenerate faster. Both aspects are “essential” when rewriting the genome, explained the team.

Another perk? The moss can grow into a full plant from a single cell. This ability is a dream scenario for synthetic biologists because altering genes or chromosomes in just one cell can potentially change an entire organism.

Like our own, plant chromosomes look like an “X” with two crossed arms. For this study, the team decided to rewrite the shortest chromosome arm in the plant—chromosome 18. It was still a mammoth project. Previously, the largest replacement was only about 5,000 DNA letters; the new study needed to replace over 68,000 letters.

Replacing natural DNA sequences with “the redesigned large synthetic fragments presented a formidable technical challenge,” wrote the team.

They took a divide-and-conquer strategy. They first designed mid-sized chunks of synthetic DNA before combining them into a single DNA “mega-chunk” of the chromosome arm.
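To picture the divide-and-conquer step, here is a toy sketch (purely illustrative; real genome synthesis relies on wet-lab assembly methods and verification at every stage). A long designed sequence is cut into overlapping fragments, and the overlaps later guide reassembly:

```python
OVERLAP = 10  # bases shared between adjacent fragments (illustrative)

def split_into_fragments(seq, frag_len=50, overlap=OVERLAP):
    # Cut the designed sequence into chunks that share an overlap region.
    step = frag_len - overlap
    return [seq[i:i + frag_len] for i in range(0, len(seq) - overlap, step)]

def assemble(fragments, overlap=OVERLAP):
    # Stitch fragments back together by matching each one's leading overlap.
    seq = fragments[0]
    for frag in fragments[1:]:
        assert seq.endswith(frag[:overlap]), "adjacent overlaps must match"
        seq += frag[overlap:]
    return seq

designed = "ATGC" * 250  # stand-in for a designed 1,000-base region
fragments = split_into_fragments(designed)
assert assemble(fragments) == designed  # round-trips cleanly
```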

The newly designed chromosome had several notable changes. It was stripped of transposons, or “jumping genes.” These DNA blocks move around the genome, and scientists are still debating if they’re essential for normal biological functions or if they contribute to disease. The team also added DNA “tags” to the chromosome to mark it as synthetic and made changes to how it regulates the manufacturing of certain proteins.

Overall, the changes reduced the size of the chromosome by nearly 56 percent. After inserting the designer chromosome into moss cells, the team nurtured them into adult plants.

A Half-Synthetic Blossom

Even with a heavily edited genome, the synthetic moss was surprisingly normal. The plants readily grew into leafy bushes with multiple branches and eventually produced spores. All reproductive structures were like those found in the wild, suggesting the half-synthetic plants had a normal life cycle and could potentially reproduce.

The plants also maintained their resilience against highly salty environments—a useful adaptation also seen in their natural counterparts.

But the synthetic moss did have some unexpected epigenetic quirks. Epigenetics is the science of how cells turn genes on or off. The synthetic part of the chromosome had a different epigenetic profile compared to natural moss, with more activated genes than usual. This could potentially be harmful, according to the team.

The moss also offered potential insights into DNA’s “dark matter,” including transposons. Deleting these jumping genes didn’t seem to harm the partially synthetic plants, suggesting they might not be essential to their health.

More practically, the results could boost biotechnology efforts using moss to produce a wide range of therapeutic proteins, including ones that combat heart disease, heal wounds, or treat stroke. Moss is already used to synthesize medical drugs. A partially designer genome could alter its metabolism, boost its resilience against infections, and increase yield.

The next step is to replace the entirety of chromosome 18’s short arm with synthetic sequences. The team aims to generate an entirely synthetic moss genome within 10 years.

It’s an ambitious goal. It took a global collaboration 18 years to rewrite half the yeast genome, and the moss genome is 40 times bigger. But with increasingly efficient and cheaper DNA reading and synthesis technologies, the goal isn’t beyond reach.

Similar techniques could also inspire other projects to redesign chromosomes in organisms beyond bacteria and yeast, from plants to animals.

Image Credit: Pyrex / Wikimedia Commons

Scientists ‘Astonished’ Yet Another of Saturn’s Moons May Be an Ocean World

0

Liquid water is a crucial prerequisite for life as we know it. When astronomers first looked out into the solar system, it seemed Earth was a special case in this respect. They found enormous balls of gas, desert worlds, blast furnaces, and airless hellscapes. But evidence is growing that liquid water isn’t rare at all—it’s just extremely well-hidden.

The list of worlds with subsurface oceans in our solar system is getting longer by the year. Of course, many people are familiar with the most obvious cases: The icy moons Enceladus and Europa are literally bursting at the seams with water. But other less obvious candidates have joined their ranks, including Callisto, Ganymede, Titan, and even, perhaps, Pluto.

Now, scientists argue in a paper in Nature that we may have reason to add yet another long shot to the list: Saturn’s “Death Star” moon, Mimas. Nicknamed for the giant impact crater occupying around a third of its diameter, Mimas has been part of the conversation for years. But a lack of clear evidence on its surface made scientists skeptical it could be hiding an interior ocean.

The paper, which contains fresh analysis of observations made by the Cassini probe, says changes in the moon’s orbit over time are best explained by the presence of a global ocean deep below its icy crust. The team believes the data also suggests the ocean is very young, explaining why it has yet to make its presence known on the surface.

“The major finding here is the discovery of habitability conditions on a solar system object which we would never, never expect to have liquid water,” Valery Lainey, first author and scientist at the Observatoire de Paris, told Space.com. “It’s really astonishing.”

The Solar System Is Sopping

How exactly do frozen moons on the outskirts of the solar system come to contain whole oceans of liquid water?

In short: Combine heat and a good amount of ice and you get oceans. We know there is an abundance of ice in the outer solar system, from moons to comets. But heat? Not so much. The further out you go, the more the sun fades into the starry background.

Interior ocean worlds depend on another source of heat—gravity. As they orbit Jupiter or Saturn, enormous gravitational shifts flex and warp their insides. The friction from this grinding, called tidal flexing, produces heat, which melts ice to form salty oceans.
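The scale of this heating can be estimated with the standard formula for a synchronously rotating moon on an eccentric orbit: the heating rate is (21/2) * (k2/Q) * n^5 * R^5 * e^2 / G, where n is the orbital frequency, R the moon’s radius, e its orbital eccentricity, and k2/Q its tidal response. Here is a rough plug-in for Mimas, with an assumed, illustrative k2/Q (the moon’s true tidal response is precisely what researchers are trying to pin down):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
R = 1.98e5     # Mimas's mean radius, m
e = 0.0196     # orbital eccentricity
n = 2 * math.pi / (0.942 * 86400)  # mean motion, rad/s (0.942-day orbit)
k2_over_Q = 1e-4  # assumed tidal response; the true value is uncertain

# Eccentricity-tide heating rate for a synchronously rotating satellite.
heating_watts = 10.5 * k2_over_Q * n**5 * R**5 * e**2 / G
print(f"{heating_watts / 1e9:.1f} GW")  # ~5 GW for these illustrative inputs
```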

And the more we look, the more we find evidence of hidden oceans throughout the outer solar system. Some are thought to have more liquid water than Earth, and where there’s liquid water, there just might be life—at least, that’s what we want to find out.

Yet Another Ocean World?

Speculation that Mimas might be an ocean world isn’t new. A decade ago, small shifts in the moon’s orbit measured by Cassini suggested it either had a strangely pancake-shaped core or an interior ocean. Scientists thought the latter was a long shot because—unlike the cracked but largely crater-free surfaces of Enceladus and Europa—Mimas’s surface is pocked with craters, suggesting it has been largely undisturbed for eons.

The new study aimed for a more precise look at the data to better weigh the possibilities. According to modeling with more accurate calculations, a pancake-shaped core is likely impossible: To fit observations, its ends would have to extend beyond the moon’s surface. “This is incompatible with observations,” they wrote.

So they looked to the interior ocean hypothesis and modeled a range of possibilities. The models not only fit Mimas’s orbit well, they also suggest the ocean likely begins 20 to 30 kilometers below the surface. The team believes the ocean would likely be relatively young, somewhere between a few million years old and 25 million years old. The combination of depth and youth could explain why the moon’s surface remains largely undisturbed.

But what accounts for this youth? The team suggests relatively recent gravitational encounters—perhaps with other moons or during the formation of Saturn’s ring system, which some scientists believe to be relatively young also—may have changed the degree of tidal flexing inside Mimas. The associated heat only recently became great enough to melt ice into oceans.

Take Two

It’s a compelling case, but still unproven. Next steps would involve more measurements taken by a future mission. If these measurements match predictions made in the paper, scientists might confirm the ocean’s existence as well as its depth below the surface.

Studying a young, still-evolving interior ocean could give us clues about how older, more stable oceans formed in eons past. And the more liquid water we find in our own solar system, the more likely it’s common throughout the galaxy. If water worlds—either in the form of planets or moons—are a dime a dozen, what does that say about life?

This is, of course, still one of the biggest questions in science. But each year, thanks to clues gathered in our solar system and beyond, we’re stepping closer to an answer.

Image Credit: NASA/JPL/Space Science Institute

This AI Is Learning to Decode the ‘Language’ of Chickens

0

Have you ever wondered what chickens are talking about? Chickens are quite the communicators—their clucks, squawks, and purrs are not just random sounds but a complex language system. These sounds are their way of interacting with the world and expressing joy, fear, and social cues to one another.

As with humans, the “language” of chickens varies with age, environment, and, surprisingly, domestication, giving us insights into their social structures and behaviors. Understanding these vocalizations can transform our approach to poultry farming, enhancing chicken welfare and quality of life.

At Dalhousie University, my colleagues and I are conducting research that uses artificial intelligence to decode the language of chickens. It’s a project that’s set to revolutionize our understanding of these feathered creatures and their communication methods, offering a window into their world that was previously closed to us.

Chicken Translator

The use of AI and machine learning in this endeavor is like having a universal translator for chicken speech. AI can analyze vast amounts of audio data. As our research (yet to be peer-reviewed) documents, our algorithms are learning to recognize patterns and nuances in chicken vocalizations. This isn’t a simple task—chickens have a range of sounds that vary in pitch, tone, and context.

But by using advanced data analysis techniques, we’re beginning to crack their code. This breakthrough in animal communication is not just a scientific achievement; it’s a step towards more humane and empathetic treatment of farm animals.

One of the most exciting aspects of this research is understanding the emotional content behind these sounds. Using natural language processing (NLP), a technology often used to decipher human languages, we’re learning to interpret the emotional states of chickens. Are they stressed? Are they content? By understanding their emotional state, we can make more informed decisions about their care and environment.
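To make this concrete, the skeleton of such a pipeline often looks like the toy sketch below (illustrative only, not our actual system; the file names and labels are placeholders): summarize each recording as spectral features, then train a classifier on annotated calls.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def featurize(wav_path):
    # Summarize a recording as its mean MFCCs, a compact spectral
    # fingerprint that captures pitch and timbre across the clip.
    audio, sr = librosa.load(wav_path, sr=22050)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Placeholder file names and labels; real projects need thousands of calls
# annotated with behavioral context, which is the hard, slow part.
clips = ["cluck_001.wav", "squawk_002.wav", "purr_003.wav"]
labels = ["content", "stressed", "content"]

X = np.stack([featurize(clip) for clip in clips])
model = RandomForestClassifier(n_estimators=200).fit(X, labels)

print(model.predict([featurize("new_call.wav")]))  # predicted state
```

In practice, the hard part is the labeled data: reliably annotating thousands of calls with behavioral context.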

Non-Verbal Chicken Communication

In addition to vocalizations, our research delves into non-verbal cues to gauge emotions in chickens. In a preprint (not-yet-peer-reviewed) paper, we examine whether chickens’ eye blinks and facial temperatures might be reliable indicators of their emotional states.

By using non-invasive methods like video and thermal imaging, we’ve observed changes in temperature around the eye and head regions, as well as variations in blinking behavior, which appear to be responses to stress. These preliminary findings are opening new avenues in understanding how chickens express their feelings, both behaviorally and physiologically, providing us with additional tools to assess their well-being.

Happier Fowl

This project isn’t just about academic curiosity; it has real-world implications. In the agricultural sector, understanding chicken vocalizations can lead to improved farming practices. Farmers can use this knowledge to create better living conditions, leading to healthier and happier chickens. This, in turn, can impact the quality of produce, animal health, and overall farm efficiency.

The insights gained from this research can also be applied to other areas of animal husbandry, potentially leading to breakthroughs in the way we interact with and care for a variety of farm animals.

But our research goes beyond just farming practices. It has the potential to influence policies on animal welfare and ethical treatment. As we grow to understand these animals better, we’re compelled to advocate for their well-being. This research is reshaping how we view our relationship with animals, emphasizing empathy and understanding.

Understanding animal communication and behavior can impact animal welfare policies. Image Credit: Unsplash/Zoe Schaeffer

Ethical AI

The ethical use of AI in this context sets a precedent for future technological applications in animal science. We’re demonstrating that technology can and should be used for the betterment of all living beings. It’s a responsibility that we take seriously, ensuring that our advancements in AI are aligned with ethical principles and the welfare of the subjects of our study.

The implications of our research extend to education and conservation efforts as well. By understanding the communication methods of chickens, we gain insights into avian communication in general, providing a unique perspective on the complexity of animal communication systems. This knowledge can be vital for conservationists working to protect bird species and their habitats.

As we continue to make strides in this field, we are opening doors to a new era in animal-human interaction. Our journey into decoding chicken language is more than just an academic pursuit: It’s a step towards a more empathetic and responsible world.

By leveraging AI, we’re not only unlocking the secrets of avian communication but also setting new standards for animal welfare and ethical technological use. It’s an exciting time, as we stand on the cusp of a new understanding between humans and the animal world, all starting with the chicken.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Ben Moreland / Unsplash 

A One-and-Done Injection to Slow Aging? New Study in Mice Opens the Possibility

0

A preventative anti-aging therapy seems like wishful thinking.

Yet a new study led by Dr. Corina Amor Vegas at Cold Spring Harbor Laboratory describes a treatment that brings the dream to life—at least for mice. Given a single injection in young adulthood, they aged more slowly compared to their peers.

By the equivalent of roughly 65 years of age in humans, the mice were slimmer, could better regulate blood sugar and insulin levels, and had lower inflammation and a more youthful metabolic profile. They even kept up their love for running, whereas untreated seniors turned into couch potatoes.

The shot is made up of CAR (chimeric antigen receptor) T cells. These cells are genetically engineered from the body’s T cells—a type of immune cell adept at hunting down particular targets in the body.

CAR T cells first shot to fame as a revolutionary therapy for previously untreatable blood cancers. They’re now close to tackling other medical problems, such as autoimmune disorders, asthma, liver and kidney diseases, and even HIV.

The new study took a page out of CAR T’s cancer-fighting playbook. But instead of targeting cancer cells, the team engineered the cells to hunt down and destroy senescent cells, a type of cell linked to age-related health problems. Often dubbed “zombie cells,” they accumulate with age and pump out a toxic chemical brew that damages surrounding tissues. Zombie cells have been in the crosshairs of longevity researchers and investors alike. Drugs that destroy these cells, called senolytics, are now a multi-billion-dollar industry.

The new treatment, called senolytic CAR T, also turned back the clock when given to elderly mice. As in humans, the risk of diabetes in mice increases with age. With zombie cells cleared from multiple organs, the treated mice could handle sugar rushes without a hitch. Their metabolism improved, and they began jumping around and running like much younger mice.

“If we give it to aged mice, they rejuvenate. If we give it to young mice, they age slower. No other therapy right now can do this,” said Amor Vegas in a press release.

The Walking Dead

Zombie cells aren’t always evil.

They start out as regular cells. As damage to their DNA and internal structures accumulates over time, the body “locks” the cells into a special state called senescence. Early in life, this process helps prevent cells from turning cancerous by limiting their ability to divide. Although still living, the cells can no longer perform their usual jobs. Instead, they release a complex cocktail of chemicals that alerts the body’s immune system—including T cells—to clear them out. Like spring cleaning, this helps keep the body functioning normally.

With age, however, zombie cells linger. They amp up inflammation, leading to age-related diseases such as cancer, tissue scarring, and blood vessel and heart conditions. Senolytics—drugs that destroy these cells—improve these conditions and increase life span in mice.

But like a pill of Advil, senolytics don’t last long inside the body. To keep zombie cells at bay, repeated doses are likely necessary.

A Perfect Match

Here’s where CAR T cells come in. Back in 2020, Amor Vegas and colleagues designed a “living” senolytic T cell that tracks down and kills zombie cells.

All cells are dotted with protein “beacons” that stick out from their surfaces. Different cell types have unique assortments of these proteins. The team found a protein “beacon” on zombie cells called uPAR. The protein normally occurs at low levels in most organs, but it ramps up in zombie cells, making it a perfect target for senolytic CAR T cells.

In a test, the therapy eliminated senescent cells in mouse models with liver and lung cancers. But surprisingly, the team also found that young mice receiving the treatment had better liver health and metabolism—both of which contribute to age-related diseases.

Can a similar treatment also extend health during aging?

A Living Anti-Aging Drug

The team first injected senolytic CAR T cells into elderly mice aged the equivalent of roughly 65 human years. Within 20 days, the mice had lower numbers of zombie cells throughout their bodies, particularly in their livers, fatty tissues, and pancreases. Inflammation levels caused by zombie cells went down, and the mice’s immune profiles reversed to a more youthful state.

In both mice and humans, metabolism tends to go haywire with age. Our ability to handle sugars and insulin decreases, which can lead to diabetes.

With senolytic CAR T therapy, the elderly mice could regulate their blood sugar levels far better than non-treated peers. They also had lower baseline insulin levels after fasting, which rapidly increased when given a sugary treat—a sign of a healthy metabolism.

A potentially dangerous side effect of CAR T is an overzealous immune response. Although the team saw signs of the side effect in young animals at high doses, lowering the amount of the therapy was safe and effective in elderly mice.

Young and Beautiful

Chemical senolytics only last a few hours inside the body. Practically, this means they may need to be consistently taken to keep zombie cells at bay.

CAR T cells, on the other hand, have a far longer lifespan, persisting inside the body for over 10 years after an initial infusion. They also “train” the immune system to learn about a new threat—in this case, senescent cells.

“T cells have the ability to develop memory and persist in your body for really long periods, which is very different from a chemical drug,” said Amor Vegas. “With CAR T cells, you have the potential of getting this one treatment, and then that’s it.”

To test how long senolytic CAR T cells can persist in the body, the team infused them into young adult mice and monitored their health as they aged. The engineered cells were dormant until senescent cells began to build up, then they reactivated and readily wiped out the zombie cells.

With just a single shot, the mice aged gracefully. They had lower blood sugar levels, better insulin responses, and were more physically active well into old age.

But mice aren’t people. Their life spans are far shorter than ours. The effects of senolytic CAR T cells may not last as long in our bodies, potentially requiring multiple doses. The treatment can also be dangerous, sometimes triggering a violent immune response that damages organs. Then there’s the cost factor. CAR T therapies are out of reach for most people—a single dose is priced at hundreds of thousands of dollars for cancer treatments.

Despite these problems, the team is cautiously moving forward.

“With CAR T cells, you have the potential of getting this one treatment, and then that’s it,” said Amor Vegas. For chronic age-related diseases, that’s a potential life-changer. “Think about patients who need treatment multiple times per day versus you get an infusion, and then you’re good to go for multiple years.”

Image Credit: Senescent cells (blue) in healthy pancreatic tissue samples from an old mouse treated with CAR T cells as a pup / Cold Spring Harbor Laboratory

This Week’s Awesome Tech Stories From Around the Web (Through February 3)

0

ARTIFICIAL INTELLIGENCE

I Tested a Next-Gen AI Assistant. It Will Blow You Away
Will Knight | Wired
“When the fruits of the recent generative AI boom get properly integrated into…legacy assistant bots [like Siri and Alexa], they will surely get much more interesting. ‘A year from now, I would expect the experience of using a computer to look very different,’ says Shah, who says he built vimGPT in only a few days. ‘Most apps will require less clicking and more chatting, with agents becoming an integral part of browsing the web.'”

BIOTECH

CRISPR Gene Therapy Seems to Cure Dangerous Inflammatory Condition
Clare Wilson | New Scientist
“Ten people who had the one-off gene treatment that is given directly into the body saw their number of ‘swelling attacks’ fall by 95 percent in the first six months as the therapy took effect. Since then, all but one have had no further episodes for at least a further year, while one person who had the lowest dose of the treatment had one mild attack. ‘This is potentially a cure,’ says Padmalal Gurugama at Cambridge University Hospitals in the UK, who worked on the new approach.”

VIRTUAL REALITY

Apple Vision Pro Review: Magic, Until It’s Not
Nilay Patel | The Verge
“The Vision Pro is an astounding product. It’s the sort of first-generation device only Apple can really make, from the incredible display and passthrough engineering, to the use of the whole ecosystem to make it so seamlessly useful, to even getting everyone to pretty much ignore the whole external battery situation. …But the shocking thing is that Apple may have inadvertently revealed that some of these core ideas are actually dead ends—that they can’t ever be executed well enough to become mainstream.”

ARTIFICIAL INTELLIGENCE

Allen Institute for AI Releases ‘Truly Open Source’ LLM to Drive ‘Critical Shift’ in AI Development
Sharon Goldman | VentureBeat
“While other models have included the model code and model weights, OLMo also provides the training code, training data and associated toolkits, as well as evaluation toolkits. In addition, OLMo was released under an open source initiative (OSI) approved license, with AI2 saying that ‘all code, weights, and intermediate checkpoints are released under the Apache 2.0 License.’ The news comes at a moment when open source/open science AI, which has been playing catch-up to closed, proprietary LLMs like OpenAI’s GPT-4 and Anthropic’s Claude, is making significant headway.”

ROBOTICS

This Robot Can Tidy a Room Without Any Help
Rhiannon Williams | MIT Technology Review
“While robots may easily complete tasks like [picking up and moving things] in a laboratory, getting them to work in an unfamiliar environment where there’s little data available is a real challenge. Now, a new system called OK-Robot could train robots to pick up and move objects in settings they haven’t encountered before. It’s an approach that might be able to plug the gap between rapidly improving AI models and actual robot capabilities, as it doesn’t require any additional costly, complex training.”

FUTURE

People Are Worried That AI Will Take Everyone’s Jobs. We’ve Been Here Before.
David Rotman | MIT Technology Review
“[Karl T. Compton’s 1938] essay concisely framed the debate over jobs and technical progress in a way that remains relevant, especially given today’s fears over the impact of artificial intelligence. …While today’s technologies certainly look very different from those of the 1930s, Compton’s article is a worthwhile reminder that worries over the future of jobs are not new and are best addressed by applying an understanding of economics, rather than conjuring up genies and monsters.”

HEALTH

Experimental Drug Cuts Off Pain at the Source, Company Says
Gina Kolata | The New York Times
“Vertex Pharmaceuticals of Boston announced [this week] that it had developed an experimental drug that relieves moderate to severe pain, blocking pain signals before they can get to the brain. It works only on peripheral nerves—those outside the brain and the spinal cord—making it unlike opioids. Vertex says its new drug is expected to avoid opioids’ potential to lead to addiction.”

SPACE

Starlab—With Half the Volume of the ISS—Will Fit Inside Starship’s Payload Bay
Eric Berger | Ars Technica
“‘We looked at multiple launches to get Starlab into orbit, and eventually gravitated toward single launch options,’ [Voyager Space CTO Marshall Smith] said. ‘It saves a lot of the cost of development. It saves a lot of the cost of integration. We can get it all built and checked out on the ground, and tested and launch it with payloads and other systems. One of the many lessons we learned from the International Space Station is that building and integrating in space is very expensive.’ With a single launch on a Starship, the Starlab module should be ready for human habitation almost immediately, Smith said.”

FUTURE

9 Retrofuturistic Predictions That Came True
Maxwell Zeff | Gizmodo
“Commentators and reporters annually try to predict where technology will go, but many fail to get it right year after year. Who gets it right? More often than not, the world resembles the pop culture of the past’s vision for the future. Looking to retrofuturism, an old version of the future, can often predict where our advanced society will go.”

TECH

Can This AI-Powered Search Engine Replace Google? It Has for Me.
Kevin Roose | The New York Times
“Intrigued by the hype, I recently spent several weeks using Perplexity as my default search engine on both desktop and mobile. …Hundreds of searches later, I can report that even though Perplexity isn’t perfect, it’s very good. And while I’m not ready to break up with Google entirely, I’m now more convinced that AI-powered search engines like Perplexity could loosen Google’s grip on the search market, or at least force it to play catch-up.”

Image Credit: Dulcey Lima / Unsplash

These Technologies Could Axe 85% of CO2 Emissions From Heavy Industry

0

Heavy industry is one of the most stubbornly difficult areas of the economy to decarbonize. But new research suggests emissions could be reduced by up to 85 percent globally using a mixture of tried-and-tested and upcoming technologies.

While much of the climate debate focuses on areas like electricity, vehicle emissions, and aviation, a huge fraction of carbon emissions comes from hidden industrial processes. In 2022, the sector—which includes things like chemicals, iron and steel, and cement—accounted for a quarter of the world’s emissions, according to the International Energy Agency.

While they are often lumped together, these industries are very different, and the sources of their emissions can be highly varied. That means there’s no silver bullet and explains why the sector has proven to be one of the most challenging to decarbonize.

This prompted researchers from the UK to carry out a comprehensive survey of technologies that could help get the sector’s emissions under control. They found that solutions like carbon capture and storage, switching to hydrogen or biomass fuels, or electrification of key industrial processes could cut out the bulk of the heavy industry carbon footprint.

“Our findings represent a major step forward in helping to design industrial decarbonization strategies and that is a really encouraging prospect when it comes to the future health of the planet,” Dr. Ahmed Gailani, from Leeds University, said in a press release.

The researchers analyzed sectors including iron and steel, chemicals, cement and lime, food and drink, pulp and paper, glass, aluminum, refining, and ceramics. They carried out an extensive survey of all the emissions-reducing technologies that had been proposed for each industry, both those that are well-established and emerging ones.

Across all sectors, they identified four key approaches that could help slash greenhouse gases—switching to low-carbon energy supplies like green hydrogen, renewable electricity, or biomass; using carbon capture and storage to mitigate emissions; modifying or replacing emissions-heavy industrial processes; and using less energy and raw materials to produce a product.

Electrification will likely be an important approach across a range of sectors, the authors found. In industries requiring moderate amounts of heat, natural gas boilers and ovens could be replaced with electric ones. Novel technologies like electric arc furnaces and electric steam crackers could help decarbonize the steel and chemicals industries, respectively, though these technologies are still immature.

Green hydrogen could also play a broad role, both as a fuel for heating and an ingredient in various industrial processes that currently rely on hydrogen derived from fossil fuels. Biomass similarly can be used for heating but could also provide more renewable feedstocks for plastic production.

Some industries, such as cement and chemicals, are particularly hard to tackle because carbon dioxide is produced directly by industrial processes rather than as a byproduct of energy needs. For these sectors, carbon capture and storage will likely be particularly important, say the authors.

In addition, they highlight a range of industry-specific alternative production routes that could make a major dent in emissions. Altogether, they estimate these technologies could slash the average emissions of heavy industry by up to 85 percent compared to the baseline.

It’s important to note that the research, which was reported in Joule, only analyzes the technical feasibility of these approaches. The team did not look into the economics or whether the necessary infrastructure was in place, which could have a big impact on how much of a difference they could really make.

“There are of course many other barriers to overcome,” said Gailani. “For example, if carbon capture and storage technologies are needed but the means to transport CO2 are not yet in place, this lack of infrastructure will delay the emissions reduction process. There is still a great amount of work to be done.”

Nonetheless, the research is the first comprehensive survey of what’s possible when it comes to decarbonizing industry. While bringing these ideas to fruition may take a lot of work, the study shows getting emissions from these sectors under control is entirely possible.

Image Credit: Marek Piwnicki / Unsplash

An AI Just Learned Language Through the Eyes and Ears of a Toddler

0

Sam was six months old when he first strapped a lightweight camera onto his forehead.

For the next year and a half, the camera captured snippets of his life. He crawled around the family’s pets, watched his parents cook, and cried on the front porch with grandma. All the while, the camera recorded everything he heard.

What sounds like a cute toddler home video is actually a daring concept: Can AI learn language like a child? The results could also reveal how children rapidly acquire language and concepts at an early age.

A new study in Science describes how researchers used Sam’s recordings to train an AI to understand language. With just a tiny portion of one child’s life experience over a year, the AI was able to grasp basic concepts—for example, a ball, a butterfly, or a bucket.

The AI, called Child’s View for Contrastive Learning (CVCL), roughly mimics how we learn as toddlers by matching sight to audio. It’s a very different approach than that taken by large language models like the ones behind ChatGPT or Bard. These models’ uncanny ability to craft essays, poetry, or even podcast scripts has thrilled the world. But they need to digest trillions of words from a wide variety of news articles, screenplays, and books to develop these skills.

Kids, by contrast, learn with far less input and rapidly generalize their learnings as they grow. Scientists have long wondered if AI can capture these abilities with everyday experiences alone.

“We show, for the first time, that a neural network trained on this developmentally realistic input from a single child can learn to link words to their visual counterparts,” study author Dr. Wai Keen Vong at NYU’s Center for Data Science said in a press release about the research.

Child’s Play

Children easily soak up words and their meanings from everyday experience.

At just six months old, they begin to connect words to what they’re seeing—for example, a round bouncy thing is a “ball.” By two years of age, they know roughly 300 words and their concepts.

Scientists have long debated how this happens. One theory says kids learn to match what they’re seeing to what they’re hearing. Another suggests language learning requires a broader experience of the world, such as social interaction and the ability to reason.

It’s hard to tease these ideas apart with traditional cognitive tests in toddlers. But we may get an answer by training an AI through the eyes and ears of a child.

M3GAN?

The new study tapped a rich video resource called SAYCam, which includes data collected from three kids between 6 and 32 months old using GoPro-like cameras strapped to their foreheads.

Twice every week, the cameras recorded around an hour of footage and audio as they nursed, crawled, and played. All audible dialogue was transcribed into “utterances”—words or sentences spoken before the speaker or conversation changes. The result is a wealth of multimedia data from the perspective of babies and toddlers.

For the new system, the team designed two neural networks with a “judge” to coordinate them. One translated first-person visuals into the whos and whats of a scene—is it a mom cooking? The other deciphered words and meanings from the audio recordings.

The two systems were then correlated in time so the AI learned to associate correct visuals with words. For example, the AI learned to match an image of a baby to the words “Look, there’s a baby” or an image of a yoga ball to “Wow, that is a big ball.” With training, it gradually learned to separate the concept of a yoga ball from a baby.

“This provides the model a clue as to which words should be associated with which objects,” said Vong.
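This matching step is conceptually similar to contrastive vision-language training, the approach behind models like CLIP. Below is a minimal sketch with stand-in encoders and assumed dimensions; it illustrates the idea rather than the study’s exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in encoder mapping raw features to a shared embedding space."""
    def __init__(self, in_dim, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-length embeddings

image_encoder = Encoder(in_dim=512)  # e.g., pooled video-frame features
text_encoder = Encoder(in_dim=300)   # e.g., averaged utterance embeddings

def contrastive_loss(img, txt, temperature=0.07):
    # Similarity matrix: entry (i, j) scores image i against utterance j.
    logits = img @ txt.T / temperature
    targets = torch.arange(len(logits))  # co-occurring pairs on the diagonal
    # Pull matched frame/utterance pairs together, push mismatches apart.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# One toy training step on a batch of co-occurring frame/utterance pairs.
frames, utterances = torch.randn(32, 512), torch.randn(32, 300)
loss = contrastive_loss(image_encoder(frames), text_encoder(utterances))
loss.backward()
```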

The team then trained the AI on videos from roughly a year and a half of Sam’s life. Together, it amounted to over 600,000 video frames, paired with 37,500 transcribed utterances. Although the numbers sound large, they’re roughly just one percent of Sam’s daily waking life and peanuts compared to the amount of data used to train large language models.

Baby AI on the Rise

To test the system, the team adapted a common cognitive test used to measure children’s language abilities. They showed the AI four new images—a cat, a crib, a ball, and a lawn—and asked which one was the ball.

Overall, the AI picked the correct image around 62 percent of the time. The performance nearly matched a state-of-the-art algorithm trained on 400 million image and text pairs from the web—orders of magnitude more data than that used to train the AI in the study. They found that linking video images with audio was crucial. When the team shuffled video frames and their associated utterances, the model completely broke down.
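The test itself is easy to express in code: embed the prompt word and each candidate image, then pick the image most similar to the word. Continuing the toy sketch above:

```python
def forced_choice(word_embedding, image_embeddings):
    # Embeddings are unit-length, so the dot product is cosine similarity.
    similarities = image_embeddings @ word_embedding
    return int(similarities.argmax())  # index of the best-matching image

# Four candidate images (say, cat, crib, ball, lawn) and the prompt "ball".
candidates = image_encoder(torch.randn(4, 512))
word = text_encoder(torch.randn(300))
print(forced_choice(word, candidates))
```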

The AI could also “think” outside the box and generalize to new situations.

In another test, it was trained on Sam’s perspective of a picture book as his parent said, “It’s a duck and a butterfly.” Later, he held up a toy butterfly when asked, “Can you do the butterfly?” When challenged with multicolored butterfly images—ones the AI had never seen before—it detected three out of four examples for “butterfly” with above 80 percent accuracy.

Not all word concepts scored the same. For instance, “spoon” was a struggle. But it’s worth pointing out that, like a tough reCAPTCHA, the training images were hard to decipher even for a human.

Growing Pains

The AI builds on recent advances in multimodal machine learning, which combines text, images, audio, or video to train a machine brain.

With input from just a single child’s experience, the algorithm was able to capture how words relate to each other and link words to images and concepts. It suggests that, for toddlers, hearing words and matching them to what they’re seeing helps build their vocabulary.

That’s not to say other brain processes, such as social cues and reasoning, don’t come into play. Adding these components to the algorithm could potentially improve it, the authors wrote.

The team plans to continue the experiment. For now, the “baby” AI only learns from still image frames and has a vocabulary composed mostly of nouns. Integrating video segments into the training could help the AI learn verbs because video includes movement.

Adding intonation to speech data could also help. Children learn early on that a mom’s “hmm” can have vastly different meanings depending on the tone.

But overall, combining AI and life experiences is a powerful new method to study both machine and human brains. It could help us develop new AI models that learn like children, and potentially reshape our understanding of how our brains learn language and concepts.

Image Credit: Wai Keen Vong

The First 3D Printer to Use Molten Metal in Space Is Headed to the ISS This Week

0

The Apollo 13 moon mission didn’t go as planned. After an explosion blew off part of the spacecraft, the astronauts spent a harrowing few days trying to get home. At one point, to keep the air breathable, the crew had to cobble together a converter for ill-fitting CO2 scrubbers with duct tape, space suit parts, and pages from a mission manual.

They didn’t make it to the moon, but Apollo 13 was a master class in hacking. It was also a grim reminder of just how alone astronauts are from the moment their spacecraft lifts off. There are no hardware stores in space (yet). So what fancy new tools will the next generation of space hackers use? The first 3D printer to make plastic parts arrived at the ISS a decade ago. This week, astronauts will take delivery of the first metal 3D printer. The machine should arrive at the ISS Thursday as part of the Cygnus NG-20 resupply mission.

The first 3D printer to print metal in space, pictured here, is headed to the ISS. Image Credit: ESA

Built by an Airbus-led team, the printer is about the size of a washing machine—small for metal 3D printers but big for space exploration—and uses high-powered lasers to liquefy metal alloys at temperatures of over 1,200 degrees Celsius (2,192 degrees Fahrenheit). Molten metal is deposited in layers to steadily build small (but hopefully useful) objects, like spare parts or tools.

Astronauts will install the 3D printer in the Columbus Laboratory on the ISS, where the team will conduct four test prints. They then plan to bring these objects home and compare their strength and integrity to prints completed under Earth gravity. They also hope the experiment demonstrates the process—which involves much higher temperatures than prior 3D printers and harmful fumes—is safe.

“The metal 3D printer will bring new on-orbit manufacturing capabilities, including the possibility to produce load-bearing structural parts that are more resilient than a plastic equivalent,” Gwenaëlle Aridon, a lead engineer at Airbus, said in a press release. “Astronauts will be able to directly manufacture tools such as wrenches or mounting interfaces that could connect several parts together. The flexibility and rapid availability of 3D printing will greatly improve astronauts’ autonomy.”

One of four test prints planned for the ISS mission. Image Credit: Airbus Space and Defence SAS

Taking nearly two days per print job, the machine is hardly a speed demon, and the printed objects will be rough around the edges. Following the first demonstration of 3D printing in microgravity on the ISS, the development of technologies suitable for orbital manufacturing has been slow. But as the ISS nears the end of its life and private space stations and other infrastructure projects ramp up, the technology could find more uses.

The need to manufacture items on demand will only grow the further we travel from home and the longer we stay there. The ISS is relatively nearby—roughly 250 miles overhead—but astronauts exploring and building a more permanent presence on the moon or Mars will need to repair and replace anything that breaks on their mission.

Ambitiously, and even further out, metal 3D printing could contribute to ESA’s vision of a “circular space economy,” in which material from old satellites, spent rocket stages, and other infrastructure is recycled into new structures, tools, and parts as needed.

Duct tape will no doubt always have a place in every space hacker’s box of tools—but a few 3D printers to whip up plastic and metal parts on the fly certainly won’t hurt the cause.

Image Credit: NASA

How Much Life Has Ever Existed on Earth, and How Much Ever Will?

0

All organisms are made of living cells. While it is difficult to pinpoint exactly when the first cells came to exist, geologists’ best estimates suggest at least as early as 3.8 billion years ago. But how much life has inhabited this planet since the first cell on Earth? And how much life will ever exist on Earth?

In our new study, published in Current Biology, my colleagues from the Weizmann Institute of Science and Smith College and I took aim at these big questions.

Carbon on Earth

Every year, about 200 billion tons of carbon is taken up through what is known as primary production. During primary production, inorganic carbon—such as carbon dioxide in the atmosphere and bicarbonate in the ocean—is used for energy and to build the organic molecules life needs.

Today, the most notable contributor to this effort is oxygenic photosynthesis, where sunlight and water are key ingredients. However, deciphering past rates of primary production has been a challenging task. In lieu of a time machine, scientists like myself rely on clues left in ancient sedimentary rocks to reconstruct past environments.

In the case of primary production, the isotopic composition of oxygen in the form of sulfate in ancient salt deposits allows for such estimates to be made.

In our study, we compiled all previous estimates of ancient primary production derived through the method above, as well as many others. The outcome of this productivity census was that we were able to estimate that 100 quintillion (or 100 billion billion) tons of carbon have been through primary production since the origin of life.

Big numbers like this are difficult to picture; 100 quintillion tons of carbon is about 100 times the amount of carbon contained within the Earth, a pretty impressive feat for Earth’s primary producers.

Primary Production

Today, primary production is mainly achieved by plants on land and marine micro-organisms such as algae and cyanobacteria. In the past, the proportion of these major contributors was very different; in the case of Earth’s earliest history, primary production was mainly conducted by an entirely different group of organisms that doesn’t rely on oxygenic photosynthesis to stay alive.

A combination of different techniques has been able to give a sense of when different primary producers were most active in Earth’s past. Examples of such techniques include identifying the oldest forests or using molecular fossils called biomarkers.

In our study, we used this information to explore what organisms have contributed the most to Earth’s historical primary production. We found that despite being late on the scene, land plants have likely contributed the most. However, it is also very plausible that cyanobacteria contributed the most.

green hair-like strands of bacteria
Filamentous cyanobacteria from a tidal pond at Little Sippewissett salt marsh, Falmouth, Mass. Image Credit: Argonne National Laboratory, CC BY-NC-SA

Total Life

By determining how much primary production has ever occurred, and by identifying what organisms have been responsible for it, we were also able to estimate how much life has ever been on Earth.

Today, one may be able to approximate how many humans exist based on how much food is consumed. Similarly, we were able to calibrate a ratio of primary production to how many cells exist in the modern environment.

Despite the large variability in the number of cells per organism and the sizes of different cells, such complications become secondary since single-celled microbes dominate global cell populations. In the end, we were able to estimate that about 10^30 (a nonillion) cells exist today, and between 10^39 (a duodecillion) and 10^40 cells have ever existed on Earth.
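A quick consistency check connects the two headline numbers. Treating a typical single-celled microbe as holding on the order of 10^-13 grams of carbon (an illustrative figure, not one taken from the paper), the cumulative carbon and cumulative cell estimates line up:

    # Tons of carbon per microbial cell, assuming ~1e-13 g of carbon each
    carbon_per_cell = 1e-13 / 1e6          # grams -> tons: 1e-19 tons per cell

    cumulative_carbon = 1e20               # tons ever fixed by primary production
    cells_supported = cumulative_carbon / carbon_per_cell
    print(f"{cells_supported:.0e} cells")  # 1e39, the lower end of the estimate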

How Much Life Will Earth Ever Have?

Save for the ability to move Earth into the orbit of a younger star, the lifetime of Earth’s biosphere is limited. This morbid fact is a consequence of our star’s life cycle. Since its birth, the sun has slowly been getting brighter over the past four and a half billion years as hydrogen has been converted to helium in its core.

Far in the future, about two billion years from now, all of the biogeochemical fail-safes that keep Earth habitable will be pushed past their limits. First, land plants will die off, and then eventually the oceans will boil, and the Earth will return to a largely lifeless rocky planet as it was in its infancy.

But until then, how much life will Earth house over its entire habitable lifetime? Projecting our current levels of primary productivity forward, we estimated that about 10^40 cells will ever occupy the Earth.

a blue planet in space
A planetary system 100 light-years away in the constellation Dorado is home to the first Earth-size habitable-zone planet, discovered by NASA’s Transiting Exoplanet Survey Satellite. Image Credit: NASA Goddard Space Flight Center

Earth as an Exoplanet

Only a few decades ago, exoplanets (planets orbiting other stars) were just a hypothesis. Now we are able not only to detect them but to describe many aspects of thousands of far-off worlds around distant stars.

But how does Earth compare to these bodies? In our new study, we have taken a bird’s-eye view of life on Earth and put forward Earth as a benchmark for comparing other planets.

What I find truly interesting, however, is what could have happened in Earth’s past to produce a radically different trajectory and therefore a radically different amount of life that has been able to call Earth home. For example, what if oxygenic photosynthesis never took hold, or what if endosymbiosis never happened?

Answers to such questions are what will drive my laboratory at Carleton University over the coming years.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Mihály Köles / Unsplash 

AI Can Design Totally New Proteins From Scratch—It’s Time to Talk Biosecurity


Two decades ago, engineering designer proteins was a dream.

Now, thanks to AI, custom proteins are a dime a dozen. Made-to-order proteins often have specific shapes or components that give them abilities new to nature. From longer-lasting drugs and protein-based vaccines, to greener biofuels and plastic-eating proteins, the field is rapidly becoming a transformative technology.

Custom protein design depends on deep learning techniques. With large language models—the AI behind OpenAI’s blockbuster ChatGPT—dreaming up millions of structures beyond human imagination, the library of bioactive designer proteins is set to rapidly expand.

“It’s hugely empowering,” Dr. Neil King at the University of Washington recently told Nature. “Things that were impossible a year and a half ago—now you just do it.”

Yet with great power comes great responsibility. As newly designed proteins increasingly gain traction for use in medicine and bioengineering, scientists are now wondering: What happens if these technologies are used for nefarious purposes?

A recent essay in Science highlights the need for biosecurity for designer proteins. Similar to ongoing conversations about AI safety, the authors say it’s time to consider biosecurity risks and policies so custom proteins don’t go rogue.

The essay is penned by two experts in the field. One, Dr. David Baker, the director of the Institute for Protein Design at the University of Washington, led the development of RoseTTAFold—an algorithm that cracked the half-century-old problem of predicting a protein’s structure from its amino acid sequence alone. The other, Dr. George Church at Harvard Medical School, is a pioneer in genetic engineering and synthetic biology.

They suggest synthetic proteins need barcodes embedded into each new protein’s genetic sequence. If any of the designer proteins becomes a threat—say, potentially triggering a dangerous outbreak—its barcode would make it easy to trace back to its origin.

The system basically provides “an audit trail,” the duo write.

Worlds Collide

Designer proteins are inextricably tied to AI. So are potential biosecurity policies.

Over a decade ago, Baker’s lab used software to design and build a protein dubbed Top7. Proteins are made of building blocks called amino acids, each of which is encoded inside our DNA. Like beads on a string, amino acids are then twirled and wrinkled into specific 3D shapes, which often further mesh into sophisticated architectures that support the protein’s function.

Top7 couldn’t “talk” to natural cell components—it didn’t have any biological effects. But even then, the team concluded that designing new proteins makes it possible to explore “the large regions of the protein universe not yet observed in nature.”

Enter AI. Multiple strategies recently took off to design new proteins at supersonic speeds compared to traditional lab work.

One is structure-based AI similar to image-generating tools like DALL-E. Known as diffusion models, these systems are trained by adding noise to known protein structures and learning to remove it. They can then gradually generate realistic new structures compatible with biology.

Another strategy relies on large language models. Like ChatGPT, the algorithms rapidly find connections between protein “words” and distill these connections into a sort of biological grammar. The protein strands these models generate are likely to fold into structures the body can decipher. One example is ProtGPT2, which can engineer active proteins with shapes that could lead to new properties.
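ProtGPT2’s weights are openly shared, so sampling candidate sequences takes only a few lines with the Hugging Face transformers library. The sketch below follows the sampling settings posted on the model’s public page; treat the parameters as a starting point rather than a recipe.

    # Sample novel protein-like sequences from ProtGPT2 (a GPT-2-style
    # model trained on protein sequences instead of English text).
    from transformers import pipeline

    generator = pipeline("text-generation", model="nferruz/ProtGPT2")
    sequences = generator(
        "<|endoftext|>",          # empty prompt: generate de novo sequences
        max_length=100,
        do_sample=True,
        top_k=950,                # wide sampling over the amino acid "vocabulary"
        repetition_penalty=1.2,
        num_return_sequences=5,
        eos_token_id=0,
    )
    for s in sequences:
        print(s["generated_text"])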

Digital to Physical

These AI protein-design programs are raising alarm bells. Proteins are the building blocks of life—changes could dramatically alter how cells respond to drugs, viruses, or other pathogens.

Last year, governments around the world announced plans to oversee AI safety. The technology wasn’t positioned as a threat. Instead, the legislators cautiously fleshed out policies that ensure research follows privacy laws and bolsters the economy, public health, and national defense. Leading the charge, the European Union agreed on the AI Act to limit the technology in certain domains.

Synthetic proteins weren’t directly called out in the regulations. That’s great news for making designer proteins, which could be kneecapped by overly restrictive regulation, write Baker and Church. However, new AI legislation is in the works, with the United Nations’ advisory body on AI set to share guidelines on international regulation in the middle of this year.

Because the AI systems used to make designer proteins are highly specialized, they may still fly under regulatory radars—if the field unites in a global effort to self-regulate.

At the 2023 AI Safety Summit, which did discuss AI-enabled protein design, experts agreed documenting each new protein’s underlying DNA is key. Like their natural counterparts, designer proteins are also built from genetic code. Logging all synthetic DNA sequences in a database could make it easier to spot red flags for potentially harmful designs—for example, if a new protein has structures similar to known pathogenic ones.

Biosecurity doesn’t squash data sharing. Collaboration is critical for science, but the authors acknowledge it’s still necessary to protect trade secrets. And like in AI, some designer proteins may be potentially useful but too dangerous to share openly.

One way around this conundrum is to add safety measures directly to the synthesis process itself. For example, the authors suggest adding a barcode—made of random DNA letters—to each new genetic sequence. A synthesis machine scans each submitted DNA sequence and begins building the protein only when it finds a valid code.

In other words, the original designers of the protein can choose who to share the synthesis with—or whether to share it at all—while still being able to describe their results in publications.

A barcode strategy that ties making new proteins to a synthesis machine would also amp up security and deter bad actors, making it difficult to recreate potentially dangerous products.
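A toy sketch makes the proposed gate concrete. Nothing here reflects a real synthesis machine’s software; the function names and registry are hypothetical, invented purely for illustration.

    import secrets

    DNA_LETTERS = "ACGT"

    def make_barcode(length: int = 20) -> str:
        """Generate a random DNA barcode to embed in a designed sequence."""
        return "".join(secrets.choice(DNA_LETTERS) for _ in range(length))

    def authorize_synthesis(sequence: str, registry: set) -> bool:
        """Build only if the submitted sequence carries a registered barcode."""
        return any(code in sequence for code in registry)

    barcode = make_barcode()
    registry = {barcode}                       # the designers' audit trail
    design = "ATGGCT" + barcode + "TGATAA"     # barcode embedded in the gene

    print(authorize_synthesis(design, registry))           # True: build proceeds
    print(authorize_synthesis("ATGGCTTGATAA", registry))   # False: no barcode, no build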

“If a new biological threat emerges anywhere in the world, the associated DNA sequences could be traced to their origins,” the authors wrote.

It will be a tough road. Designer protein safety will depend on global support from scientists, research institutions, and governments, the authors write. However, there have been previous successes. Global groups have established safety and sharing guidelines in other controversial fields, such as stem cell research, genetic engineering, brain implants, and AI. Although not always followed—CRISPR babies are a notorious example—for the most part these international guidelines have helped move cutting-edge research forward in a safe and equitable manner.

To Baker and Church, open discussions about biosecurity will not slow the field. Rather, they can rally different sectors and engage the public so custom protein design can further thrive.

Image Credit: University of Washington

This Week’s Awesome Tech Stories From Around the Web (Through January 27)


ARTIFICIAL INTELLIGENCE

New Theory Suggests Chatbots Can Understand Text
Anil Ananthaswamy | Quanta
“Artificial intelligence seems more powerful than ever, with chatbots like Bard and ChatGPT capable of producing uncannily humanlike text. But for all their talents, these bots still leave researchers wondering: Do such models actually understand what they are saying? ‘Clearly, some people believe they do,’ said the AI pioneer Geoff Hinton in a recent conversation with Andrew Ng, ‘and some people believe they are just stochastic parrots.’ …New research may have intimations of an answer.”

FUTURE

Etching AI Controls Into Silicon Could Keep Doomsday at Bay
Will Knight | Wired
“Even the cleverest, most cunning artificial intelligence algorithm will presumably have to obey the laws of silicon. Its capabilities will be constrained by the hardware that it’s running on. Some researchers are exploring ways to exploit that connection to limit the potential of AI systems to cause harm. The idea is to encode rules governing the training and deployment of advanced algorithms directly into the computer chips needed to run them.”

TECH

Google’s Hugging Face Deal Puts ‘Supercomputer’ Power Behind Open-Source AI
Emilia David | The Verge
“Google Cloud’s new partnership with AI model repository Hugging Face is letting developers build, train, and deploy AI models without needing to pay for a Google Cloud subscription. Now, outside developers using Hugging Face’s platform will have ‘cost-effective’ access to Google’s tensor processing units (TPU) and GPU supercomputers, which will include thousands of Nvidia’s in-demand and export-restricted H100s.”

INNOVATION

How Microsoft Catapulted to $3 Trillion on the Back of AI
Tom Dotan | The Wall Street Journal
“Microsoft on Thursday became the second company ever to end the trading day valued at more than $3 trillion, a milestone reflecting investor optimism that one of the oldest tech companies is leading an artificial-intelligence revolution. …One of [CEO Satya Nadella’s] biggest gambles in recent years has been partnering with an untested nonprofit startup—generative AI pioneer OpenAI—and quickly folding its technology into Microsoft’s bestselling products. That move made Microsoft a de facto leader in a burgeoning AI field many believe will retool the tech industry.”

SPACE

Hell Yeah, We’re Getting a Space-Based Gravitational Wave Observatory
Isaac Schultz | Gizmodo
“To put an interferometer in space would vastly reduce the noise encountered by ground-based instruments, and lengthening the arms of the observatory would allow scientists to collect data that is imperceptible on Earth. ‘Thanks to the huge distance traveled by the laser signals on LISA, and the superb stability of its instrumentation, we will probe gravitational waves of lower frequencies than is possible on Earth, uncovering events of a different scale, all the way back to the dawn of time,’ said Nora Lützgendorf, the lead project scientist for LISA, in an ESA release.”

ROBOTICS

General Purpose Humanoid Robots? Bill Gates Is a Believer
Brian Heater | TechCrunch
“The robotics industry loves a good, healthy debate. Of late, one of the most intense ones centers around humanoid robots. It’s been a big topic for decades, of course, but the recent proliferation of startups like 1X and Figure—along with projects from more established companies like Tesla—have put humanoids back in the spotlight. Humanoid robots can, however, now claim a big tech name among their ranks. Bill Gates this week issued a list of ‘cutting-edge robotics startups and labs that I’m excited about.’ Among the names are three companies focused on developing humanoids.”

CRYPTOCURRENCY

Is Cryptocurrency Like Stocks and Bonds? Courts Move Closer to an Answer.
Matthew Goldstein and David Yaffe-Bellany | The New York Times
“How the courts rule could determine whether the crypto industry can burrow deeper into the American financial system. If the SEC prevails, crypto supporters say, it will stifle the growth of a new and dynamic technology, pushing start-ups to move offshore. The government has countered that robust oversight is necessary to end the rampant fraud that cost investors billions of dollars when the crypto market imploded in 2022.”

ENERGY

Solid-State EV Batteries Now Face ‘Production Hell’
Charles J. Murray | IEEE Spectrum
“Producing battery packs that yield 800+ kilometers remains rough going. …’Solid-state is a great technology,’ noted Bob Galyen, owner of Galyen Energy LLC and former chief technology officer for the Chinese battery giant, Contemporary Amperex Technology Ltd (CATL). ‘But it’s going to be just like lithium-ion was in terms of the length of time it will take to hit the market. And lithium-ion took a long time to get there.'”

TECH

I Love My GPT, But I Can’t Find a Use for Anybody Else’s
Emilia David | The Verge
“Though I’ve come to depend on my GPT, it’s the only one I use. It’s not fully integrated into my workflow either, because GPTs live in the ChatGPT Plus tab on my browser instead of inside a program like Google Docs. And honestly, if I wasn’t already paying for ChatGPT Plus, I’d be happy to keep Googling alternative terms. I don’t think I’ll be giving up ‘What’s Another Word For’ any time soon, but unless another hot GPT idea strikes me, I’m still not sure what they’re good for—at least in my job.”

Image Credit: Jonny Caspari / Unsplash

These Engineered Muscle Cells Could Slash the Cost of Lab-Grown Meat


Lab-grown meat could present a kinder and potentially greener alternative to current livestock farming. New specially engineered meat cells could finally bring costs down to a practical level.

While the idea of growing meat in the lab rather than the field would have sounded like sci-fi a decade ago, today there are a bevy of startups vying to bring so-called “cultivated meat” to everyday shops and restaurants.

The big sell is that the technology will allow us to enjoy meat without having to worry about the murky ethics of industrial-scale animal agriculture. There are also more contentious claims that producing meat this way will significantly reduce its impact on the environment.

Both points are likely to appeal to increasingly conscientious consumers. The kicker is that producing meat in a lab currently costs far more than conventional farming, which means that so far these products have only appeared in high-end restaurants.

New research from Tufts University could help change that. The researchers have engineered cow muscle cells to produce one of cultivated meat’s most expensive ingredients by themselves, potentially slashing production costs.

“Products have already been awarded regulatory approval for consumption in the US and globally, although costs and availability remain limiting,” says David Kaplan, from Tufts, who led the research. “I think advances like this will bring us much closer to seeing affordable cultivated meat in our local supermarkets within the next few years.”

The ingredient in question is known as growth factor—a kind of signaling protein that stimulates cells to grow and differentiate into other cell types. When growing cells outside the body these proteins need to be introduced artificially to the medium the culture is growing in to get the cells to proliferate.

But growth factors are extremely expensive and must be sourced from specialist industrial suppliers that normally cater to researchers and the drug industry. The authors say these ingredients can account for as much as 90 percent of the cost of cultured meat production.

So, they decided to genetically engineer cow muscle cells—the key ingredient in cultivated beef—to produce growth factors themselves, removing the need to include them in the growth media. In a paper in Cell Reports Sustainability, they describe how they got the cells to produce fibroblast growth factor (FGF), one of the most critical of these signaling proteins and a major contributor to the cost of the cultured meat medium used in the study.

Crucially, the researchers did this by editing native genes and dialing their expression up and down, rather than introducing foreign genetic material. That will be important for ultimate regulatory approval, says Andrew Stout, who helped lead the project, because rules are more stringent when genes are transplanted from one species to another.

The approach will still require some work before it’s ready for commercial use, however. The researchers report the engineered cells did grow in the absence of external FGF but at a slower rate. They expect to overcome this by tweaking the timing or levels of FGF production.

And although it’s one of the costliest, FGF isn’t the only growth factor required for lab-grown meat. Whether similar approaches could also cut other growth factors out of the ingredient list remains to be seen.

These products face barriers that go beyond cost as well. Most products so far have focused on things like burgers and chicken nuggets that are made of ground meat. That’s because the complex distribution of tissues like fat, bone, and sinew that you might find in a steak or a fillet of fish are incredibly tough to recreate in the lab.

But if approaches like this one can start to bring the cost of lab-grown meat down to competitive levels, consumers may be willing to trade a little bit of taste and texture for a clear conscience.

Image Credit: Screenroad / Unsplash

A Child Born Deaf Can Hear for the First Time Thanks to Pioneering Gene Therapy


When Aissam Dam had the strange device connected to his ear, he had no idea it was going to change his life.

An 11-year-old boy, Aissam was born deaf due to a single gene mutation. In October 2023, he became the first person in the US to receive a gene therapy that added a healthy version of the mutated gene into his inner ear. Within four weeks, he began to hear sounds.

Four months later, his perception of the world had broadened beyond imagination. For the first time, he heard the buzzing of traffic, learned the timbre of his father’s voice, and wondered at the snipping sound scissors made during a haircut.

Aissam is participating in an ongoing clinical trial testing a one-time gene therapy to restore hearing in kids like him. Due to a mutation in a gene called otoferlin, the children are born deaf and often require hearing aids from birth. The trial is a collaboration between the Children’s Hospital of Philadelphia and Akouos, a subsidiary of the pharmaceutical giant Eli Lilly.

“Gene therapy for hearing loss is something physicians and scientists around the world have been working toward for over 20 years,” said Dr. John Germiller at the Children’s Hospital of Philadelphia, who administered the drug to Aissam, in a press release. “These initial results show that it may restore hearing better than many thought possible.”

While previously tested in mice and non-human primates, the team didn’t know if the therapy would work for Aissam. Even if it did work, they were unsure how it would affect the life of a deaf young adult—essentially introducing him to an entirely new sensory world.

They didn’t have to worry. “There’s no sound I don’t like…they’re all good,” Aissam said to the New York Times.

A Broken Bridge

Hearing isn’t just about picking up sounds; it’s also about translating sound waves into electrical signals our brains can perceive and understand.

At the core of this process is the cochlea, a snail-like structure buried deep inside the inner ear that translates sound waves into electrical signals that are then sent to the brain.

The cochlea is a bit like a roll-up piano keyboard. The structure is lined with over 3,500 wiggly, finger-shaped hairs. Like individual piano keys, each hair cell is tuned to a note. The cells respond when they detect their preferred sound frequency, sending electrical pulses to the auditory parts of the brain. This allows us to perceive sounds, conversations, and music.

For Aissam and over 200,000 people worldwide, these hair cells are unable to communicate with the brain from birth due to a mutation in a gene called otoferlin. Otoferlin is a bridge. It enables the hair cells lining the cochlea to send chemical messages to nearby nerve fibers, activating signals to the brain. The mutated gene cuts the phone line, leading to deafness.

Hearing Helper

In the clinical trial, scientists hoped to restore the connection between inner-ear cells and the brain using a gene therapy to add a dose of otoferlin directly into the inner ear.

This was not straightforward. Otoferlin is a very large gene, making it difficult to directly inject into the body. In the new trial, the team cleverly broke the gene into two chunks. Each chunk was inserted into a safe viral carrier and shuttled into the hair cells. Once inside the body, the inner-ear cells stitched the two parts back into a working otoferlin gene.

Developing therapies for the inner ear is delicate work. The organ uses a matrix of tissues and liquids to detect different notes and tones. Tweaks can easily alter our perception of sound.

Here, the team carefully engineered a device to inject the therapy into a small liquid-filled nook in the cochlea. From there, the liquid gene therapy could float down the entire length of the cochlea, bathing every inner hair cell in the treatment.

In mice, the treatment amped up otoferlin levels. In a month, the critters were able to hear with minimal side effects. Another test in non-human primates found similar effects. The therapy slightly altered liver and spleen functions, but its main effects were in the inner ear.

A major hiccup in treating the inner ear is pressure. You’ve likely experienced this—a quick ascent on a flight or a deep dive into the ocean makes the ears pop. Injecting liquids into the inner ear can similarly disrupt things. The team carefully scaled the dose of the treatment in mice and non-human primates and made a tiny vent so the therapy could reach the whole cochlea.

Assessing non-human primates a month after treatment, the team didn’t detect signs of the gene therapy in their blood, saliva, or nasal swab samples—confirming the treatment was tailored to the inner ear as hoped and, potentially, had minimal side effects.

A Path Forward

The trial is one of five gene therapy studies tackling inherited deafness.

In October last year, a team in China gave five children with otoferlin genetic defects a healthy version of the gene. In a few months, a six-year-old girl, Yiyi, was able to hear sounds at roughly the volume of a whisper, according to MIT Technology Review.

The gene therapy isn’t for everyone with hearing loss. Otoferlin mutations make up about three percent of cases of inherited deafness. Most children with the mutation don’t completely lose their hearing and are given cochlear implants to compensate at an early age. It’s still unclear if the treatment also helps improve their hearing. However, a similar strategy could potentially be used for others with genetic hearing disorders.

For Yiyi and Aissam, who never had cochlear implants, the gene therapy is a life-changer. Sounds were terrifying at first. Yiyi heard traffic noises as she slept at night for the first time, saying it’s “too noisy.” Aissam is still learning to incorporate the new experience into his everyday life—a bit like learning a new superpower. His favorite sounds? “People,” he said through sign language.

Image Credit: tung256 / Pixabay

Dreams May Have Played a Crucial Role in Our Evolutionary Success as a Species


Have you ever woken from a dream, emotionally laden with anxiety, fear, or a sense of unpreparedness? Typically, these kinds of dreams are associated with content like losing one’s voice, teeth falling out, or being chased by a threatening being.

But one question I’ve always been interested in is whether or not these kinds of dreams are experienced globally across many cultures. And if some features of dreaming are universal, could they have enhanced the likelihood of our ancestors surviving the evolutionary game of life?

My research focuses on the distinctive characteristics that make humans the most successful species on Earth. I’ve explored the question of human uniqueness by comparing Homo sapiens with various animals, including chimpanzees, gorillas, orangutans, lemurs, wolves, and dogs. Recently, I’ve been part of a team of collaborators that has focused our energies on working with small-scale societies known as hunter-gatherers.

We wanted to explore how the content and emotional function of dreams might vary across different cultural contexts. By comparing dreams from forager communities in Africa to those from Western societies, we wanted to understand how cultural and environmental factors shape the way people dream.

Comparative Dream Research

As part of this research, published in Nature Scientific Reports, my colleagues and I worked closely for several months with the BaYaka in the Democratic Republic of Congo and the Hadza in Tanzania to record their dreams. For Western dreamers, we recorded dream journals and detailed dream accounts, collected between 2014 and 2022, from people living in Switzerland, Belgium, and Canada.

The Hadza of Tanzania and the BaYaka of Congo fill a crucial, underexplored gap for dream research due to their distinct lifestyle. Their egalitarian culture, emphasizing equality and cooperation, is vital for survival, social cohesion, and well-being. These forager communities rely heavily on supportive relationships and communal sharing of resources.

Higher mortality rates due to disease, intergroup conflict, and challenging physical environments (without the kind of social safety nets common to post-industrial societies in the West) mean these communities rely on face-to-face relationships for survival in a way that is a distinct feature of forager life.

Dreaming Across Cultures

While studying these dreams, we began to notice a common theme: dreams play out very differently across socio-cultural environments. We used a new software tool to map dream content, one that connects important psychosocial constructs and theories with words, phrases, and other linguistic constructions. That gave us an understanding of the kinds of dreams people were having, and we could model these statistically to test scientific hypotheses about the nature of dreams.
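To give a flavor of the approach (and only a flavor; the team’s tool and statistical models are far richer than this), dictionary-based content coding can be sketched in a few lines. The word lists here are invented for illustration.

    # Toy dictionary-based dream coding: count threat and social words.
    THREAT_WORDS = {"fell", "chased", "dead", "scream", "snake", "well"}
    SOCIAL_WORDS = {"friend", "friends", "helped", "mother", "together"}

    def code_dream(text: str) -> dict:
        words = set(text.lower().replace(".", " ").replace(",", " ").split())
        return {
            "threat": len(words & THREAT_WORDS),
            "social": len(words & SOCIAL_WORDS),
        }

    dream = "I fell into a well. I was with two others and one of my friends helped me get out."
    print(code_dream(dream))   # {'threat': 2, 'social': 2}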

The dreams of the BaYaka and Hadza were rich in community-oriented content, reflecting the strong social bonds inherent in their societies. This was in stark contrast to the themes prevalent in dreams from Western societies, where negative emotions and anxiety were more common.

Interestingly, while dreams from these forager communities often began with threats reflecting the real dangers they face daily, they frequently concluded with resolutions involving social support. This pattern suggests that dreams might play a crucial role in emotional regulation, transforming threats into manageable situations and reducing anxiety.

Here is an example of a Hadza dream laden with emotionally threatening content:

“I dreamt I fell into a well that is near the Hukumako area by the Dtoga people. I was with two others and one of my friends helped me get out of the well.”

Notice that the dream’s resolution incorporated a social solution to the threat. Now, contrast this with the nightmare-disorder-diagnosed dreamers from Europe. They had scarier, open-ended narratives with less positive resolutions. Specifically, we found they had higher levels of dream content with negative emotions compared to the “normal” controls. Conversely, the Hadza exhibited significantly fewer negative emotions in their dreams. These are the kinds of nightmares reported:

“My mom would call me on my phone and ask me to put it on speakerphone so my sister and cousin could hear. Crying she announced to us that my little brother was dead. I was screaming in sadness and crying in pain.”

“I was with my boyfriend, our relationship was perfect and I felt completely fulfilled. Then he decided to abandon me, which awoke in me a deep feeling of despair and anguish.”

The Functional Role of Dreams

Dreams are wonderfully varied. But what if one of the keys to humanity’s success as a species rests in our dreams? What if something was happening in our dreams that improved the survival and reproductive efforts of our Paleolithic ancestors?

A curious note from my comparative work: of all the primates alive, humans sleep the least, yet we have the most REM. Why was REM—the state most often associated with dreams—so protected while evolution was whittling away our sleep? Perhaps something embedded in dreaming itself was prophylactic for our species.

Our research supports previous notions that dreams are not just random firings of a sleeping brain but may play a functional role in our emotional well-being and social cognition. They reflect the challenges and values of our waking life, offering insights into how we process emotions and threats. In forager societies, dreams often conclude with resolutions involving social support, suggesting that dreams might serve as a psychological mechanism for reinforcing social bonds and community values.

Why Dream?

The ultimate purpose of dreaming is still a subject of ongoing research and debate. Yet these themes seem to harbor within them universals that hint at some crucial survival function.

Some theories suggest that dreaming acts like a kind of virtual reality that serves to simulate threatening or social situations, helping individuals prepare for real-life challenges.

If this is indeed the case, then it’s possible that the dreams of our ancestors, who roamed the world in the distant Paleolithic era, played a crucial role in enhancing the cooperation that contributed to their survival.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Johannes Plenio / Unsplash

Scientists Coax Bacteria Into Making Exotic Proteins Not Found in Nature


Nature has a set recipe for making proteins.

Triplets of DNA letters translate into 20 molecules called amino acids. These basic building blocks are then variously strung together into the dizzying array of proteins that makes up all living things. Proteins form body tissues, revitalize them when damaged, and direct the intricate processes keeping our bodies’ inner workings running like well-oiled machines.

Studying the structure and activity of proteins can shed light on disease, propel drug development, and help us understand complex biological processes, such as those at work in the brain or in aging. Proteins are becoming essential in non-biological contexts too—for example, in the manufacturing of climate-friendly biofuels.

Yet with only 20 molecular building blocks, evolution essentially put a limit on what proteins can do. So, what if we could expand nature’s vocabulary?

By engineering new amino acids not seen in nature and incorporating them into living cells, exotic proteins could do more. For example, adding synthetic amino acids to protein-based drugs—such as those for immunotherapy—could slightly tweak their structure so they last longer in the body and are more effective. Novel proteins also open the door to new chemical reactions, such as ones that chew up plastics, and to more easily degradable materials with different properties.

But there’s a problem. Exotic amino acids aren’t always compatible with a cell’s machinery.

A new study in Nature, led by synthetic biology expert Dr. Jason Chin at the Medical Research Council Laboratory of Molecular Biology in Cambridge, UK, brought the dream a bit closer. Using a newly developed molecular screen, they found and inserted four exotic amino acids into a protein inside bacteria cells. An industrial favorite for churning out insulin and other protein-based medications, the bacteria readily accepted the exotic building blocks as their own.

All the newly added components are different from the cell’s natural ones, meaning the additions didn’t interfere with the cell’s normal functions.

“It’s a big accomplishment to get these new categories of amino acids into proteins,” Dr. Chang Liu at the University of California, Irvine, who was not part of the study, told Science.

A Synthetic Deadlock

Adding exotic amino acids into a living thing is a nightmare.

Picture the cell as a city, with multiple “districts” performing their own functions. The nucleus, shaped like the pit of an apricot, houses our genetic blueprint recorded in DNA. Outside the nucleus, protein-making factories called ribosomes churn away. Meanwhile, RNA messengers buzz between the two like high-speed trains shuttling genetic information to be made into proteins.

Like DNA, RNA has four molecular letters. Each three-letter combination forms a “word” encoding an amino acid. The ribosome reads each word and summons the associated amino acid to the factory using transfer RNA (tRNA) molecules to grab onto them.

The tRNA molecules are tailormade to pick up particular amino acids with a kind of highly specific protein “glue.” Once shuttled into the ribosome, the amino acid is plucked off its carrier molecule and stitched into an amino acid string that curls into intricate protein shapes.

Clearly, evolution has established a sophisticated system for the manufacture of proteins. Not surprisingly, adding synthetic components isn’t straightforward.

Back in the 1980s, scientists found a way to attach synthetic amino acids to a carrier inside a test tube. More recently, they’ve incorporated unnatural amino acids into proteins inside bacteria cells by hijacking their own inner factories without affecting normal cell function.

Beyond bacteria, Chin and colleagues previously hacked tRNA and its corresponding “glue”—called tRNA synthetase—to add an exotic amino acid into proteins in mouse brain cells.

Rewiring the cell’s protein building machinery, without breaking it, takes a delicate balance. The cell needs modified tRNA carriers to grab new amino acids and drag them to the ribosome. The ribosome then must recognize the synthetic amino acid as its own and stitch it into a functional protein. If either step stumbles, the engineered biological system fails.

Expanding the Genetic Code

The new study focused on the first step—engineering better carriers for exotic amino acids.

The team first mutated genes for the “glue” protein and generated millions of potential alternative versions. Each of these variants could potentially grab onto exotic building blocks.

To narrow the field, they turned to tRNA molecules, the carriers of amino acids. Each tRNA carrier was tagged with a bit of genetic code that attached to mutated “glue” proteins like a fishing hook. The effort found eight promising pairs out of millions of potential structures. Another screen zeroed in on a group of “glue” proteins that could grab onto multiple types of artificial protein building blocks—including those highly different from natural ones.

The team then inserted genes encoding these proteins into Escherichia coli bacteria cells, a favorite for testing synthetic biology recipes.

Overall, eight “glue” proteins successfully loaded exotic amino acids into the bacteria’s natural protein-making machinery. Many of the synthetic building blocks had strange backbone structures not generally compatible with natural ribosomes. But with the help of engineered tRNA and “glue” proteins, the ribosomes incorporated four exotic amino acids into new proteins.

The results “expand the chemical scope of the genetic code” for making new types of materials, the team explained in their paper.

A Whole New World

Scientists have already found hundreds of exotic amino acids. AI models such as AlphaFold or RoseTTAFold, and their variations, are likely to spawn even more. Finding carriers and “glue” proteins that match has always been a roadblock.

The new study establishes a method to speed up the search for new designer proteins with unusual properties. For now, the method can only incorporate four synthetic amino acids. But scientists are already envisioning uses for them.

Protein drugs made from these exotic amino acids are shaped differently than their natural counterparts, protecting them from decay inside the body. This means they last longer, lessening the need for multiple doses. A similar system could churn out new materials such as biodegradable plastics, which, like proteins, rely on stitching individual components together.

For now, the technology relies on the ribosome’s tolerance of exotic amino acids—which can be unpredictable. Next, the team wants to modify the ribosome itself to better tolerate strange amino acids and their carriers. They’re also looking to create protein-like materials made completely of synthetic amino acids, which could augment the function of living tissues.

“If you could encode the expanded set of building blocks in the same way that we can proteins, then we could turn cells into living factories for the encoded synthesis of polymers for everything from new drugs to materials,” said Chin in an earlier interview. “It’s a super-exciting field.”

Image Credit: National Institute of Allergy and Infectious Diseases, National Institutes of Health

IMF Says AI Will Upend Jobs and Boost Inequality. MIT CSAIL Says Not So Fast.


The impact that AI could have on the economy is a hot topic following rapid advances in the technology. But two recent reports present conflicting pictures of what this could mean for jobs.

Ever since a landmark 2013 study from Oxford University researchers predicted that 47 percent of US jobs were at risk of computerization, the prospect that rapidly improving AI could cause widespread unemployment has been front and center in debates around the technology.

Reports forecasting which tasks, which professions, and which countries are most at risk have been a dime a dozen. But two recent studies from prominent institutions that reach very different conclusions are worth noting.

Last week, researchers at the International Monetary Fund suggested that as many as 40 percent of jobs worldwide could be impacted by AI, and the technology will most likely worsen inequality. But today, a study from MIT CSAIL noted that just because AI can do a job doesn’t mean it makes economic sense, and therefore, the rollout is likely to be slower than many expect.

The IMF analysis follows a similar approach to many previous studies by examining the “AI exposure” of various jobs. This involves breaking jobs down into a bundle of tasks and assessing which ones could potentially be replaced by AI. The study goes a step further though, considering which jobs are likely to be shielded from AI’s effects. For instance, many of a judge’s tasks are likely to be automatable, but society is unlikely to be comfortable delegating this kind of job to AI.
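In spirit, the exposure bookkeeping looks something like the toy calculation below. Everything here, from the task list to the shielding flags, is invented for illustration; the IMF’s actual index is far more granular.

    # Stylized "AI exposure": share of a job's tasks that are automatable
    # and that society would actually accept delegating to AI.
    def exposure(tasks: list) -> float:
        exposed = [t for t in tasks if t["automatable"] and not t["shielded"]]
        return len(exposed) / len(tasks)

    judge_tasks = [
        {"task": "summarize case law",  "automatable": True,  "shielded": False},
        {"task": "draft rulings",       "automatable": True,  "shielded": True},
        {"task": "sentence defendants", "automatable": True,  "shielded": True},
        {"task": "run the courtroom",   "automatable": False, "shielded": False},
    ]
    print(f"{exposure(judge_tasks):.0%}")   # 25% once shielding is considered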

The study found that roughly 40 percent of jobs globally are exposed to AI. But the authors predict that advanced economies could see an even greater impact, with nearly 60 percent of jobs being upended by the technology. While around half of affected jobs are likely to see AI enhance the work of humans, the other half could see AI replacing tasks, leading to lower wages and reduced hiring.

In emerging markets and low-income countries, the figures are 40 percent and 26 percent, respectively. But while that could protect them from some of the destabilizing effects on the job market, it also means these economies are less able to reap the benefits of AI, potentially leading to increasing inequality at a global scale.

Similar dynamics are likely to play out within countries as well, according to the analysis, with some able to harness AI to boost their productivity and wages while others lose out. In particular, the researchers suggest that older workers are likely to struggle to adapt to the new AI-powered economy.

While the report provides a mixture of positive and negative news, in most of the scenarios considered AI seems likely to worsen inequality, the authors say. This means that policymakers need to start planning now for the potential impact, including by beefing up social safety nets and retraining programs.

The study from MIT CSAIL paints a different picture though. The authors take issue with the standard approach of measuring AI exposure, because they say it doesn’t take account of the economic or technical feasibility of replacing tasks carried out by humans with AI.

They point to the hypothetical example of a bakery considering whether to invest in computer vision technology to check ingredients for quantity and spoilage. While technically feasible, this task accounts for only roughly six percent of a baker’s duties. In a small bakery with five bakers earning a typical salary of $48,000, this could potentially save the company $14,000 per year, clearly far less than the cost of developing and deploying the technology.
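The arithmetic is easy to reproduce. Here is a minimal sketch using the study’s example figures, plus a hypothetical system cost for comparison:

    # The bakery example, spelled out.
    bakers = 5
    salary = 48_000          # typical annual salary per baker in the example
    task_share = 0.06        # checking ingredients: ~6% of a baker's duties

    annual_savings = bakers * salary * task_share
    print(annual_savings)    # 14,400 -- roughly the $14,000 figure cited

    # Automating only pays if the vision system's yearly cost is lower.
    hypothetical_system_cost = 50_000    # invented figure for illustration
    print(annual_savings > hypothetical_system_cost)   # False: don't automate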

That prompted them to take a more economically grounded approach to assessing AI’s potential impact on the job market. First, they carried out surveys with workers to understand what performance would be required of an AI system. They then modeled the cost of building a system that could live up to those metrics, before using this to work out whether automation would be attractive in that scenario.

They focused on computer vision, as cost models are more developed for this branch of AI. They found that the large upfront cost of deploying AI meant that only 23 percent of work supposedly “exposed” to AI would actually make sense to automate. While that’s not insignificant, they say it would translate to a much slower rollout of the technology than others have predicted, suggesting that job displacement will be gradual and easier to deal with.

Obviously, most of the focus these days is on the job-destroying potential of large language models rather than computer vision systems. But despite their more general nature, the researchers say these models will still need to be fine-tuned for specific jobs (at some expense), so they expect the economics to be comparable.

Ultimately, it’s hard to say yet who is right. But it seems prudent to prepare for the worst while simultaneously trying to better understand what the true impact of this disruptive technology could be.

Image Credit: Mohamed Nohassi / Unsplash

This Week’s Awesome Tech Stories From Around the Web (Through January 20)


ARTIFICIAL INTELLIGENCE

Mark Zuckerberg’s New Goal Is Creating Artificial General Intelligence
Alex Heath | The Verge
“Fueling the generative AI craze is a belief that the tech industry is on a path to achieving superhuman, god-like intelligence. OpenAI’s stated mission is to create this artificial general intelligence, or AGI. Demis Hassabis, the leader of Google’s AI efforts, has the same goal. Now, Meta CEO Mark Zuckerberg is entering the race.”

ROBOTICS

Why Everyone’s Excited About Household Robots Again
Melissa Heikkilä | MIT Technology Review
“Robotics is at an inflection point, says Chelsea Finn, an assistant professor at Stanford University, who was an advisor for the [Mobile ALOHA] project. In the past, researchers have been constrained by the amount of data they can train robots on. Now there is a lot more data available, and work like Mobile ALOHA shows that with neural networks and more data, robots can learn complex tasks fairly quickly and easily, she says.”

ENERGY

Global Emissions Could Peak Sooner Than You Think
Hannah Ritchie | Wired
“Every November, the Global Carbon Project publishes the year’s global CO2 emissions. It’s never good news. At a time when the world needs to be reducing emissions, the numbers continue to climb. However, while emissions have been moving in the wrong direction, many of the underpinning economic forces that drive them have been going the right way. This could well be the year when these various forces push hard enough to finally tip the balance.”

BIOTECH

Meet ReTro, the First Cloned Rhesus Monkey to Reach Adulthood
Miryam Naddaf | Nature Magazine
“For the first time, a cloned rhesus monkey (Macaca mulatta) has lived into adulthood—surviving for more than two years so far. The feat, described [this week] in Nature Communications, marks the first successful cloning of the species. It was achieved using a slightly different approach from the conventional technique that was used to clone Dolly the sheep and other mammals, including long-tailed macaques (Macaca fascicularis), the first primates to be cloned.”

VIRTUAL REALITY

I Literally Spoke With Nvidia’s AI-Powered Video Game NPCs
Sean Hollister | The Verge
“What if you could just… speak…to video game characters? Ask your own questions, with your own voice, instead of picking from preset phrases? Last May, Nvidia and its partner Convai showed off a fairly unconvincing canned demo of such a system—but this January, I got to try a fully interactive version for myself at CES 2024. I walked away convinced we’ll inevitably see something like this in future games.”

FUTURE

What Does Ukraine’s Million-Drone Army Mean for the Future of War?
David Hambling | New Scientist
“Ukraine’s president Volodymyr Zelensky has promised that in 2024 the country’s military will have a million drones. His nation already deploys hundreds of thousands of small drones, but this is a major change—a transition to a military with more drones than soldiers. What does that mean for the future of war?”

SPACE

Japan Reaches the Moon, but the Fate of Its Precision Lander Is Uncertain
Jonathan O’Callaghan | Scientific American
“…JAXA officials revealed that although SLIM is in contact with mission controllers and accurately responding to commands, the lander’s solar panels are not generating power, and much of the gathered data onboard the spacecraft have yet to be returned to Earth. The mission is consequently operating on batteries, which have the capacity to power its operations for several hours. After SLIM drains its batteries, its operations will cease—but the spacecraft may reawaken if its solar power supply can be restored.”

TRANSPORTATION

NASA Unveils X-59 Plane to Test Supersonic Flight Over US Cities
Matthew Sparkes | New Scientist
“‘Concorde’s sound would have been like thunder right overhead or a balloon popping right next to you, whereas our sound will be more of a thump or a rumble, more consistent with distant thunder or your neighbor’s car door down the street being closed,’ says Bahm. ‘We think that it’ll more blend into the background of everyday life than the Concorde did.'”

AUTOMATION

NASA’s Robotic, Self-Assembling Structures Could Be the Next Phase of Space Construction
Devin Coldewey | TechCrunch
“Bad news if you want to move to the moon or Mars: housing is a little hard to come by. Fortunately, NASA (as always) is thinking ahead, and has just shown off a self-assembling robotic structure that might just be a crucial part of moving off-planet. …The basic idea of the self-building structure is in a clever synergy between the building material—cuboctahedral frames they call voxels—and the two types of robots that assemble them.”

Image Credit: ZENG YILI / Unsplash

Mac at 40: Apple’s Love Affair With User Experience Sparked a Tech Revolution


Technology innovation requires solving hard technical problems, right? Well, yes. And no. As the Apple Macintosh turns 40, the squishy concept of “user experience” that Apple prioritized in its 1984 flagship product has been clearly vindicated by the company’s blockbuster products since.

It turns out that designing for usability, efficiency, accessibility, elegance, and delight pays off. Apple’s market capitalization is now over $2.8 trillion, and its brand is every bit as associated with the term “design” as the best New York or Milan fashion houses are. Apple turned technology into fashion, and it did it through user experience.

It began with the Macintosh.

When Apple announced the Macintosh personal computer with a Super Bowl XVIII television ad on Jan. 22, 1984, it more resembled a movie premiere than a technology release. The commercial was, in fact, directed by filmmaker Ridley Scott. That’s because founder Steve Jobs knew he was not selling just computing power, storage or a desktop publishing solution. Rather, Jobs was selling a product for human beings to use, one to be taken into their homes and integrated into their lives.

Apple’s 1984 Super Bowl commercial is as iconic as the product it introduced.

This was not about computing anymore. IBM, Commodore, and Tandy did computers. As a human-computer interaction scholar, I believe that the first Macintosh was about humans feeling comfortable with a new extension of themselves, not as computer hobbyists but as everyday people. All that “computer stuff”—circuits and wires and separate motherboards and monitors—was neatly packaged and hidden away within one sleek integrated box.

You weren’t supposed to dig into that box, and you didn’t need to dig into that box—not with the Macintosh. The everyday user wouldn’t think about the contents of that box any more than they thought about the stitching in their clothes. Instead, they would focus on how that box made them feel.

Beyond the Mouse and Desktop Metaphor

As computers go, was the Macintosh innovative? Sure. But not for any particular computing breakthrough. The Macintosh was not the first computer to have a graphical user interface or employ the desktop metaphor: icons, files, folders, windows, and so on. The Macintosh was not the first personal computer meant for home, office, or educational use. It was not the first computer to use a mouse. It was not even the first computer from Apple to be or have any of these things. The Apple Lisa, released a year before, had them all.

It was not any one technical thing that the Macintosh did first. But the Macintosh brought together numerous advances that were about giving people an accessory—not for geeks or techno-hobbyists, but for home office moms and soccer dads and eighth grade students who used it to write documents, edit spreadsheets, make drawings, and play games. The Macintosh revolutionized the personal computing industry and everything that was to follow because of its emphasis on providing a satisfying, simplified user experience.

Where computers typically had complex input sequences in the form of typed commands (Unix, MS-DOS) or multi-button mice (Xerox STAR, Commodore 64), the Macintosh used a desktop metaphor in which the computer screen presented a representation of a physical desk surface. Users could click directly on files and folders on the desktop to open them. It also had a one-button mouse that allowed users to click, double click, and drag and drop icons without typing commands.

The Xerox Alto had first exhibited the concept of icons, invented in David Canfield Smith’s 1975 PhD dissertation. The 1981 Xerox Star and 1983 Apple Lisa had used desktop metaphors. But these systems had been slow to operate and still cumbersome in many aspects of their interaction design.

The Macintosh simplified the interaction techniques required to operate a computer and improved functioning to reasonable speeds. Complex keyboard commands and dedicated keys were replaced with point-and-click operations, pull-down menus, draggable windows and icons, and systemwide undo, cut, copy, and paste. Unlike with the Lisa, the Macintosh could run only one program at a time, but this simplified the user experience.

Apple co-founder Steve Jobs introduced the Macintosh in 1984.

The Macintosh also provided a user interface toolbox for application developers, enabling applications to have a standard look and feel by using common interface widgets such as buttons, menus, fonts, dialog boxes, and windows. With the Macintosh, the learning curve for users was flattened, allowing people to feel proficient in short order. Computing, like clothing, was now for everyone.

A Good Experience

Although I hesitate to use the cliches “natural” or “intuitive” when it comes to fabricated worlds on a screen—nobody is born knowing what a desktop window, pull-down menu, or double click is—the Macintosh was the first personal computer to make user experience the driver of technical achievement. It indeed was simple to operate, especially compared with command-line computers at the time.

Whereas prior systems prioritized technical capability, the Macintosh was intended for nonspecialist users—at work, school, or in the home—to experience a kind of out-of-the-box usability that today is the hallmark of not only most Apple products but an entire industry’s worth of consumer electronics, smart devices, and computers of every kind.

According to Market Growth Reports, companies devoted to providing user experience tools and services were worth $548.91 million in 2023 and are expected to reach $1.36 billion by 2029. User experience companies provide software and services to support usability testing, user research, voice-of-the-customer initiatives, and user interface design, among many other user experience activities.

Rarely today do consumer products succeed in the market based on functionality alone. Consumers expect a good user experience and will pay a premium for it. The Macintosh started that obsession and demonstrated its centrality.

It is ironic that the Macintosh technology being commemorated in January 2024 was never really about technology at all. It was always about people. This is inspiration for those looking to make the next technology breakthrough, and a warning to those who would dismiss the user experience as only of secondary concern in technological innovation.

Author disclosure statement: I have had two PhD students receive Apple PhD AI/ML Fellowships. This funding does not support me personally, but supports two of the PhD students that I have advised. They obtained these fellowships through competitive submissions to Apple based on an open solicitation.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: The original Macintosh computer may seem quaint today, but the way users interacted with it triggered a revolution 40 years ago. Mark Mathosian/Flickr, CC BY-NC-SA

Why What We Decide to Name New Technologies Is So Crucial


Back in 2017, my editor published an article titled “The Next Great Computer Interface Is Emerging—But It Doesn’t Have a Name Yet.” Seven years later, which may as well be a hundred in technology years, that headline hasn’t aged a day.

Last week, UploadVR broke the news that Apple won’t allow developers for their upcoming Vision Pro headset to refer to applications as VR, AR, MR, or XR. For the past decade, the industry has variously used terms like virtual reality (VR), augmented reality (AR), mixed reality (MR), and extended reality (XR) to describe technologies that include things like VR headsets. Apple, however, is making it clear that developers should refer to their apps as “spatial” or use the term “spatial computing.” They’re also asking developers not to refer to the device as a headset (whoops). Apple calls it a “spatial computer,” and VR mode is simply “fully immersive.”

It remains to be seen whether Apple will strictly enforce these rules, but the news sparked a colorful range of reactions from industry insiders. Some amusingly questioned what an app like VRChat, one of the most popular platforms in the industry with millions of monthly active users, should do. Others drew on the philosophy of language and branding to unpack Apple’s broader marketing strategy.

Those who have worked in this area are certainly aware of the longstanding absurdity of relying on an inconsistent patchwork of terms.

While no one company has successfully forced linguistic consensus yet, this is certainly not the first time a company has set out to define this category in the minds of consumers.

In 2017, as Google first started selling VR devices, they attempted to steer the industry toward the term “immersive computing.” Around the same time, Microsoft took aim at branding supremacy by fixating on the label “mixed reality.” And everyone will remember that Facebook changed the company’s name in an effort to define the broader industry as “the metaverse.”

The term spatial computing is certainly not an Apple invention. It’s thought to have been first introduced in the modern sense by MIT’s Simon Greenwold in his 2003 master’s thesis, and it has been in use for much of the past decade. Like many others, I’ve long found the term the most useful at capturing the main contribution of these technologies: they make use of three-dimensional space to develop interfaces that are more intuitive for our nervous systems.

A winding etymological journey for a technology is also not unique to computer interfaces. All new technologies cycle through ever-evolving labels that often start by relating them to familiar concepts. The word “movie” began life as “moving picture” to describe how a collection of still images seemed to “move,” like flipping through a picture book. In the early 1900s, the shorter slang term movie appeared in comic strips and quickly caught on with the public. Before the term “computer” referred to machines, it described a person whose job was to perform mathematical calculations. And the first automobiles were introduced to the public as “horseless carriages,” which should remind us of today’s use of the term “driverless car.”

Scholars of neuroscience, linguistics, and psychology will be especially familiar with the ways in which language—and the use of words—can impact how we relate to the world. When a person hears a word, a rich network of interconnected ideas, images, and associations is activated in their mind. In that sense, words can be thought of as bundles of concepts and a shortcut to making sense of the world.

The challenge with labeling emerging technologies is that they can be so new to our experience that our brains haven’t yet constructed a fixed set of bundled concepts to relate them to.

The word “car,” for example, brings to mind attributes like “four wheels,” “steering wheel,” and “machine used to move people around.” Over time, bundles of associations like these become rooted in the mind as permanent networks of relationships that help us quickly process our environment. But they can also create limitations, blinding us to disruptions when the environment changes. Referring to autonomous driving technology as “driverless cars” might lead someone to overlook a “driverless car” small enough to carry packages on a sidewalk. It’s the same technology, but not one most people would refer to as a car.

This might sound like idle contemplation on the role of semantics, but the words we use have real implications for the business of emerging technologies. In 1980, AT&T hired the consultancy McKinsey to predict how many people would be using mobile phones by the year 2000. The analysis estimated no more than 900,000 devices by the turn of the century, and on the strength of that advice, AT&T exited the hardware business. Twenty years later, the scale of the miss was clear: 900,000 phones were being sold every three days in North America alone.

While in no way defending their work, I’d argue that in some ways McKinsey wasn’t wrong. Both AT&T and McKinsey may have been misled by the bundle of concepts the phrase “mobile phone” would have elicited in the year 1980. At that time, devices were large, as heavy as ten pounds or more, cost thousands of dollars, and had a painfully short battery life. There certainly wasn’t a large market for those phones. A better project for AT&T and McKinsey might have been to explore what the term “mobile phone” would even refer to in 20 years. By then, those devices had become practical, compact, and affordable.

A more recent example might be the term “metaverse.” A business operations person focused on digital twins has a very different bundle of associations in their mind when hearing the word metaverse than a marketing person focused on brand activations in virtual worlds like Roblox. I’ve worked with plenty of confused senior leaders who have been pitched very different kinds of projects carrying the label “metaverse,” leading to uncertainty about what the term really means.

As for our as-yet-unnamed 3D computing interfaces, it’s still unclear what label will conquer the minds of mainstream consumers. During an interview with Matt Miesnieks, a serial entrepreneur and VC, about his company 6D.ai—which was later sold to Niantic—I asked what we might end up calling this stuff. Six years after that discussion, I’m reminded of his response.

“Probably whatever Apple decides to call it.”

Image Credit: James Yarema / Unsplash

Google DeepMind’s New AI Matches Gold Medal Performance in Math Olympics


After cracking an unsolvable mathematics problem last year, AI is back to tackle geometry.

Developed by Google DeepMind, a new algorithm, AlphaGeometry, can crush problems from past International Mathematical Olympiads—a top-level competition for high schoolers—matching the performance of previous gold medalists.

When challenged with 30 difficult geometry problems, the AI successfully solved 25 within the standard allotted time, beating previous state-of-the-art algorithms by 15 answers.

While often considered the bane of high school math class, geometry is embedded in our everyday life. Art, astronomy, interior design, and architecture all rely on geometry. So do navigation, maps, and route planning. At its core, geometry is a way to describe space, shapes, and distances using logical reasoning.

In a way, solving geometry problems is a bit like playing chess. Given some rules—called theorems and proofs—there’s a limited number of solutions to each step, but finding which one makes sense relies on flexible reasoning conforming to stringent mathematical rules.

In other words, tackling geometry requires both creativity and structure. While humans develop these mental acrobatic skills through years of practice, AI has always struggled.

AlphaGeometry cleverly combines both features in a single system. It has two main components: a rule-bound logical model that attempts to find an answer, and a large language model that generates out-of-the-box ideas. If the AI fails to find a solution through logical reasoning alone, the language model kicks in to provide new angles. The result is an AI with both creativity and reasoning skills that can explain its solution.
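In code terms, that loop is simple to sketch. The Python below is an illustration only: the function names, the data shapes, and the canned “language model” are all assumptions, not DeepMind’s implementation.

```python
# Minimal sketch of AlphaGeometry's two-part loop. Everything here is
# an illustrative assumption, not DeepMind's code: facts are hashable
# tuples, the "symbolic engine" is a fixpoint over simple rules, and
# the "language model" is a canned suggestion function.

def deduce(facts, rules):
    """Symbolic engine: apply every rule until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new_fact in list(rule(facts)):
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

def suggest_construct(facts, goal):
    """Stand-in for the language model: propose one auxiliary element."""
    return ("construct", "midpoint M of BC")  # canned 'creative' idea

def solve(premises, goal, rules, max_rounds=4):
    facts = set(premises)
    for _ in range(max_rounds):
        facts = deduce(facts, rules)       # fast, rule-bound reasoning
        if goal in facts:
            return facts                   # solved; every step traceable
        facts.add(suggest_construct(facts, goal))  # widen the search
    return None
```

In the real system, the rules encode geometry theorems and the suggestions come from a trained language model, but the skeleton is this same alternation of exhaustive deduction and creative proposal.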

The system is DeepMind’s latest foray into solving mathematical problems with machine intelligence. But their eyes are on a larger prize. AlphaGeometry is built for logical reasoning in complex environments—such as our chaotic everyday world. Beyond mathematics, future iterations could potentially help scientists find solutions in other complicated systems, such as deciphering brain connections or unraveling genetic webs that lead to disease.

“We’re making a big jump, a big breakthrough in terms of the result,” study author Dr. Trieu Trinh told the New York Times.

Double Team

A quick geometry question: Picture a triangle with two sides of equal length. How do you prove the bottom two angles are exactly the same?

This is one of the first challenges AlphaGeometry faced. To solve it, you need to fully grasp the rules of geometry, but you also need the creativity to inch toward the answer.

“Proving theorems showcases the mastery of logical reasoning…signifying a remarkable problem-solving skill,” the team wrote in research published today in Nature.

Here’s where AlphaGeometry’s architecture excels. Dubbed a neuro-symbolic system, it first tackles a problem with its symbolic deduction engine. Imagine these algorithms as a straight-A student who strictly studies math textbooks and follows rules. They’re guided by logic and can easily lay out every step leading to a solution—like explaining a line of reasoning in a math test.
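That transparency is easy to see in miniature. Here is a toy Python example (mine, not DeepMind’s) in which every derived fact carries the justification that produced it:

```python
# Toy illustration of why symbolic deduction has no "black box":
# every derived fact is stored alongside its own justification.
# A fact ("eq", X, Y) means "angle X equals angle Y".

facts = {("eq", "B", "C"): "premise", ("eq", "C", "D"): "premise"}

def transitivity(known):
    """If x = y and y = z are known, derive x = z with a reason attached."""
    derived = {}
    for (_, x, y1) in known:
        for (_, y2, z) in known:
            if y1 == y2 and x != z:
                derived[("eq", x, z)] = f"transitivity via angle {y1}"
    return derived

for fact, reason in transitivity(facts).items():
    facts.setdefault(fact, reason)

for fact, reason in facts.items():
    print(fact, "<-", reason)  # a complete, auditable trace
```

Every line of output states a fact and the reason it is known, exactly the kind of auditable trail a black-box network cannot produce.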

These systems are old school but incredibly powerful, in that they don’t have the “black box” problem that haunts many modern deep learning algorithms.

Deep learning has reshaped our world. But due to how these algorithms work, they often can’t explain their output. This just won’t do when it comes to math, which relies on stringent logical reasoning that can be written down.

Symbolic deduction engines counteract the black box problem in that they’re rational and explainable. But faced with complex problems, they’re slow and struggle to flexibly adapt.

Here’s where large language models come in. The driving force behind ChatGPT, these algorithms are excellent at finding patterns in complicated data and generating new solutions, if there’s enough training data. But they often lack the ability to explain themselves, making it necessary to double check their results.

AlphaGeometry combines the best of both worlds.

When faced with a geometry problem, the symbolic deduction engine gives it a go first. Take the triangle problem. The algorithm “understands” the premise of the question: it needs to prove the bottom two angles are the same. The language model then suggests drawing a new line from the top of the triangle straight down to the bottom to help solve the problem. Each new element that moves the AI towards the solution is dubbed a “construct.”

The symbolic deduction engine takes the advice and writes down the logic behind its reasoning. If the construct doesn’t work, the two systems go through multiple rounds of deliberation until AlphaGeometry reaches the solution.
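For the triangle problem, that suggested line is exactly the classical construct, and it completes the proof. Here is one standard way to finish it, in conventional notation (the point labels are mine):

```latex
% Triangle ABC with AB = AC; the construct is the midpoint M of BC.
\begin{align*}
&\text{Premises: } AB = AC; \quad M \text{ is the midpoint of } BC, \text{ so } BM = CM.\\
&\text{In } \triangle ABM \text{ and } \triangle ACM: \quad AB = AC, \quad BM = CM, \quad AM = AM.\\
&\Rightarrow \triangle ABM \cong \triangle ACM \quad \text{(side-side-side congruence)}\\
&\Rightarrow \angle ABM = \angle ACM, \text{ i.e., the bottom two angles are equal.} \qquad \blacksquare
\end{align*}
```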

The whole setup is “akin to the idea of ‘thinking, fast and slow,’” wrote the team on DeepMind’s blog. “One system provides fast, ‘intuitive’ ideas, and the other, more deliberate, rational decision-making.”

We Are the Champions

Unlike with text or audio, there’s a dearth of training examples for geometry, which made AlphaGeometry difficult to train.

As a workaround, the team generated their own dataset featuring 100 million synthetic examples of random geometric shapes and mapped relationships between points and lines—similar to how you solve geometry in math class, but at a far larger scale.

From there, the AI grasped the rules of geometry and learned to work backwards from a solution to figure out whether it needed to add any constructs. This cycle allowed the AI to learn from scratch without any human input.
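The published pipeline is far more sophisticated, but the shape of the idea can be sketched in a few lines of Python. Everything below is a schematic assumption, not DeepMind’s code:

```python
import random

# Schematic sketch of synthetic data generation: sample random premises,
# deduce their consequences with a symbolic engine (like deduce() above),
# then trace each consequence back to the facts its proof depends on.

def random_premises(n_points=5, n_facts=6):
    """Sample random relations over randomly chosen points."""
    points = [chr(ord("A") + i) for i in range(n_points)]
    relations = ["eq_angle", "eq_length", "parallel", "perpendicular"]
    return {(random.choice(relations),
             random.choice(points),
             random.choice(points)) for _ in range(n_facts)}

def traceback(fact, provenance):
    """Work backwards: collect every fact a derived fact depends on."""
    parents = provenance.get(fact, [])
    if not parents:                # a premise depends only on itself
        return {fact}
    needed = set()
    for parent in parents:
        needed |= traceback(parent, provenance)
    return needed

# One training example pairs a target fact with the minimal facts its
# proof needs; steps not reachable from the premises alone are the
# "constructs" the language model learns to propose.
```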

Putting the AI to the test, the team challenged it with 30 Olympiad problems from over a decade of previous competitions. The generated results were evaluated by a previous Olympiad gold medalist, Evan Chen, to ensure their quality.

In all, the AI matched the performance of past gold medalists, completing 25 problems within the time limit. The previous state-of-the-art result was 10 correct answers.

“AlphaGeometry’s output is impressive because it’s both verifiable and clean,” Chen said. “It uses classical geometry rules with angles and similar triangles just as students do.”

Beyond Math

AlphaGeometry is DeepMind’s latest foray into mathematics. In 2021, their AI cracked mathematical puzzles that had stumped humans for decades. More recently, they used large language models to reason through college-level STEM problems and cracked a previously “unsolvable” math problem based on a card game with the algorithm FunSearch.

For now, AlphaGeometry is tailored to geometry, and with caveats. Much of geometry is visual, but the system can’t “see” the drawings, though that ability could expedite problem-solving. Adding images, perhaps with Google’s Gemini AI, launched late last year, may bolster its geometric smarts.

A similar strategy could also expand AlphaGeometry’s reach to a wide range of scientific domains that require stringent reasoning with a touch of creativity. (Let’s be real—it’s all of them.)

“Given the wider potential of training AI systems from scratch with large-scale synthetic data, this approach could shape how the AI systems of the future discover new knowledge, in math and beyond,” wrote the team.

Image Credit: Joel Filipe / Unsplash