
Dyson Spheres: Astronomers Report Potential Candidates for Alien Megastructures—Here’s What to Make of It


There are three ways to look for evidence of alien technological civilizations. One is to look out for deliberate attempts by them to communicate their existence, for example, through radio broadcasts. Another is to look for evidence of them visiting the solar system. And a third option is to look for signs of large-scale engineering projects in space.

A team of astronomers have taken the third approach by searching through recent astronomical survey data to identify seven candidates for alien megastructures, known as Dyson spheres, “deserving of further analysis.”

This is a detailed study looking for “oddballs” among stars—objects that might be alien megastructures. However, the authors are careful not to make any overblown claims. The seven objects, all located within 1,000 light-years of Earth, are “M-dwarfs”—a class of stars that are smaller and less bright than the sun.

Dyson spheres were first proposed by the physicist Freeman Dyson in 1960 as a way for an advanced civilization to harness a star’s power. Consisting of floating power collectors, factories, and habitats, they’d take up more and more space until they eventually surrounded almost the entire star like a sphere.

What Dyson realized is that these megastructures would have an observable signature. Dyson’s signature (which the team searched for in the recent study) is a significant excess of infrared radiation. That’s because megastructures would absorb visible light given off by the star, but they wouldn’t be able to harness it all. Instead, they’d have to “dump” excess energy as infrared light with a much longer wavelength.
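Basic blackbody physics shows why the waste heat lands in the infrared. Here is a minimal sketch, assuming the shell sits at Earth's distance from the sun and radiates like a simple blackbody (standard textbook constants throughout):

```python
import math

# Back-of-envelope: temperature and emission peak of a Dyson shell at 1 AU.
L_SUN = 3.8e26      # solar luminosity, watts
R_SHELL = 1.496e11  # shell radius = Earth's orbital distance, meters
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN_B = 2.898e-3   # Wien's displacement constant, m*K

# In equilibrium the shell re-radiates everything it absorbs:
# L = 4*pi*r^2 * sigma * T^4, so T = (L / (4*pi*r^2 * sigma))^(1/4).
area = 4 * math.pi * R_SHELL**2
temp = (L_SUN / (SIGMA * area)) ** 0.25
peak = WIEN_B / temp

print(f"Shell temperature: ~{temp:.0f} K")          # ~278 K
print(f"Emission peak: ~{peak * 1e6:.0f} microns")  # ~10 microns, mid-infrared
```

A shell at that distance would sit near 280 kelvin and glow at around 10 micrometers, squarely in the infrared, while the star's own light peaks near 0.5 micrometers.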

Unfortunately, such light can also be a signature of a lot of other things, such as a disc of gas and dust or discs of comets and other debris. But the seven promising candidates aren’t obviously due to a disc, as they weren’t good fits to disc models.

It is worth noting there is another signature of a Dyson sphere: that visible light from the star dips as the megastructure passes in front of it. Such a signature has been found before. There was a lot of excitement about Tabby’s Star, or KIC 8462852, which showed many really unusual dips in its light that could be due to an alien megastructure.

Tabby’s Star in infrared (left) and ultraviolet (right). Image Credit: Infrared: IPAC/NASA; Ultraviolet: STScI/NASA, via Wikimedia Commons

It almost certainly isn’t an alien megastructure. A variety of natural explanations have been proposed, such as clouds of comets passing through a dust cloud. But it is an odd observation. An obvious follow-up on the seven candidates would be to look for this signature as well.

The Case Against Dyson Spheres

Dyson spheres may well not even exist, however. I think they are unlikely to be there. That’s not to say they couldn’t exist, rather that any civilization capable of building them would probably not need to (unless it was some mega art project).

Dyson’s reasoning for considering such megastructures assumed that advanced civilizations would have vast power requirements. Around the same time, astronomer Nikolai Kardashev proposed a scale on which to rate the advancement of civilizations, which was based almost entirely on their power consumption.

In the 1960s, this sort of made sense. Looking back over history, humanity had just kept exponentially increasing its power use as technology advanced and the number of people increased, so they just extrapolated this ever-expanding need into the future.

However, our global energy use has started to grow much more slowly over the past 50 years, and especially over the last decade. What’s more, Dyson and Kardashev never specified what these vast levels of power would be used for; they just (fairly reasonably) assumed they’d be needed to do whatever it is that advanced alien civilizations do.

But as we now look ahead to future technologies, we see efficiency, miniaturization, and nanotechnologies promise vastly lower power use (the performance per watt of pretty much all technologies is constantly improving).

A quick calculation reveals that, if we wanted to collect 10 percent of the sun’s energy at the distance the Earth is from the sun, we’d need a surface area equal to a billion Earths. And if we had a super-advanced technology that could make the megastructure only 10 kilometers thick, we’d need about a million Earths’ worth of material to build it.

A significant problem is that our solar system only contains about 100 Earths’ worth of solid material, so our advanced alien civilization would need to dismantle all the planets in 10,000 planetary systems and transport the material to the star to build its Dyson sphere. To build one with the material available in a single system, each part of the megastructure could only be about one meter thick.
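Here is a rough sketch of that back-of-envelope arithmetic (standard values for Earth's size and the astronomical unit; depending on whether an "Earth's worth" is counted by surface area or cross-section, the totals shift by an order of magnitude or so, which is why such figures are only ever quoted loosely):

```python
import math

# Order-of-magnitude material budget for a Dyson shell at 1 AU.
AU = 1.496e11      # meters
R_EARTH = 6.371e6  # meters

earth_area = 4 * math.pi * R_EARTH**2          # ~5.1e14 m^2
earth_volume = (4 / 3) * math.pi * R_EARTH**3  # ~1.1e21 m^3

# Collector area needed to intercept 10 percent of the sun's output.
collector_area = 0.1 * 4 * math.pi * AU**2
print(f"Area: ~{collector_area / earth_area:.0e} Earth surface areas")

# Material volume if the structure were 10 km thick, in Earth volumes.
earths_needed = collector_area * 10_000 / earth_volume
print(f"Material at 10 km thick: ~{earths_needed:.0e} Earth volumes")

# With only ~100 Earths of solid material in one planetary system,
# how thick could the shell be?
thickness = 100 * earth_volume / collector_area
print(f"Max thickness from one system: ~{thickness:.0f} m")
```

However you count, the conclusion stands: the material budget dwarfs what a single planetary system can supply.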

This is assuming they use all the elements available in a planetary system. If they needed, say, lots of carbon to make their structures, then we’re looking at dismantling millions of planetary systems to get hold of it. Now, I’m not saying a super-advanced alien civilization couldn’t do this, but it is one hell of a job.

I’d also strongly suspect that by the time a civilization got to the point of having the ability to build a Dyson sphere, they’d have a better way of getting the power than using a star, if they really needed it (I have no idea how, but they are a super-advanced civilization).

Maybe I’m wrong, but it can’t hurt to look.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Kevin Gill / Flickr

Can ChatGPT Mimic Theory of Mind? Psychology Is Probing AI’s Inner Workings


If you’ve ever vented to ChatGPT about troubles in life, the responses can sound empathetic. The chatbot delivers affirming support, and—when prompted—even gives advice like a best friend.

Unlike older chatbots, the seemingly “empathic” nature of the latest AI models has already galvanized the psychotherapy community, with many wondering if they could assist in therapy.

The ability to infer other people’s mental states is a core aspect of everyday interaction. Called “theory of mind,” it lets us guess what’s going on in someone else’s mind, often by interpreting speech. Are they being sarcastic? Are they lying? Are they implying something that’s not overtly said?

“People care about what other people think and expend a lot of effort thinking about what is going on in other minds,” wrote Dr. Cristina Becchio and colleagues at the University Medical Center Hamburg-Eppendorf in a new study in Nature Human Behavior.

In the study, the scientists asked if ChatGPT and other similar chatbots—which are based on machine learning algorithms called large language models—can also guess other people’s mindsets. Using a series of psychology tests, each tailored to a certain aspect of theory of mind, they pitted two families of large language models, OpenAI’s GPT series and Meta’s LLaMA 2, against over 1,900 human participants.

GPT-4, the algorithm behind ChatGPT, performed at, or even above, human levels in some tasks, such as identifying irony. Meanwhile, LLaMA 2 beat both humans and GPT at detecting faux pas—when someone says something they’re not meant to say but don’t realize it.

To be clear, the results don’t confirm LLMs have theory of mind. Rather, they show these algorithms can mimic certain aspects of this core concept that “defines us as humans,” wrote the authors.

What’s Not Said

By roughly four years old, children already know that people don’t always think alike. We have different beliefs, intentions, and needs. By placing themselves into other people’s shoes, kids can begin to understand other perspectives and gain empathy.

First introduced in 1978, theory of mind is a lubricant for social interactions. For example, if you’re standing near a closed window in a stuffy room, and someone nearby says, “It’s a bit hot in here,” you have to think about their perspective to intuit they’re politely asking you to open the window.

When the ability breaks down—for example, in autism—it becomes difficult to grasp other people’s emotions, desires, and intentions, and to pick up on deception. And we’ve all seen texts or emails lead to misunderstandings when a recipient misinterprets the sender’s meaning.

So, what about the AI models behind chatbots?

Man Versus Machine

Back in 2018, Dr. Alan Winfield, a professor in the ethics of robotics at the University of the West of England, championed the idea that theory of mind could let AI “understand” people’s and other robots’ intentions. At the time, he proposed giving an algorithm a programmed internal model of itself, with common sense about social interactions built in rather than learned.

Large language models take a completely different approach, ingesting massive datasets to generate human-like responses that feel empathetic. But do they exhibit signs of theory of mind?

Over the years, psychologists have developed a battery of tests to study how we gain the ability to model another’s mindset. The new study pitted two versions of OpenAI’s GPT models (GPT-4 and GPT-3.5) and Meta’s LLaMA-2-Chat against 1,907 healthy human participants. Based solely on text descriptions of social scenarios, and using a comprehensive battery of tests spanning different theory of mind abilities, both humans and machines had to gauge a fictional person’s “mindset.”

Each test is well established in psychology for measuring theory of mind in humans.

The first, called “false belief,” is often used to test toddlers as they gain a sense of self and recognition of others. As an example, you listen to a story: Lucy and Mia are in the kitchen with a carton of orange juice in the cupboard. When Lucy leaves, Mia puts the juice in the fridge. Where will Lucy look for the juice when she comes back?
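For a sense of how such a test can be posed to a chatbot, here is a minimal sketch using OpenAI's Python SDK. This is an illustration only; the prompt wording and setup are assumptions, not the study's actual protocol:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

story = (
    "Lucy and Mia are in the kitchen. A carton of orange juice is in the "
    "cupboard. Lucy leaves. While she is gone, Mia moves the juice to the "
    "fridge. Lucy comes back."
)
question = "Where will Lucy look for the juice first? Answer in one word."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": f"{story} {question}"}],
)
# A model that tracks Lucy's (false) belief should answer "cupboard,"
# not "fridge," where the juice actually is.
print(response.choices[0].message.content)
```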

Both humans and AI guessed nearly perfectly that the person who’d left the room when the juice was moved would look for it where they last remembered seeing it. But slight changes tripped the AI up. When changing the scenario—for example, the juice was transported between two transparent containers—GPT models struggled to guess the answer. (Though, for the record, humans weren’t perfect on this either in the study.)

A more advanced test is “strange stories,” which relies on multiple levels of reasoning to test for advanced mental capabilities, such as misdirection, manipulation, and lying. For example, both human volunteers and AI models were told the story of Simon, who often lies. His brother Jim knows this, and one day he finds his Ping-Pong paddle missing. He confronts Simon and asks if it’s under the cupboard or under his bed. Simon says it’s under the bed. The test asks: Why would Jim look in the cupboard instead?

Out of all AI models, GPT-4 had the most success, reasoning that “the big liar” must be lying, and so it’s better to choose the cupboard. Its performance even trumped human volunteers.

Then came the “faux pas” study. In prior research, GPT models struggled to decipher these social situations. During testing, one example depicted a person shopping for new curtains, and while putting them up, a friend casually said, “Oh, those curtains are horrible, I hope you’re going to get some new ones.” Both humans and AI models were presented with multiple similar cringe-worthy scenarios and asked if the witnessed response was appropriate. “The correct answer is always no,” wrote the team.

GPT-4 correctly identified that the comment could be hurtful, but when asked whether the friend knew about the context—that the curtains were new—it struggled with a correct answer. This could be because the AI couldn’t infer the mental state of the person, and because recognizing a faux pas in this test relies on context and social norms not directly explained in the prompt, explained the authors. In contrast, LLaMA-2-Chat outperformed humans, achieving nearly 100 percent accuracy except for one run. It’s unclear why it has such an advantage.

Under the Bridge

Much of communication isn’t what’s said, but what’s implied.

Irony is maybe one of the hardest concepts to translate between languages. When tested with an adapted psychological test for autism, GPT-4 surprisingly outperformed human participants in recognizing ironic statements—of course, through text only, without the usual accompanying eye-roll.

The AI also outperformed humans on a hinting task—basically, understanding an implied message. Derived from a test for assessing schizophrenia, it measures reasoning that relies on both memory and the cognitive ability to weave and assess a coherent narrative. Both participants and AI models were given 10 written short skits, each depicting an everyday social interaction. Each story ended with a hint about how best to respond, and participants gave open-ended answers. Across the 10 stories, GPT-4 beat humans.

For the authors, the results don’t mean LLMs already have theory of mind. Each AI struggled with some aspects. Rather, they think the work highlights the importance of using multiple psychology and neuroscience tests—rather than relying on any one—to probe the opaque inner workings of machine minds. Psychology tools could help us better understand how LLMs “think”—and in turn, help us build safer, more accurate, and more trustworthy AI.

There’s some promise that “artificial theory of mind may not be too distant an idea,” wrote the authors.

Image Credit: Abishek / Unsplash

Scientists Are Working Towards a Unified Theory of Consciousness


The origin of consciousness has teased the minds of philosophers and scientists for centuries. In the last decade, neuroscientists have begun to piece together its neural underpinnings—that is, how the brain, through its intricate connections, transforms electrical signaling between neurons into consciousness.

Yet the field is fragmented, an international team of neuroscientists recently wrote in a new paper in Neuron. Many theories of consciousness contradict each other, with different ideas about where and how consciousness emerges in the brain.

Some theories are even duking it out in a mano-a-mano test by imaging the brains of volunteers as they perform different tasks in clinical test centers across the globe.

But unlocking the neural basis of consciousness doesn’t have to be confrontational. Rather, theories can be integrated, wrote the authors, who were part of the Human Brain Project—a massive European endeavor to map and understand the brain—and specialize in decoding brain signals related to consciousness.

Not all authors agree on the specific brain mechanisms that allow us to perceive the outer world and construct an inner world of “self.” But by collaborating, they merged their ideas, showing that different theories aren’t necessarily incompatible—in fact, they could be consolidated into a general framework of consciousness and even inspire new ideas that may help unravel one of the brain’s greatest mysteries.

If successful, the joint mission could extend beyond our own noggins. Brain organoids, or “mini-brains,” that roughly mimic early human development are becoming increasingly sophisticated, spurring ethical concerns about their potential for developing self-awareness (to be clear, there aren’t any signs). Meanwhile, similar questions have been raised about AI. A general theory of consciousness, based on the human mind, could potentially help us evaluate these artificial constructs.

“Is it realistic to reconcile theories, or even aspire to a unified theory of consciousness?” the authors asked. “We take the standpoint that the existence of multiple theories is a sign of healthiness in this nascent field…such that multiple theories can simultaneously contribute to our understanding.”

Lost in Translation

I’m conscious. You are too. We see, smell, hear, and feel. We have an internal world that tells us what we’re experiencing. But the lines get blurry for people in different stages of coma or for those with locked-in syndrome—they can still perceive their surroundings but can’t physically respond. We lose consciousness in sleep every night and during anesthesia. Yet, somehow, we regain consciousness. How?

With extensive imaging of the brain, neuroscientists today agree that consciousness emerges from the brain’s wiring and activity. But multiple theories argue about how electrical signals in the brain produce rich and intimate experiences of our lives.

Part of the problem, wrote the authors, is that there isn’t a clear definition of “consciousness.” In this paper, they separated the term into two experiences: one outer, one inner. The outer experience, called phenomenal consciousness, is when we immediately realize what we’re experiencing—for example, seeing a total solar eclipse or the northern lights.

The inner experience is a bit like a “gut feeling” in that it helps to form expectations and types of memory, so that tapping into it lets us plan behaviors and actions.

Both are aspects of consciousness, but the distinction is hardly delineated in previous work. That makes comparing theories difficult, wrote the authors, but it’s what they set out to do.

Meet the Contenders

Using their “two experience” framework, they examined five prominent consciousness theories.

The first, the global neuronal workspace theory, pictures the brain as a city of sorts. Each local brain region “hub” dynamically interacts with a “global workspace,” which integrates and broadcasts information to other hubs for further processing—allowing information to reach the consciousness level. In other words, we only perceive something when all pieces of sensory information—sight, hearing, touch, taste—are woven into a temporary neural sketchpad. According to this theory, the seat of consciousness is in the frontal parts of the brain.

The second, integrated information theory, takes a more globalist view. The idea is that consciousness stems from a series of cause-effect reactions from the brain’s networks. With the right neural architecture, connections, and network complexity, consciousness naturally emerges. The theory suggests the back of the brain sparks consciousness.

Then there’s dendritic integration theory, the coolest new kid in town. Unlike previous ideas, this theory waves goodbye to the front-versus-back debate and instead zooms in on single neurons in the cortex, the outermost part of the brain and a hub for higher cognitive functions such as reasoning and planning.

The cortex has extensive connections to other parts of the brain—for example, those that encode memories and emotions. One type of neuron, deep inside the cortex, especially stands out. Physically, these neurons resemble trees with extensive “roots” and “branches.” The roots connect to other parts of the brain, whereas the upper branches help calculate errors in the neuron’s computing. In turn, these upper branches generate an error signal that corrects mistakes through multiple rounds of learning.

The two compartments, while physically connected, go about their own business—turning a single neuron into multiple computers. Here’s the crux: There’s a theoretical “gate” between the upper and lower neural “offices” for each neuron. During consciousness, the gate opens, allowing information to flow between the cortex and other brain regions. In dreamless sleep and other unconscious states, the gate closes.

This theory suggests that consciousness is supported by flicking individual neurons’ gates on or off, like light switches, on a grand scale.

The last two theories propose that recurrent processing in the brain—that is, neural signals feeding back on earlier stages of processing—is essential for consciousness. Instead of passively “experiencing” the world, the brain builds an internal simulation that constantly predicts the “here and now” to control what we perceive.

A Unified Theory?

All the theories have extensive experiments to back up their claims. So, who’s right? To the authors, the key is to consider consciousness not as a singular concept, but as a “ladder” of sorts. The brain functions at multiple levels: cells, local networks, brain regions, and finally, the whole brain.

When examining theories of consciousness, it also makes sense to delineate between different levels. For example, the dendritic integration theory—which considers neurons and their connections—is on the level of single cells and how they contribute to consciousness. It makes the theory “neutral,” in that it can easily fit into ideas at a larger scale—those that mostly rely on neural network connections or across larger brain regions.

Although it’s seemingly difficult to reconcile various ideas about consciousness, two principles tie them together, wrote the team. One is that consciousness requires feedback, within local neural circuits and throughout the brain. The other is integration, in that any feedback signals need to be readily incorporated back into neural circuits, so they can change their outputs. Finally, all authors agree that local, short connections are vital but not enough. Long distance connections from the cortex to deeper brain areas are required for consciousness.

So, is an integrated theory of consciousness possible? The authors are optimistic. By defining multiple aspects of consciousness—immediate responses versus internal thoughts—it’ll be clearer how to explore and compare results from different experiments. For now, the global neuronal workspace theory mostly focuses on the “inner experience” that leads to consciousness, whereas others try to tackle the “outer experience”—what we immediately experience.

For the theories to merge, the latter groups will have to explain how consciousness is used for attention and planning, which are hallmarks of immediate responses. But fundamentally, wrote the authors, they are all based on different aspects of neuronal connections, near and far. With more empirical experiments, and as increasingly sophisticated brain atlases come online, they’ll move the field forward.

Hopefully, the authors write, “an integrated theory of consciousness…may come within reach within the next years or decades.”

Image Credit: SIMON LEE / Unsplash

This Week’s Awesome Tech Stories From Around the Web (Through May 18)

ARTIFICIAL INTELLIGENCE

It’s Time to Believe the AI Hype
Steven Levy | Wired
“There’s universal agreement in the tech world that AI is the biggest thing since the internet, and maybe bigger. …Skeptics might try to claim that this is an industry-wide delusion, fueled by the prospect of massive profits. But the demos aren’t lying. We will eventually become acclimated to the AI marvels unveiled this week. The smartphone once seemed exotic; now it’s an appendage no less critical to our daily life than an arm or a leg. At a certain point AI’s feats, too, may not seem magical any more.”


COMPUTING

How to Put a Datacenter in a Shoebox
Anna Herr and Quentin Herr | IEEE Spectrum
“At Imec, we have spent the past two years developing superconducting processing units that can be manufactured using standard CMOS tools. A processor based on this work would be one hundred times as energy efficient as the most efficient chips today, and it would lead to a computer that fits a data-center’s worth of computing resources into a system the size of a shoebox.”

BIOTECH

IndieBio’s SF Incubator Lineup Is Making Some Wild Biotech Promises
Devin Coldewey | TechCrunch
“We took special note of a few, which were making some major, bordering on ludicrous, claims that could pay off in a big way. Biotech has been creeping out in recent years to touch adjacent industries, as companies find how much they rely on outdated processes or even organisms to get things done. So it may not surprise you that there’s a microbiome company in the latest batch—but you might be surprised when you hear it’s the microbiome of copper ore.”

TECH

It’s the End of Google Search as We Know It
Lauren Goode | Wired
“It’s as though Google took the index cards for the screenplay it’s been writing for the past 25 years and tossed them into the air to see where the cards might fall. Also: The screenplay was written by AI. These changes to Google Search have been long in the making. Last year the company carved out a section of its Search Labs, which lets users try experimental new features, for something called Search Generative Experience. The big question since has been whether, or when, those features would become a permanent part of Google Search. The answer is, well, now.”

AUTOMATION

Waymo Says Its Robotaxis Are Now Making 50,000 Paid Trips Every Week
Mariella Moon | Engadget
“If you’ve been seeing more Waymo robotaxis recently in Phoenix, San Francisco, and Los Angeles, that’s because more and more people are hailing one for a ride. The Alphabet-owned company has announced on Twitter/X that it’s now serving more than 50,000 paid trips every week across three cities. Waymo One operates 24/7 in parts of those cities. If the company is getting 50,000 rides a week, that means it receives an average of 300 bookings every hour or five bookings every minute.”

CULTURE

Technology Is Probably Changing Us for the Worse—or So We Always Think
Timothy Maher | MIT Technology Review
“We’ve always greeted new technologies with a mixture of fascination and fear, says Margaret O’Mara, a historian at the University of Washington who focuses on the intersection of technology and American politics. ‘People think: “Wow, this is going to change everything affirmatively, positively,”’ she says. ‘And at the same time: “It’s scary—this is going to corrupt us or change us in some negative way.”’ And then something interesting happens: ‘We get used to it,’ she says. ‘The novelty wears off and the new thing becomes a habit.’”

TECH

This Is the Next Smartphone Evolution
Matteo Wong | The Atlantic
“Earlier [this week], OpenAI announced its newest product: GPT-4o, a faster, cheaper, more powerful version of its most advanced large language model, and one that the company has deliberately positioned as the next step in ‘natural human-computer interaction.’ …Watching the presentation, I felt that I was witnessing the murder of Siri, along with that entire generation of smartphone voice assistants, at the hands of a company most people had not heard of just two years ago.”

SPACE

In the Race for Space Metals, Companies Hope to Cash In
Sarah Scoles | Undark
“Previous companies have rocketed toward similar goals before but went bust about a half decade ago. In the years since that first cohort left the stage, though, ‘the field has exploded in interest,’ said Angel Abbud-Madrid, director of the Center for Space Resources at the Colorado School of Mines. …The economic picture has improved with the cost of rocket launches decreasing, as has the regulatory environment, with countries creating laws specifically allowing space mining. But only time will tell if this decade’s prospectors will cash in where others have drilled into the red or be buried by their business plans.”

FUTURE

What I Got Wrong in a Decade of Predicting the Future of Tech
Christopher Mims | The Wall Street Journal
“Anniversaries are typically a time for people to get misty-eyed and recount their successes. But after almost 500 articles in The Wall Street Journal, one thing I’ve learned from covering the tech industry is that failures are far more instructive. Especially when they’re the kind of errors made by many people. Here’s what I’ve learned from a decade of embarrassing myself in public—and having the privilege of getting an earful about it from readers.”

FUTURE OF FOOD

Lab-Grown Meat Is on Shelves Now. But There’s a Catch
Matt Reynolds | Wired
“Now cultivated meat is available in one store in Singapore. There is a catch, however: The chicken on sale at Huber’s Butchery contains just 3 percent animal cells. The rest will be made of plant protein—the same kind of ingredients you’d find in plant-based meats that are already on supermarket shelves worldwide. This might feel like a bit of a bait and switch. Didn’t cultivated meat firms promise us real chicken? And now we’re getting plant-based products with a sprinkling of animal cells? That criticism wouldn’t be entirely fair, though.”

Image Credit: Pawel Czerwinski / Unsplash

Smelting Steel With Sunlight: New Solar Trap Tech Could Help Decarbonize Industrial Heat


Some of the hardest sectors to decarbonize are industries that require high temperatures like steel smelting and cement production. A new approach uses a synthetic quartz solar trap to generate temperatures of over 1,000 degrees Celsius (1,832 degrees Fahrenheit)—hot enough for a host of carbon-intensive industries.

While most of the focus in the climate fight has been on cleaning up the electric grid and transportation, a surprisingly large amount of fossil fuel usage goes into industrial heat. As much as 25 percent of global energy consumption goes towards manufacturing glass, steel, and cement.

Electrifying these processes is challenging because it’s difficult to reach the high temperatures required. Solar receivers, which use thousands of sun-tracking mirrors to concentrate energy from the sun, have shown promise as they can hit temperatures of 3,000 C. But they’re very inefficient when processes require temperatures over 1,000 C because much of the energy is radiated back out.
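The Stefan-Boltzmann law makes the problem concrete: a hot surface re-radiates power proportional to the fourth power of its temperature, so losses climb steeply with the target temperature. A minimal sketch, assuming an ideal blackbody absorber under a 136-sun concentration (the level used in the experiment described next):

```python
# Re-radiation losses of a bare blackbody absorber (Stefan-Boltzmann law).
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_FLUX = 1000    # ~1 sun, W/m^2
CONCENTRATION = 136  # suns, matching the concentration used below

incoming = CONCENTRATION * SOLAR_FLUX  # concentrated flux, W/m^2

for celsius in (300, 600, 900, 1050):
    kelvin = celsius + 273.15
    radiated = SIGMA * kelvin**4  # re-emitted power per square meter
    print(f"{celsius:>5} C: re-radiates {radiated / incoming:.0%} of incoming flux")
```

At 1,050 degrees Celsius, a bare absorber at this concentration would radiate away more than it takes in; suppressing that re-emission is exactly what a thermal trap is for.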

To get around this, researchers from ETH Zurich in Switzerland showed that adding semi-transparent quartz to a solar receiver could trap solar energy at temperatures as high as 1,050 C. That’s hot enough to replace fossil fuels in a range of highly polluting industries, the researchers say.

“Previous research has only managed to demonstrate the thermal-trap effect up to 170 C,” lead researcher Emiliano Casati said in a press release. “Our research showed that solar thermal trapping works not just at low temperatures, but well above 1,000 C. This is crucial to show its potential for real-world industrial applications.”

The researchers used a silicon carbide disk to absorb solar energy and attached a roughly one-foot-long quartz rod to it. Because quartz is semi-transparent, light is able to pass through it, but it also readily absorbs heat and prevents it from being radiated back out.

That meant that when the researchers subjected the quartz rod to simulated sunlight equivalent to 136 suns, the solar energy readily passed through to the silicon carbide disk and was then trapped there. This allowed the disk to heat up to 1,050 C, compared to just 600 C at the other end of the rod.

Simulations of the device found that the quartz’s thermal trapping capabilities could significantly boost the efficiency of solar receivers. Adding a quartz rod to a state-of-the-art receiver could boost efficiency from 40 percent to 70 percent when attempting to hit temperatures of 1,200 C. That kind of efficiency gain could drastically reduce the size, and therefore cost, of solar heat installations.

While still just a proof of concept, the simplicity of the approach means it would probably not be too difficult to apply to existing receiver technology. Companies like Heliogen, which is backed by Bill Gates, have already developed solar furnace technology designed to generate the high temperatures required in a wide range of industries.

Casati says the promise is clear, but work remains to be done to prove its commercial feasibility.

“Solar energy is readily available, and the technology is already here,” he says. “To really motivate industry adoption, we need to demonstrate the economic viability and advantages of this technology at scale.”

But the prospect of replacing such a big chunk of our fossil fuel usage with solar power should be motivation enough to bring this technology to fruition.

Image Credit: A new solar trap built by a team of ETH Zurich scientists reaches 1,050 C (Casati et al./Device)

Scientists Step Toward Quantum Internet With Experiment Under the Streets of Boston


A quantum internet would essentially be unhackable. In the future, sensitive information—financial or national security data, for instance, as opposed to memes and cat pictures—would travel through such a network in parallel to a more traditional internet.

Of course, building and scaling systems for quantum communications is no easy task. Scientists have been steadily chipping away at the problem for years. A Harvard team recently took another noteworthy step in the right direction. In a paper published this week in Nature, the team says they’ve sent entangled photons between two quantum memory nodes 22 miles (35 kilometers) apart on existing fiber optic infrastructure under the busy streets of Boston.

“Showing that quantum network nodes can be entangled in the real-world environment of a very busy urban area is an important step toward practical networking between quantum computers,” Mikhail Lukin, who led the project and is a physics professor at Harvard, said in a press release.

The team leased optical fiber under the Boston streets, connecting the two memory nodes located at Harvard by way of a 22-mile (35-kilometer) loop of cable. Image Credit: Can Knaut via OpenStreetMap

One way a quantum network can transmit information is by using entanglement, a quantum property where two particles, likely photons in this case, are linked so a change in the state of one tells us about the state of the other. If the sender and receiver of information each have one of a pair of entangled photons, they can securely transmit data using them. This means quantum communications will rely on generating enormous numbers of entangled photons and reliably sending them to far-off destinations.

Scientists have sent entangled particles long distances over fiber optic cables before, but to make a quantum internet work, particles will need to travel hundreds or thousands of miles. Because cables tend to absorb photons over such distances, the information will be lost—unless it can be periodically refreshed.
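The scale of the problem follows from standard fiber attenuation, roughly 0.2 dB per kilometer for telecom fiber at 1,550 nanometers (a typical textbook figure, not one from the Harvard study):

```python
# Photon survival through optical fiber at ~0.2 dB/km attenuation
# (typical for telecom fiber at 1,550 nm).
ATTENUATION_DB_PER_KM = 0.2

def survival_fraction(km: float) -> float:
    """Fraction of photons that make it through `km` of fiber."""
    return 10 ** (-ATTENUATION_DB_PER_KM * km / 10)

for km in (35, 100, 500, 1000):
    print(f"{km:>5} km: {survival_fraction(km):.1e} of photons survive")
```

About a fifth of photons survive a 35-kilometer run like the Boston loop, but over 1,000 kilometers essentially none arrive.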

Enter quantum repeaters.

You can think of a repeater as a kind of internet gas station. Information passing through long stretches of fiber optic cables naturally degrades. A repeater refreshes that information at regular intervals, strengthening the signal and maintaining its fidelity. A quantum repeater is the same thing, only it also preserves entanglement.

That scientists have yet to build a quantum repeater is one reason we’re still a ways off from a working quantum internet at scale. Which is where the Harvard study comes in.

The team of researchers from Harvard and Amazon Web Services (AWS) has been working on quantum memory nodes. Each node houses a piece of diamond with an atom-sized hole, or silicon-vacancy center, containing two qubits: one for storage, one for communication. The nodes are basically small quantum computers, operating at near absolute zero, that can receive, record, and transmit quantum information. The Boston experiment, according to the team, is the longest distance anyone has sent information between such devices and a big step towards a quantum repeater.

“Our experiment really put us in a position where we’re really close to working on a quantum repeater demonstration,” Can Knaut, a Harvard graduate student in Lukin’s lab, told New Scientist.

Next steps include expanding the system to include multiple nodes.

Along those lines, a separate group in China, using a different technique for quantum memory involving clouds of rubidium atoms, recently said they’d linked three nodes 6 miles (10 kilometers) apart. The same group, led by Xiao-Hui Bao at the University of Science and Technology of China, had previously entangled memory nodes 13.6 miles (22 kilometers) apart.

It’ll take a lot more work to make the technology practical. Researchers need to increase the rate at which their machines entangle photons, for example. But as each new piece falls into place, the prospect of unhackable communications gets a bit closer.

Image Credit: Visax / Unsplash

‘Noise’ in the Machine: Human Differences in Judgment Lead to Problems for AI


Many people understand the concept of bias at some intuitive level. In society, and in artificial intelligence systems, racial and gender biases are well documented.

If society could somehow remove bias, would all problems go away? The late Nobel laureate Daniel Kahneman, who was a key figure in the field of behavioral economics, argued in his last book that bias is just one side of the coin. Errors in judgments can be attributed to two sources: bias and noise.

Bias and noise both play important roles in fields such as law, medicine, and financial forecasting, where human judgments are central. In our work as computer and information scientists, my colleagues and I have found that noise also plays a role in AI.

Statistical Noise

Noise in this context means variation in how people make judgments of the same problem or situation. The problem of noise is more pervasive than initially meets the eye. Seminal work dating back to the Great Depression found that different judges gave different sentences for similar cases.

Worryingly, sentencing in court cases can depend on things such as the temperature and whether the local football team won. Such factors, at least in part, contribute to the perception that the justice system is not just biased but also arbitrary at times.

Other examples: Insurance adjusters might give different estimates for similar claims, reflecting noise in their judgments. Noise is likely present in all manner of contests, ranging from wine tastings to local beauty pageants to college admissions.

Noise in the Data

On the surface, it doesn’t seem likely that noise could affect the performance of AI systems. After all, machines aren’t affected by weather or football teams, so why would they make judgments that vary with circumstance? On the other hand, researchers know that bias affects AI, because it is reflected in the data that the AI is trained on.

For the new spate of AI models like ChatGPT, the gold standard is human performance on general intelligence problems such as common sense. ChatGPT and its peers are measured against human-labeled commonsense datasets.

Put simply, researchers and developers can ask the machine a commonsense question and compare it with human answers: “If I place a heavy rock on a paper table, will it collapse? Yes or No.” If there is high agreement between the two—in the best case, perfect agreement—the machine is approaching human-level common sense, according to the test.

So where would noise come in? The commonsense question above seems simple, and most humans would likely agree on its answer, but there are many questions where there is more disagreement or uncertainty: “Is the following sentence plausible or implausible? My dog plays volleyball.” In other words, there is potential for noise. It is not surprising that interesting commonsense questions would have some noise.

But the issue is that most AI tests don’t account for this noise in experiments. Intuitively, questions generating human answers that tend to agree with one another should be weighted higher than if the answers diverge—in other words, where there is noise. Researchers still don’t know whether or how to weigh AI’s answers in that situation, but a first step is acknowledging that the problem exists.
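To make the idea concrete, here is a minimal sketch of one way agreement-based weighting could work. This is an illustration with hypothetical labels, not the method from the paper:

```python
from collections import Counter

# Toy illustration (hypothetical labels): weight each question by how much
# human labelers agree on it, so noisy questions count for less.
questions = [
    {"labels": ["yes", "yes", "yes", "yes", "yes"], "ai": "yes"},  # unanimous
    {"labels": ["yes", "yes", "yes", "no", "yes"],  "ai": "yes"},  # mild noise
    {"labels": ["no", "yes", "no", "yes", "no"],    "ai": "yes"},  # noisy
]

plain_correct, weighted_correct, total_weight = 0, 0.0, 0.0
for q in questions:
    majority, count = Counter(q["labels"]).most_common(1)[0]
    agreement = count / len(q["labels"])     # 1.0 means unanimous
    correct = q["ai"] == majority
    plain_correct += correct
    weighted_correct += agreement * correct  # credit scaled by agreement
    total_weight += agreement

print(f"Unweighted accuracy: {plain_correct / len(questions):.2f}")   # 0.67
print(f"Agreement-weighted:  {weighted_correct / total_weight:.2f}")  # 0.75
```

Here the AI's only miss falls on the noisiest question, so the weighted score forgives it more than the plain average does.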

Tracking Down Noise in the Machine

Theory aside, the question remains whether all of the above is hypothetical or whether real tests of common sense actually contain noise. The best way to prove or disprove the presence of noise is to take an existing test, remove the answers, and have multiple people independently provide their own labels—that is, their own answers. By measuring disagreement among humans, researchers can know just how much noise is in the test.

The details behind measuring this disagreement are complex, involving significant statistics and math. Besides, who is to say how common sense should be defined? How do you know the human judges are motivated enough to think through the question? These issues lie at the intersection of good experimental design and statistics. Robustness is key: One result, test, or set of human labelers is unlikely to convince anyone. As a pragmatic matter, human labor is expensive. Perhaps for this reason, there haven’t been any studies of possible noise in AI tests.

To address this gap, my colleagues and I designed such a study and published our findings in Nature Scientific Reports, showing that even in the domain of common sense, noise is inevitable. Because the setting in which judgments are elicited can matter, we did two kinds of studies. One type of study involved paid workers from Amazon Mechanical Turk, while the other study involved a smaller-scale labeling exercise in two labs at the University of Southern California and the Rensselaer Polytechnic Institute.

You can think of the former as a more realistic online setting, mirroring how many AI tests are actually labeled before being released for training and evaluation. The latter is more of an extreme, guaranteeing high quality but at much smaller scales. The question we set out to answer was how inevitable is noise, and is it just a matter of quality control?

The results were sobering. In both settings, even on commonsense questions that might have been expected to elicit high—even universal—agreement, we found a nontrivial degree of noise. The noise was high enough that we inferred that between 4 percent and 10 percent of a system’s performance could be attributed to noise.

To emphasize what this means, suppose I built an AI system that achieved 85 percent on a test, and you built an AI system that achieved 91 percent. Your system would seem to be a lot better than mine. But if there is noise in the human labels that were used to score the answers, then we’re not sure anymore that the 6 percent improvement means much. For all we know, there may be no real improvement.

On AI leaderboards, where large language models like the one that powers ChatGPT are compared, performance differences between rival systems are far narrower, typically less than 1 percent. As we show in the paper, ordinary statistics do not really come to the rescue for disentangling the effects of noise from those of true performance improvements.
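A toy simulation makes the point (all numbers hypothetical; it assumes binary questions, where a wrong answer coincides with a wrong gold label): two systems separated by a true 3-point gap are scored against labels that are themselves wrong 7 percent of the time.

```python
import random

random.seed(0)

N = 1000         # test questions
TRUE_GAP = 0.03  # system B is genuinely 3 points better than system A
NOISE = 0.07     # chance that a "gold" label is itself wrong

def measured_accuracy(true_acc: float) -> float:
    correct = 0
    for _ in range(N):
        answer_right = random.random() < true_acc
        label_right = random.random() > NOISE
        # Binary questions: a wrong answer matches a wrong label,
        # so graded "correct" means answer and label agree.
        correct += answer_right == label_right
    return correct / N

for trial in range(3):
    a = measured_accuracy(0.85)
    b = measured_accuracy(0.85 + TRUE_GAP)
    print(f"trial {trial}: A={a:.3f}  B={b:.3f}  measured gap={b - a:+.3f}")
```

Run it a few times and the measured gap wobbles from near zero to roughly double the true gap; label noise alone can swamp a difference of a few points.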

Noise Audits

What is the way forward? Returning to Kahneman’s book, he proposed the concept of a “noise audit” for quantifying and ultimately mitigating noise as much as possible. At the very least, AI researchers need to estimate what influence noise might be having.

Auditing AI systems for bias is somewhat commonplace, so we believe that the concept of a noise audit should naturally follow. We hope that this study, as well as others like it, leads to their adoption.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Michael Dziedzic / Unsplash

Google and Harvard Map a Tiny Piece of the Human Brain With Extreme Precision


Scientists just published the most detailed map of a cubic millimeter of the human brain. Smaller than a grain of rice, the mapped section of brain includes over 57,000 cells, 230 millimeters of blood vessels, and 150 million synapses.

The project, a collaboration between Harvard and Google, is looking to accelerate connectomics—the study of how neurons are wired together—over a much larger scale.

Our brains are like a jungle.

Neuron branches crisscross regions, forming networks that process perception, memories, and even consciousness. Blood vessels tightly wrap around these branches to provide nutrients and energy. Other brain cell types form intricate connections with neurons, support the brain’s immune function, and fine-tune neural network connections.

In biology, structure determines function. Like tracing wires of a computer, mapping components of the brain and their connections can improve our understanding of how the brain works—and when and why it goes wrong. A brain map that charts the jungle inside our heads could help us tackle some of the most perplexing neurological disorders, such as Alzheimer’s disease, and decipher the origins of emotions, thoughts, and behaviors.

Aided by machine learning tools from Google Research, the Harvard team traced neurons, blood vessels, and other brain cells at nanoscale levels. The images revealed previously unknown quirks in the human brain—including mysterious tangles in neuron wiring and neurons that connect through multiple “contacts” to other cells. Overall, the dataset incorporates a massive 1.4 petabytes of information—roughly the storage amount of a thousand high-end laptops—and is free to explore.

“It’s a little bit humbling,” Dr. Viren Jain, a neuroscientist at Google and study author, told Nature. “How are we ever going to really come to terms with all this complexity?” The database, first released as a preprint paper in 2021, has already garnered much enthusiasm in the scientific field.

“It’s probably the most computer-intensive work in all of neuroscience,” Dr. Michael Hawrylycz, a computational neuroscientist at the Allen Institute for Brain Science, who was not involved in the project, told MIT Technology Review.

Why So Complicated?

Many types of brain maps exist. Some chart gene expression in brain cells; others map different cell types across the brain. But the goal is the same. They aim to help scientists understand how the brain works in health and disease.

The connectome details highways between brain regions that “talk” to each other. These connections, called synapses, number in the hundreds of trillions in human brains—far more than the number of stars in the Milky Way.

Decades ago, the first whole-brain wiring map detailed all 302 neurons in the roundworm Caenorhabditis elegans. Because its genetics are largely known, the lowly worm delivered insights, such as how the brain and body communicate to increase healthy longevity. Next, scientists charted the fruit fly connectome and found the underpinnings of spatial navigation.

More recently, the MouseLight Project and MICrONS have been deciphering a small chunk of a mouse’s brain—the outermost area called the cortex. It’s hoped such work can help inform neuro-inspired AI algorithms with lower power requirements and higher efficacy.

But mice are not people. In the new study, scientists mapped a cubic millimeter of human brain tissue from the temporal cortex—a nexus that’s important for memory, emotions, and sensations. Although just one-millionth of a human brain, the effort reconstructed connections in 3D at nanoscale resolution.

Slice It Up

Sourcing is a challenge when mapping the human brain. Brain tissues rapidly deteriorate after trauma or death, which changes their wiring and chemistry. Brain organoids—”mini-brains” grown in test tubes—somewhat resemble the brain’s architecture, but they can’t replicate the real thing.

Here, the team took a tiny bit of brain tissue from a 45-year-old woman with epilepsy during surgery—the last resort for those who suffer severe seizures and don’t respond to medication.

Using a machine like a deli-meat slicer armed with a diamond knife, the Harvard team, led by connectome expert Dr. Jeff Lichtman, meticulously sliced the sample into 5,019 cross sections. Each was roughly 30 nanometers thick—a fraction of the width of a human hair. They imaged the slices with an electron microscope, capturing nanoscale cellular details, including the “factories” inside cells that produce energy, eliminate waste, or transport molecules.

Piecing these 2D images into a 3D reconstruction is a total headache. A decade ago, scientists had to do it by hand. Jain’s team at Google developed an AI to automate the job. The AI was able to track fragments of whole components—say, a part of a neuron (its body or branches)—and stick them back together throughout the images.

In total, the team pieced together thousands of neurons and over a hundred million synaptic connections. Other brain components included blood vessels and myelin—a protective molecular “sheath” covering neurons that acts like electrical insulation. When myelin deteriorates, it can cause multiple brain disorders.

“I remember this moment, going into the map and looking at one individual synapse from this woman’s brain, and then zooming out into these other millions of pixels,” Jain told Nature. “It felt sort of spiritual.”

A Whole New World

Even a cursory look at the data led to surprising insights into the brain’s intricate neural wiring.

Cortical neurons have a forest-like structure of input branches and a single output “cable,” called an axon, dotted with thousands of synapses connecting to other cells.

Usually, a synapse grabs onto just one spot of a neighboring neuron. But the new map found a rare, strange group that connects with up to 50 points. “We’ve always had a theory that there would be super connections, if you will, amongst certain cells…But it’s something we’ve never had the resolution to prove,” Dr. Tim Mosca, who was not involved in the work, told Popular Science. These could be extra-potent connections that allow neural communications to go into “autopilot mode,” like when riding a bike or navigating familiar neighborhoods.

More strange structures included “axon whorls” that wrapped around themselves like tangled headphones. An axon’s main purpose is to reach out and connect with other neurons—so why do some fold into themselves? Do they serve a purpose, or are they just a hiccup in brain wiring? It’s a mystery. Another odd observation: pairs of neurons that perfectly mirror each other. What this symmetry does for the brain is also unknown.

The bottom line: Our understanding of the brain’s connections and inner workings is still only scratching the surface. The new database is a breakthrough, but it’s not perfect. The results are from a single person with epilepsy, which can’t represent everyone. Some wiring changes, for example, may be due to the disorder. The team is planning a follow-up to separate epilepsy-related circuits from those that are more universal in people.

Meanwhile, they’ve opened the entire database for anyone to explore. And the team is also working with scientists to manually examine the results and eliminate potential AI-induced errors during reconstruction. So far, hundreds of cells have been “proofread” and validated by humans, but it’s just a fraction of the 50,000 neurons in the database.

The technology can also be used for other species, such as the zebrafish—another animal model often used in neuroscience research—and eventually the entire mouse brain.

Although this study only traced a tiny nugget of the human brain, the atlas is a stunning way to peek inside its seemingly chaotic wiring and make sense of things. “Further studies using this resource may bring valuable insights into the mysteries of the human brain,” wrote the team.

Image Credit: Google Research and Lichtman Lab

This Week’s Awesome Tech Stories From Around the Web (Through May 11)

ARTIFICIAL INTELLIGENCE

OpenAI Could Unveil Its Google Search Competitor on Monday
Jess Weatherbed | The Verge
“OpenAI is reportedly gearing up to announce a search product powered by artificial intelligence on Monday that could threaten Google’s dominance. That target date, provided to Reuters by ‘two sources familiar with the matter,’ would time the announcement a day before Google kicks off its annual I/O conference, which is expected to focus on the search giant’s own AI model offerings like Gemini and Gemma.”


ROBOTICS

DeepMind Is Experimenting With a Nearly Indestructible Robot Hand
Jeremy Hsu | New Scientist
“This latest robotic hand developed by the UK-based Shadow Robot Company can go from fully open to closed within 500 milliseconds and perform a fingertip pinch with up to 10 newtons of force. It can also withstand repeated punishment such as pistons punching the fingers from multiple angles or a person smashing the device with a hammer.”

BIOTECH

First Patient Begins Newly Approved Sickle Cell Gene Therapy
Gina Kolata | The New York Times
“On Wednesday, Kendric Cromer, a 12-year-old boy from a suburb of Washington, became the first person in the world with sickle cell disease to begin a commercially approved gene therapy that may cure the condition. For the estimated 20,000 people with sickle cell in the United States who qualify for the treatment, the start of Kendric’s monthslong medical journey may offer hope. But it also signals the difficulties patients face as they seek a pair of new sickle cell treatments.”

SPACE

Commercial Space Stations Approach Launch Phase 
Andrew Jones | IEEE Spectrum
“A changing of the guard in space stations is on the horizon as private companies work towards providing new opportunities for science, commerce, and tourism in outer space. …The challenge [new space stations like Blue Origin’s] Orbital Reef faces is considerable: reimagining successful earthbound technologies—such as regenerative life support systems, expandable habitats and 3D printing—but now in orbit, on a commercially viable platform.”

FUTURE

This Gigantic 3D Printer Could Reinvent Manufacturing
Nate Berg | Fast Company
“This machine isn’t just spitting out basic building materials like some massive glue gun. It’s also able to do subtractive manufacturing, like milling, as well as utilize a robotic arm for more complicated tasks. A built-in system allows it to lay down fibers in a printed object that give it greater structural integrity, allowing printed spans to stretch farther, and enabling factory-based 3D printed buildings to become even larger.”

AUTOMATION

Wayve Raises $1B to Take Its Tesla-Like Technology for Self-Driving to Many Carmakers
Mike Butcher | TechCrunch
“Wayve calls its hardware-agnostic mapless product an ‘Embodied AI,’ and it plans to distribute its platform not just to car makers but also to robotics companies serving manufacturers of all descriptions, allowing the platform to learn from human behavior in a wide variety of real-world environments.”

BIOTECH

The US Is Cracking Down on Synthetic DNA
Emily Mullin | Wired
“Synthesizing DNA has been possible for decades, but it’s become increasingly easier, cheaper, and faster to do so in recent years thanks to new technology that can ‘print’ custom gene sequences. Now, dozens of companies around the world make and ship synthetic nucleic acids en masse. And with AI, it’s becoming possible to create entirely new sequences that don’t exist in nature—including those that could pose a threat to humans or other living things.”

SPACE

Fall Into a Black Hole in Mind-Bending NASA Animation
Robert Lea | Space.com
“If you’ve ever wondered what would happen if you were unlucky enough to fall into a black hole, NASA has your answer. A visualization created on a NASA supercomputer to celebrate the beginning of black hole week on Monday (May 6) takes the viewer on a one-way plunge beyond the event horizon of a black hole.”

ENERGY

A Company Is Building a Giant Compressed-Air Battery in the Australian Outback
Dan Gearino | Wired
“Toronto-based Hydrostor is one of the businesses developing long-duration energy storage that has moved beyond lab scale and is now focusing on building big things. The company makes systems that store energy underground in the form of compressed air, which can be released to produce electricity for eight hours or longer.”

SCIENCE

The Way Whales Communicate Is Closer to Human Language Than We Realized
Rhiannon Williams | MIT Technology Review
“A team of researchers led by Pratyusha Sharma at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) working with Project CETI, a nonprofit focused on using AI to understand whales, used statistical models to analyze whale codas and managed to identify a structure to their language that’s similar to features of the complex vocalizations humans use. Their findings represent a tool future research could use to decipher not just the structure but the actual meaning of whale sounds.”

Image Credit: Benjamin Cheng / Unsplash

Global Carbon Capture Capacity Quadruples as the Biggest Plant Yet Revs Up in Iceland


Pulling carbon dioxide out of the atmosphere is likely to be a crucial weapon in the battle against climate change. And now global carbon capture capacity has quadrupled with the opening of the world’s largest direct air capture plant in Iceland.

Scientists and policymakers initially resisted proposals to remove CO2 from the atmosphere, due to concerns it could lead to a reduced sense of urgency around emissions reductions. But with progress on that front falling behind schedule, there’s been growing acceptance that carbon capture will be crucial if we want to avoid the worst consequences of climate change.

A variety of approaches, including reforestation, regenerative agriculture, and efforts to lock carbon up in minerals, could play a role. But the approach garnering most of the attention is direct air capture, which relies on large facilities powered by renewable energy to suck CO2 out of the air.

One of the leaders in this space is Swiss company Climeworks, whose Orca plant in Iceland previously held the title for world’s largest. But this week, the company started operations at a new plant called Mammoth that has nearly ten times the capacity. The facility, also in Iceland, will be able to extract 36,000 tons of CO2 a year, which is nearly four times the 10,000 tons a year currently being captured globally.

“Starting operations of our Mammoth plant is another proof point in Climeworks’ scale-up journey to megaton capacity by 2030 and gigaton by 2050,” co-CEO of Climeworks Jan Wurzbacher said in a statement. “Constructing multiple real-world plants in rapid sequences makes Climeworks the most deployed carbon removal company with direct air capture at the core.”

Climeworks plants use fans to suck air into large collector units filled with a material called a sorbent, which absorbs CO2. Once the sorbent is saturated, the collector shuts and is heated to roughly 212 degrees Fahrenheit (100 degrees Celsius) to release the CO2.

The Mammoth plant will eventually feature 72 of these collector units, though only 12 are currently operational. That’s already more than Orca’s eight units, which capture roughly 4,000 tons of CO2 a year. Adding an extra level to the stacks of collectors has also reduced land use per ton of CO2 captured, while a new V-shaped configuration improves airflow, boosting performance.

To permanently store the captured carbon, Climeworks has partnered with Icelandic company Carbfix, which has developed a process to inject CO2 dissolved in water deep into porous rock formations made of basalt. Over the course of a couple years, the dissolved CO2 reacts with the rocks to form solid carbonate minerals that are stable for thousands of years.

With the Orca plant, CO2 had to be transported through hundreds of meters of pipeline to Carbfix’s storage site. But Mammoth features two injection wells on-site, reducing transportation costs. It also has a new CO2 absorption tower that dissolves the gas in water at lower pressures, reducing energy costs compared to the previous approach.

Climeworks has much bigger ambitions than Mammoth though. The US government has earmarked $3.5 billion to build four direct air capture hubs, each capable of capturing one million tons of CO2 a year, and Climeworks will provide the technology for one of the proposed facilities in Louisiana.

The company says it’s aiming to reach megaton scale—removing one million tons a year—by 2030 and gigaton scale—a billion tons a year—by 2050. Hopefully, it won’t be the only one, because climate forecasts suggest we’ll need to be removing 3.5 gigatons of CO2 a year by 2050 to keep warming below 1.5 degrees Celsius.
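
To put those targets in perspective, here is a quick back-of-envelope check in Python, using only the figures quoted in this article (the plant counts are rough illustrations, not company projections):

```python
# Rough scale check using the article's figures: how many Mammoth-sized
# plants would megaton- and gigaton-scale removal require?
mammoth_tons_per_year = 36_000

megaton = 1_000_000            # 2030 target, tons of CO2 per year
gigaton = 1_000_000_000        # 2050 target, tons of CO2 per year

print(megaton / mammoth_tons_per_year)        # ~28 Mammoth-sized plants
print(gigaton / mammoth_tons_per_year)        # ~27,800 plants
print(3.5 * gigaton / mammoth_tons_per_year)  # ~97,000 plants for 3.5 Gt/year
```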

There’s also little clarity on the economics of the approach. According to Reuters, Climeworks did not reveal how much it costs Mammoth to remove each ton of CO2, though it said it’s targeting $400-600 per ton by 2030 and $200-350 per ton by 2040. And while plants in Iceland can take advantage of abundant, green geothermal energy, it’s less clear what they will rely on elsewhere.

Either way, there’s growing agreement that carbon capture will be an important part of our efforts to tackle climate change. While Mammoth might not make much of a dent in emissions, it’s a promising sign that direct air capture technology is maturing.

Image Credit: Climeworks

Google DeepMind’s New AlphaFold AI Maps Life’s Molecular Dance in Minutes


Proteins are biological workhorses.

They build our bodies and orchestrate the molecular processes in cells that keep them healthy. They also present a wealth of targets for new medications. From everyday pain relievers to sophisticated cancer immunotherapies, most current drugs interact with a protein. Deciphering protein architectures could lead to new treatments.

That was the promise of AlphaFold 2, an AI model from Google DeepMind that predicted how proteins gain their distinctive shapes based on the sequences of their constituent molecules alone. Released in 2020, the tool was a breakthrough half a decade in the making.

But proteins don’t work alone. They inhabit an entire cellular universe and often collaborate with other molecular inhabitants, such as DNA, the body’s genetic blueprint.

This week, DeepMind and Isomorphic Labs released a big new update that allows the algorithm to predict how proteins work inside cells. Instead of only modeling their structures, the new version—dubbed AlphaFold 3—can also map a protein’s interactions with other molecules.

For example, could a protein bind to a disease-causing gene and shut it down? Can adding new genes to crops make them resilient to viruses? Can the algorithm help us rapidly engineer new vaccines to tackle existing diseases—or whatever new ones nature throws at us?

“Biology is a dynamic system…you have to understand how properties of biology emerge due to the interactions between different molecules in the cell,” said Demis Hassabis, the CEO of DeepMind, in a press conference.

AlphaFold 3 helps explain “not only how proteins talk to themselves, but also how they talk to other parts of the body,” said lead author Dr. John Jumper.

The team is releasing the new AI online for academic researchers by way of an interface called the AlphaFold Server. With a few clicks, a biologist can run a simulation of an idea in minutes, compared to the weeks or months usually needed for experiments in a lab.

Dr. Julien Bergeron at King’s College London, who builds nano-protein machines but was not involved in the work, said the AI is “transformative science” for speeding up research, which could ultimately lead to nanotech devices powered by the body’s mechanisms alone.

For Dr. Frank Uhlmann at the Francis Crick Institute, who gained early access to AlphaFold 3 and used it to study how DNA is divided up when cells divide, the AI is “democratizing discovery research.”

Molecular Universe

Proteins are finicky creatures. They’re made of strings of molecules called amino acids that fold into intricate three-dimensional shapes, and those shapes determine what a protein can do.

Sometimes the folding process goes wrong. In Alzheimer’s disease, misfolded proteins clump into dysfunctional blobs that accumulate around and inside brain cells.

Scientists have long tried to engineer drugs to break up disease-causing proteins. One strategy is to map protein structure—know thy enemy (and friends). Before AlphaFold, this was done with electron microscopy, which captures a protein’s structure at the atomic level. But it’s expensive and labor-intensive, and not all proteins can tolerate the scan.

Which is why AlphaFold 2 was revolutionary. Using amino acid sequences alone—the constituent molecules that make up proteins—the algorithm could predict a protein’s final structure with startling accuracy. DeepMind used AlphaFold to map the structure of nearly all proteins known to science and how they interact. According to the AI lab, in just three years, researchers have mapped roughly six million protein structures using AlphaFold 2.

But to Jumper, modeling proteins isn’t enough. To design new drugs, you have to think holistically about the cell’s whole ecosystem.

It’s an idea championed by Dr. David Baker at the University of Washington, another pioneer in the protein-prediction space. Baker’s team released AI-based software called RoseTTAFold in 2021 and has since extended it as RoseTTAFold All-Atom to tackle interactions between proteins and other biomolecules.

Picturing these interactions can help solve tough medical challenges, allowing scientists to design better cancer treatments or more precise gene therapies, for example.

“Properties of biology emerge through the interactions between different molecules in the cell,” said Hassabis in the press conference. “You can think about AlphaFold 3 as our first big sort of step towards that.”

A Revamp

AlphaFold 3 builds on its predecessor, but with significant renovations.

One way to gauge how a protein interacts with other molecules is to examine evolution. Another is to map a protein’s 3D structure and—with a dose of physics—predict how it can grab onto other molecules. While AlphaFold 2 mostly used an evolutionary approach—training the AI on what we already know about protein evolution in nature—the new version heavily embraces physical and chemical modeling.

Some of this modeling covers chemical modifications. Proteins are often tagged with small chemical groups. These tags can change a protein’s structure and are essential to its behavior—they can literally determine a cell’s fate, whether life, senescence, or death.

The algorithm’s overall setup makes some use of its predecessor’s machinery to map proteins, DNA, and other molecules and their interactions. But the team also looked to diffusion models—the algorithms behind OpenAI’s DALL-E 2 image generator—to capture structures at the atomic level. Diffusion models are trained to strip noise away step by step until they arrive at a prediction of what the image (or, in this case, the 3D model of a biomolecule) should look like without the noise. This addition made a “substantial change” to performance, said Jumper.
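
To make the idea of “reversing noise in steps” concrete, here is a deliberately toy sketch of a diffusion-style loop in Python. It is not AlphaFold 3’s code: the “denoiser” below cheats by peeking at the true answer, whereas a real model learns that prediction step from data.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, -1.0, 0.5])   # stand-in for atom coordinates or pixels
T = 100

# Forward process: repeatedly mix in Gaussian noise until structure is gone.
noisy = x.copy()
for _ in range(T):
    noisy = 0.99 * noisy + 0.14 * rng.normal(size=x.shape)

# Reverse process: remove a little of the estimated noise at each step.
# A trained model would estimate the noise from the noisy input alone;
# this toy cheats and uses the known answer to show the shape of the loop.
sample = noisy.copy()
for _ in range(T):
    estimated_noise = sample - x
    sample = sample - 0.05 * estimated_noise

print(np.round(sample, 2))  # close to the original x
```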

Like AlphaFold 2, the new version has a built-in “sanity check” that indicates how confident it is in a generated model so scientists can proofread its outputs. This has been a core component of all their work, said the DeepMind team. They trained the AI using the Protein Data Bank, an open-source compilation of 3D protein structures that’s constantly updated, including new experimentally validated structures of proteins binding to DNA and other biomolecules.

Pitted against existing software, AlphaFold 3 broke records. One test for molecular interactions between proteins and small molecules—ones that could become medications—succeeded 76 percent of the time. Previous attempts were successful in roughly 42 percent of cases.

When it comes to deciphering protein functions, AlphaFold 3 “seeks to solve the exact same problem [as RoseTTAFold All-Atom]…but is clearly more accurate,” Baker told Singularity Hub.

But the tool’s accuracy depends on which interaction is being modeled. The algorithm isn’t yet great at protein-RNA interactions, for example, Columbia University’s Mohammed AlQuraishi told MIT Technology Review. Overall, accuracy ranged from 40 to more than 80 percent.

AI to Real Life

Unlike previous iterations, DeepMind isn’t open-sourcing AlphaFold 3’s code. Instead, they’re releasing the tool as a free online platform, called AlphaFold Server, that allows scientists to test their ideas for protein interactions with just a few clicks.

AlphaFold 2 required technical expertise to install and run the software. The server, in contrast, can help people unfamiliar with code to use the tool. It’s for non-commercial use only and can’t be reused to train other machine learning models for protein prediction. But it is freely available for scientists to try. The team envisions the software helping develop new antibodies and other treatments at a faster rate. Isomorphic Labs, a spin-off of DeepMind, is already using AlphaFold 3 to develop medications for a variety of diseases.

For Bergeron, the upgrade is “transformative.” Instead of spending years in the lab, it’s now possible to mimic protein interactions in silico—a computer simulation—before beginning the labor- and time-intensive work of investigating promising solutions using cells.

“I’m pretty certain that every structural biology and protein biochemistry research group in the world will immediately adopt this system,” he said.

Image Credit: Google DeepMind

Astronomers Discover 27,500 New Asteroids Lurking in Archival Images


There are well over a million asteroids in the solar system. Most don’t cross paths with Earth, but some do and there’s a risk one of these will collide with our planet. Taking a census of nearby space rocks, then, is prudent. As conventional wisdom would have it, we’ll need lots of telescopes, time, and teams of astronomers to find them.

But maybe not, according to the B612 Foundation’s Asteroid Institute.

In tandem with Google Cloud, the Asteroid Institute recently announced they’ve spotted 27,500 new asteroids—more than all discoveries worldwide last year—without requiring a single new observation. Instead, over a period of just a few weeks, the team used new software to scour 1.7 billion points of light in some 400,000 images taken over seven years and archived by the National Optical-Infrared Astronomy Research Laboratory (NOIRLab).

To discover new asteroids, astronomers usually need multiple images over several nights (or more) to find moving objects and calculate their orbits. This means they have to make new observations with asteroid discovery in mind. There is also, however, a trove of existing one-time observations made for other purposes, and these are likely packed with photobombing asteroids. But identifying them is difficult and computationally intensive.

Working with the University of Washington, the Asteroid Institute team developed an algorithm, Tracklet-less Heliocentric Orbit Recovery, or THOR, to scan archived images recorded at different times or even by different telescopes. The tool can tell if moving points of light recorded in separate images are the same object. Many of these will be asteroids.
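
As a rough illustration of the question such software answers (this sketch is not THOR itself, which fits full heliocentric orbits rather than straight lines), here is a toy Python check of whether detections taken at different times could be a single object in constant-velocity motion:

```python
import numpy as np

def consistent_motion(detections, tol=0.01):
    """detections: (time, x, y) sky positions from different images.
    Returns True if one constant-velocity track fits them all within tol."""
    t = np.array([d[0] for d in detections])
    A = np.stack([t, np.ones_like(t)], axis=1)  # position = v * t + p0
    for coord in (1, 2):  # check x, then y
        pos = np.array([d[coord] for d in detections])
        coef, *_ = np.linalg.lstsq(A, pos, rcond=None)
        if np.max(np.abs(A @ coef - pos)) >= tol:
            return False
    return True

# Three detections, possibly from different telescopes on different nights.
obs = [(0.0, 10.000, 5.000), (1.0, 10.050, 5.020), (3.0, 10.150, 5.060)]
print(consistent_motion(obs))  # True: the points advance linearly with time
```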

Running THOR on Google Cloud, the team scoured the NOIRLab data and found plenty. Most of the new asteroids are in the main asteroid belt, but more than 100 are near-Earth asteroids. Though the team classified their findings as “high-confidence,” these near-Earth asteroids have not yet been confirmed. They’ll submit their findings to the Minor Planet Center, and ESA and NASA will then verify orbits and assess risk. (The team says they have no reason to believe any pose a risk to Earth.)

While the new software could speed up the pace of discovery, the process still requires volunteers and scientists to manually review the algorithm’s finds. The team plans to use the raw data from the recent run, including the human review, to train an AI model. The hope is that some or all of the manual review can be automated, making the process even faster.

In the future, the algorithm will go to work on data from the Vera C. Rubin Observatory, a telescope in Chile’s Atacama desert. The telescope, set to begin operations next year, will make twice-nightly observations of the sky with asteroid detection in mind. THOR may be able to make discoveries with only one nightly run, freeing the telescope up for other work.

All this is in service of the plan to discover as many Earth-crossing asteroids as possible.

According to NASA, we’ve found over 1.3 million asteroids, 35,000 of which are near-Earth asteroids. Of these, over 90 percent of the biggest and most dangerous—in the same class as the one that ended the dinosaurs—have been discovered. Scientists are now filling out the list of smaller but still dangerous asteroids. The vast majority of all known asteroids were catalogued this century. Before that, we were flying blind.

While no dangerous asteroids are known to be headed our way soon, space agencies are working on a plan of action—sans nukes and Bruce Willis—should we discover one.

In 2022, NASA rammed the DART spacecraft into an asteroid, Dimorphos, to see if it would deflect the space rock’s orbit. This is a planetary defense strategy known as a “kinetic impactor.” Scientists thought DART might change the asteroid’s orbit by 7 minutes. Instead, DART changed Dimorphos’ orbit by a whopping 33 minutes, much of which was due to recoil produced by a giant plume of material ejected by the impact.

The conclusion of scientists studying the aftermath? “Kinetic impactor technology is a viable technique to potentially defend Earth if necessary.” With the caveat: If we have enough time. Such impacts amount to a nudge, so we need years of advance notice.

Algorithms like THOR could help give us that crucial heads up.

Image Credit: B612 Foundation

AI Can Now Generate Entire Songs on Demand. What Does This Mean for Music as We Know It?


In March, we saw the launch of a “ChatGPT for music” called Suno, which uses generative AI to produce realistic songs on demand from short text prompts. A few weeks later, a similar competitor, Udio, arrived on the scene.

I’ve been working with various creative computational tools for the past 15 years, both as a researcher and a producer, and the recent pace of change has floored me. As I’ve argued elsewhere, the view that AI systems will never make “real” music like humans do should be understood more as a claim about social context than technical capability.

The argument “sure, it can make expressive, complex-structured, natural-sounding, virtuosic, original music which can stir human emotions, but AI can’t make proper music” can easily begin to sound like something from a Monty Python sketch.

After playing with Suno and Udio, I’ve been thinking about what it is exactly they change—and what they might mean not only for the way professionals and amateur artists create music, but the way all of us consume it.

Expressing Emotion Without Feeling It

Generating audio from text prompts in itself is nothing new. However, Suno and Udio have made an obvious development: from a simple text prompt, they generate song lyrics (using a ChatGPT-like text generator), feed them into a generative voice model, and integrate the “vocals” with generated music to produce a coherent song segment.

This integration is a small but remarkable feat. The systems are very good at making up coherent songs that sound expressively “sung” (there I go anthropomorphizing).

The effect can be uncanny. I know it’s AI, but the voice can still cut through with emotional impact. When the music performs a perfectly executed end-of-bar pirouette into a new section, my brain gets some of those little sparks of pattern-processing joy that I might get listening to a great band.

To me this highlights something sometimes missed about musical expression: AI doesn’t need to experience emotions and life events to successfully express them in music that resonates with people.

Music as an Everyday Language

Like other generative AI products, Suno and Udio were trained on vast amounts of existing work by real humans—and there is much debate about those humans’ intellectual property rights.

Nevertheless, these tools may mark the dawn of mainstream AI music culture. They offer new forms of musical engagement that people will just want to use, to explore, to play with, and actually listen to for their own enjoyment.

AI capable of “end-to-end” music creation is arguably not technology for makers of music, but for consumers of music. For now it remains unclear whether users of Udio and Suno are creators or consumers—or whether the distinction is even useful.

A long-observed phenomenon in creative technologies is that as something becomes easier and cheaper to produce, it is used for more casual expression. As a result, the medium goes from an exclusive high art form to more of an everyday language—think what smartphones have done to photography.

So imagine you could send your father a professionally produced song all about him for his birthday, with minimal cost and effort, in a style of his preference—a modern-day birthday card. Researchers have long considered this eventuality, and now we can do it. Happy birthday, Dad!

Mr Bown’s Blues. Generated by Oliver Bown using Udio.

Can You Create Without Control?

Whatever these systems have achieved and may achieve in the near future, they face a glaring limitation: the lack of control.

Text prompts are often not much good as precise instructions, especially in music. So these tools are fit for blind search—a kind of wandering through the space of possibilities—but not for accurate control. (That’s not to diminish their value. Blind search can be a powerful creative force.)

Viewing these tools as a practicing music producer, things look very different. Although Udio’s about page says “anyone with a tune, some lyrics, or a funny idea can now express themselves in music,” I don’t feel I have enough control to express myself with these tools.

I can see them being useful to seed raw materials for manipulation, much like samples and field recordings. But when I’m seeking to express myself, I need control.

Using Suno, I had some fun finding the most gnarly dark techno grooves I could get out of it. The result was something I would absolutely use in a track.

Cheese Lovers’ Anthem. Generated by Oliver Bown using Suno.

But I found I could also just gladly listen. I felt no compulsion to add anything or manipulate the result to add my mark.

And many jurisdictions have declared that you won’t be awarded copyright for something just because you prompted it into existence with AI.

For a start, the output depends just as much on everything that went into the AI—including the creative work of millions of other artists. Arguably, you didn’t do the work of creation. You simply requested it.

New Musical Experiences in the No-Man’s Land Between Production and Consumption

So Udio’s declaration that anyone can express themselves in music is an interesting provocation. The people who use tools like Suno and Udio may be considered more consumers of music AI experiences than creators of music AI works, or as with many technological impacts, we may need to come up with new concepts for what they’re doing.

A shift to generative music may draw attention away from current forms of musical culture, just as the era of recorded music saw the diminishing (but not death) of orchestral music, which was once the only way to hear complex, timbrally rich and loud music. If engagement in these new types of music culture and exchange explodes, we may see reduced engagement in the traditional music consumption of artists, bands, radio and playlists.

While it is too early to tell what the impact will be, we should be attentive. The effort to defend existing creators’ intellectual property protections, a significant moral rights issue, is part of this equation.

But even if it succeeds, I believe it won’t fundamentally address this potentially explosive shift in culture. Historically, claims that a new kind of music is inferior have done little to halt cultural change; the same was once said of techno and, long before that, jazz. Government AI policies may need to look beyond these issues to understand how music works socially and to ensure that our musical cultures are vibrant, sustainable, enriching, and meaningful for both individuals and communities.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Pawel Czerwinski / Unsplash

A Massive Study Is Revealing Why Exercise Is So Good for Our Health


We all know that exercise is good for us.

A brisk walk of roughly an hour a day can stave off chronic diseases, including heart or blood vessel issues and Type 2 diabetes. Regular exercise delays memory loss due to aging, boosts the immune system, slashes stress, and may even increase lifespan.

For decades, scientists have tried to understand why. Throughout the body, our organs and tissues release a wide variety of molecules during—and even after—exercise, and these molecules appear to underlie its benefits. But no single molecule works alone. The hard part is understanding how they collaborate in networks after exercise.

Enter the Molecular Transducers of Physical Activity Consortium (MoTrPAC) project. Established nearly a decade ago and funded by the National Institutes of Health (NIH), the project aims to create comprehensive molecular maps of how genes and proteins change after exercise in both rodents and people. Rather than focusing on single proteins or genes, the project takes a Google Earth approach—let’s see the overall picture.

It’s not simply for scientific curiosity. If we can find important molecular processes that trigger exercise benefits, we could potentially mimic those reactions using medications and help people who physically can’t work out—a sort of “exercise in a pill.”

This month, the project announced multiple results.

In one study, scientists built an atlas of bodily changes before, during, and after exercise in rats. Altogether, the team collected nearly 9,500 samples across multiple tissues to examine how exercise changes gene expression across the body. Another study detailed differences between sexes after exercise. A third team mapped exercise-related genes to those associated with diseases.

According to the project’s NIH webpage: “When the MoTrPAC study is completed, it will be the largest research study examining the link between exercise and its improvement of human health.”

Work It

Our tissues are chatterboxes. The gut “talks” to the brain through a vast maze of molecules. Muscles pump out proteins to fine-tune immune system defenses. Plasma—the liquid part of blood—can transfer the learning and memory benefits of running when injected into “couch potato” mice and delay cognitive decline.

Over the years, scientists have identified individual molecules and processes that could mediate these effects, but the health benefits are likely due to networks of molecules working together.

“MoTrPAC was launched to fill an important gap in exercise research,” said former NIH director Dr. Francis Collins in a 2020 press release. “It shifts focus from a specific organ or disease to a fundamental understanding of exercise at the molecular level—an understanding that may lead to personalized, prescribed exercise regimens based on an individual’s needs and traits.”

The project has two arms. One observes rodents before, during, and after wheel running to build comprehensive maps of molecular changes due to exercise. These maps aim to capture gene expression alongside metabolic and epigenetic changes in multiple organs.

Another arm will recruit roughly 2,600 healthy volunteers aged 10 to over 60 years old. With a large pool of participants, the team hopes to account for variation between people and even identify differences in the body’s response to exercise based on age, gender, or race. The volunteers will undergo 12 weeks of exercise, either endurance training—such as long-distance running—or weightlifting.

Altogether, the goal is to detect how exercise affects cells at a molecular level in multiple tissue types—blood, fat, and muscle.

Exercise Encyclopedia

Last week, MoTrPAC released an initial wave of findings.

In one study, the group collected blood and 18 different tissue samples from adult rats, both male and female, as they happily ran for a week to two months. The team then screened how the body changes with exercise by comparing rats that worked out with “couch potato” rats as a baseline. Physical training increased the rats’ aerobic capacity—the amount of oxygen the body can use—by roughly 17 percent.

Next, the team analyzed the molecular fingerprints of exercise in whole blood, plasma, and 18 solid tissues, including heart, liver, lung, kidney, fat tissue, and the hippocampus, a brain region associated with memory. They used an impressive array of tools that, for example, captured changes in overall gene expression and the epigenetic landscape. Others mapped differences in the body’s proteins, fat, immune system, and metabolism.

“Altogether, datasets were generated from 9,466 assays across 211 combinations of tissues and molecular platforms,” wrote the team.

Using an AI-based method, they integrated the results across time into a comprehensive molecular map. The map pinpointed multiple molecular changes that could dampen liver disease and inflammatory bowel disease and protect against heart problems and tissue injury.

All this represents “the first whole-organism molecular map” capturing how exercise changes the body, wrote the team. (All of the data is free to explore.)

Venus and Mars

Most previous studies on exercise in rodents focused on males. What about the ladies?

After analyzing the MoTrPAC database, another study found that exercise changes the body’s molecular signaling differently depending on biological sex.

After running, female rats ramped up genes in white fat—the type under the skin—related to insulin signaling and the body’s ability to form fat. Meanwhile, males showed molecular signatures of a ramped-up metabolism.

With consistent exercise, male rats rapidly lost fat and weight, whereas females maintained their curves but with improved insulin signaling, which might protect them against heart diseases.

A third study integrated gene expression data collected from exercised rats with disease-relevant gene databases previously found in humans. The goal is to link workout-related genes in a particular organ or tissue with a disease or other health outcome—what the authors call “trait-tissue-gene triplets.” Overall, they found 5,523 triplets “to serve as a valuable starting point for future investigations,” they wrote.

We’re only scratching the surface of the complex puzzle that is exercise. Through extensive mapping efforts, the project aims to eventually tailor workout regimens for people with chronic diseases or identify key “druggable” components that could confer some health benefits of exercise with a pill.

“This is an unprecedented large-scale effort to begin to explore—in extreme detail—the biochemical, physiological, and clinical impact of exercise,” Dr. Russell Tracy at the University of Vermont, a MoTrPAC member, said in a press release.

Image Credit: Fitsum Admasu / Unsplash

This Week’s Awesome Tech Stories From Around the Web (Through May 4)

ARTIFICIAL INTELLIGENCE

Sam Altman Says Helpful Agents Are Poised to Become AI’s Killer Function
James O’Donnell | MIT Technology Review
“Altman, who was visiting Cambridge for a series of events hosted by Harvard and the venture capital firm Xfund, described the killer app for AI as a ‘super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.’ It could tackle some tasks instantly, he said, and for more complex ones it could go off and make an attempt, but come back with questions for you if it needs to.”

COMPUTING

Expect a Wave of Wafer-Scale Computers
Samuel K. Moore | IEEE Spectrum
“At TSMC’s North American Technology Symposium on Wednesday, the company detailed both its semiconductor technology and chip-packaging technology road maps. While the former is key to keeping the traditional part of Moore’s Law going, the latter could accelerate a trend toward processors made from more and more silicon, leading quickly to systems the size of a full silicon wafer. …In 2027, you will get a full-wafer integration that delivers 40 times as much compute power, more than 40 reticles’ worth of silicon, and room for more than 60 high-bandwidth memory chips, TSMC predicts.”

FUTURE

Nick Bostrom Made the World Fear AI. Now He Asks: What if It Fixes Everything?
Will Knight | Wired
“With the publication of his last book, Superintelligence: Paths, Dangers, Strategies, in 2014, Bostrom drew public attention to what was then a fringe idea—that AI would advance to a point where it might turn against and delete humanity. …Bostrom’s new book takes a very different tack. Rather than play the doomy hits, Deep Utopia: Life and Meaning in a Solved World, considers a future in which humanity has successfully developed superintelligent machines but averted disaster.”

TECH

AI Start-Ups Face a Rough Financial Reality Check
Cade Metz, Karen Weise, and others | The New York Times
“The AI revolution, it is becoming clear in Silicon Valley, is going to come with a very big price tag. And the tech companies that have bet their futures on it are scrambling to figure out how to close the gap between those expenses and the profits they hope to make somewhere down the line.”

ROBOTICS

Every Tech Company Wants to Be Like Boston Dynamics
Jacob Stern | The Atlantic
“Clips of robots running faster than Usain Bolt and dancing in sync, among many others, have helped [Boston Dynamics] reach true influencer status. Its videos have now been viewed more than 800 million times, far more than those of much bigger tech companies, such as Tesla and OpenAI. The creator of Black Mirror even admitted that an episode in which killer robot dogs chase a band of survivors across an apocalyptic wasteland was directly inspired by Boston Dynamics’ videos.”

ETHICS

ChatGPT Shows Better Moral Judgment Than a College Undergrad
Kyle Orland | Ars Technica
“In ‘Attributions toward artificial agents in a modified Moral Turing Test’…[Georgia State University] researchers found that morality judgments given by ChatGPT4 were ‘perceived as superior in quality to humans’ along a variety of dimensions like virtuosity and intelligence. But before you start to worry that philosophy professors will soon be replaced by hyper-moral AIs, there are some important caveats to consider.”

SPACE

New Space Company Seeks to Solve Orbital Mobility With High Delta-V Spacecraft
Eric Berger | Ars Technica
“[Portal Space Systems founder, Jeff Thornburg] envisions a fleet of refuelable Supernova vehicles at medium-Earth and geostationary orbit capable of swooping down to various orbits and providing services such as propellant delivery, mobility, and observation for commercial and military satellites. His vision is to provide real-time, responsive capability for existing satellites. If one needs to make an emergency maneuver, a Supernova vehicle could be there within a couple of hours. ‘If we’re going to have a true space economy, that means logistics and supply services,’ he said.”

AUTOMATION

Google’s Waymo Is Expanding Its Self-Driving ‘Robotaxi’ Testing
William Gavin | Quartz
“Waymo plans to soon start testing fully autonomous rides across California’s San Francisco Peninsula, despite criticism and concerns from residents and city officials. In the coming weeks, Waymo employees will begin testing rides without a human driver on city streets north of San Mateo, the company said Friday.”

VIRTUAL REALITY

Ukraine Unveils AI-Generated Foreign Ministry Spokesperson
Agence France-Presse | The Guardian
“Dressed in a dark suit, the spokesperson introduced herself as Victoria Shi, a ‘digital person,’ in a presentation posted on social media. The figure gesticulates with her hands and moves her head as she speaks. The foreign ministry’s press service said that the statements given by Shi would not be generated by AI but ‘written and verified by real people.'”

Image Credit: Drew Walker / Unsplash

This Plastic Is Embedded With Bacterial Spores That Break It Down After It’s Thrown Out


Getting microbes to eat plastic is a frequently touted solution to our growing waste problem, but making the approach practical is tricky. A new technique that impregnates plastic with the spores of plastic-eating bacteria could make the idea a reality.

The impact of plastic waste on the environment and our health has gained increasing attention in recent years. The latest round of UN talks aiming for a global treaty to end plastic pollution just concluded in Ottawa, Canada earlier this week, though considerable disagreements remain.

Recycling will inevitably be a crucial ingredient in any plan to deal with the problem. But a 2022 report from the Organization for Economic Cooperation and Development found only 9 percent of plastic waste ever gets recycled. That’s partly due to the fact that existing recycling approaches are energy intensive and time consuming.

This has spurred a search for new approaches, and one of the most promising is the use of bacteria to break down plastics, either by rendering them harmless or using them to produce building blocks that can be repurposed into other valuable materials and chemicals. The main problem with the approach is making sure plastic waste ends up in the same place as these plastic-loving bacteria.

Now, researchers have come up with an ingenious solution: embed microbes in plastic during the manufacturing process. Not only did the approach result in 93 percent of the plastic biodegrading within five months, but it even increased the strength and stretchability of the material.

“What’s remarkable is that our material breaks down even without the presence of additional microbes,” project co-leader Jon Pokorski from the University of California San Diego said in a press release.

“Chances are, most of these plastics will likely not end up in microbially rich composting facilities. So this ability to self-degrade in a microbe-free environment makes our technology more versatile.”

The main challenge when it came to incorporating bacteria into plastics was making sure they survived the high temperatures involved in manufacturing the material. The researchers worked with a soft plastic called thermoplastic polyurethane (TPU), which is used in footwear, cushions, and memory foam. TPU is manufactured by melting pellets of the material at around 275 degrees Fahrenheit (135 degrees Celsius) and then extruding it into the desired shape.

Given the need to survive these high temperatures, the researchers selected a plastic-eating bacteria called Bacillus subtilis, which can form spores allowing it to survive harsh conditions. Even then, they discovered more than 90 percent of the bacteria were killed in under a minute at those temperatures.

So, the team used a technique called adaptive laboratory evolution to create a more heat-tolerant strain of the bacteria. They dunked the spores in boiling water for increasing lengths of time, collecting the survivors, growing the population back up, and then repeating the process. Over time, this selected for mutations that conferred greater heat tolerance, until the researchers were left with a strain that was able to withstand the manufacturing process.
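
The selection loop itself is simple enough to simulate. Below is a toy Python sketch of adaptive laboratory evolution with invented numbers: spores below the exposure threshold die, survivors repopulate with mutation noise, and the exposure lengthens each round.

```python
import random

random.seed(1)
# Each spore's heat tolerance: minutes of boiling it survives (made-up units).
population = [random.gauss(1.0, 0.2) for _ in range(1000)]

exposure = 1.0
for _ in range(10):
    survivors = [t for t in population if t > exposure]
    if not survivors:
        break  # boiled too long; nothing left to regrow
    # Regrow the population from survivors, with small mutation noise.
    population = [random.gauss(random.choice(survivors), 0.1)
                  for _ in range(1000)]
    exposure += 0.1  # boil a little longer each round

print(f"mean tolerance after selection: {sum(population) / len(population):.2f}")
```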

When they incorporated the spores into the plastic, they were surprised to find the bacteria actually improved the mechanical properties of the material. In essence, the spores acted like steel rebar in concrete, making it harder to break and increasing its stretchability.

To test whether the impregnated spores could help the plastic biodegrade, the researchers took small strips of the plastic and put them in sterilized compost. After five months, they found the strips had lost 93 percent of their mass compared to 44 percent for TPU without spores, which suggests the spores were reactivated by nutrients in the compost and helped degrade the plastic substantially faster.

It’s unclear if the approach would work with other plastics, though the researchers say they plan to find out. There is also a danger the spores could reactivate before the plastic is disposed of, which could shorten the life of any products made with it. Perhaps most crucially, plastics researcher Steve Fletcher from the University of Portsmouth in the UK told the BBC that this kind of technology could distract from efforts to limit plastic waste.

“Care must be taken with potential solutions of this sort, which could give the impression that we should worry less about plastic pollution because any plastic leaking into the environment will quickly, and ideally safely, degrade,” he said. “For the vast majority of plastics, this is not the case.”

Given the scale of the plastic pollution problem today though, any attempt to mitigate the harm should be welcomed. While it’s early days, the prospect of making plastic that can biodegrade itself could go a long way towards tackling the problem.

Image Credit: David Baillot/UC San Diego Jacobs School of Engineering

AI Is Gathering a Growing Amount of Training Data Inside Virtual Worlds


To anyone living in a city where autonomous vehicles operate, it would seem they need a lot of practice. Robotaxis travel millions of miles a year on public roads in an effort to gather data from sensors—including cameras, radar, and lidar—to train the neural networks that operate them.

In recent years, due to a striking improvement in the fidelity and realism of computer graphics technology, simulation is increasingly being used to accelerate the development of these algorithms. Waymo, for example, says its autonomous vehicles have already driven some 20 billion miles in simulation. In fact, all kinds of machines, from industrial robots to drones, are gathering a growing amount of their training data and practice hours inside virtual worlds.

According to Gautham Sholingar, a senior manager at Nvidia focused on autonomous vehicle simulation, one key benefit is accounting for obscure scenarios for which it would be nearly impossible to gather training data in the real world.

“Without simulation, there are some scenarios that are just hard to account for. There will always be edge cases which are difficult to collect data for, either because they are dangerous and involve pedestrians or things that are challenging to measure accurately like the velocity of faraway objects. That’s where simulation really shines,” he told me in an interview for Singularity Hub.

While it isn’t ethical to have someone run unexpectedly into a street to train AI to handle such a situation, it’s significantly less problematic for an animated character inside a virtual world.

Industrial use of simulation has been around for decades, something Sholingar pointed out, but a convergence of improvements in computing power, the ability to model complex physics, and the development of the GPUs powering today’s graphics indicates we may be witnessing a turning point in the use of simulated worlds for AI training.

Graphics quality matters because of the way AI “sees” the world.

When a neural network processes image data, it’s converting each pixel’s color into a corresponding number. For black and white images, the number ranges from 0, which indicates a fully black pixel, up to 255, which is fully white, with numbers in between representing some variation of grey. For color images, the widely used RGB (red, green, blue) model can correspond to over 16 million possible colors. So as graphics rendering technology becomes ever more photorealistic, the distinction between pixels captured by real-world cameras and ones rendered in a game engine is falling away.
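
Here is a minimal sketch of that pixel-to-number conversion in Python, using the widely available Pillow and NumPy libraries (the tiny random image is a stand-in for a camera frame):

```python
import numpy as np
from PIL import Image

# A stand-in image (any photo would do): 4x4 random RGB pixels.
rgb_array = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
img = Image.fromarray(rgb_array, mode="RGB")

# Grayscale: one number per pixel, 0 (black) to 255 (white).
gray = np.asarray(img.convert("L"))
print(gray)

# RGB: three numbers per pixel -> 256**3 (~16.7 million) possible colors.
rgb = np.asarray(img.convert("RGB"))
print(rgb.shape)  # (4, 4, 3)

# Networks typically see these values rescaled to floats in [0, 1].
inputs = rgb.astype(np.float32) / 255.0
```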

Simulation is also a powerful tool because it’s increasingly able to generate synthetic data for sensors beyond just cameras. While high-quality graphics are both appealing and familiar to human eyes, which is useful in training camera sensors, rendering engines are also able to generate radar and lidar data as well. Combining these synthetic datasets inside a simulation allows the algorithm to train using all the various types of sensors commonly used by AVs.

Due to its expertise in producing the GPUs needed to generate high-quality graphics, Nvidia has positioned itself as a leader in the space. In 2021, the company launched Omniverse, a simulation platform capable of rendering high-quality synthetic sensor data and modeling real-world physics relevant to a variety of industries. Now, developers are using Omniverse to generate sensor data to train autonomous vehicles and other robotic systems.

In our discussion, Sholingar described some specific ways these types of simulations may be useful in accelerating development. The first involves the fact that with a bit of retraining, perception algorithms developed for one type of vehicle can be re-used for other types as well. However, because the new vehicle has a different sensor configuration, the algorithm will be seeing the world from a new point of view, which can reduce its performance.

“Let’s say you developed your AV on a sedan, and you need to go to an SUV. Well, to train it then someone must change all the sensors and remount them on an SUV. That process takes time, and it can be expensive. Synthetic data can help accelerate that kind of development,” Sholingar said.

Another area involves training algorithms to accurately detect faraway objects, especially in highway scenarios at high speeds. Since objects over 200 meters away often appear as just a few pixels and can be difficult for humans to label, there isn’t typically enough training data for them.

“For the far ranges, where it’s hard to annotate the data accurately, our goal was to augment those parts of the dataset,” Sholingar said. “In our experiment, using our simulation tools, we added more synthetic data and bounding boxes for cars at 300 meters and ran experiments to evaluate whether this improves our algorithm’s performance.”

According to Sholingar, these efforts allowed their algorithm to detect objects more accurately beyond 200 meters, something only made possible by their use of synthetic data.

While many of these developments are due to better visual fidelity and photorealism, Sholingar also stressed this is only one aspect of what makes a simulation a capable stand-in for the real world.

“There is a tendency to get caught up in how beautiful the simulation looks since we see these visuals, and it’s very pleasing. What really matters is how the AI algorithms perceive these pixels. But beyond the appearance, there are at least two other major aspects which are crucial to mimicking reality in a simulation.”

First, engineers need to ensure there is enough representative content in the simulation. This is important because an AI must be able to detect a diversity of objects in the real world, including pedestrians with different colored clothes or cars with unusual shapes, like roof racks with bicycles or surfboards.

Second, simulations have to depict a wide range of pedestrian and vehicle behavior. Machine learning algorithms need to know how to handle scenarios where a pedestrian stops to look at their phone or pauses unexpectedly when crossing a street. Other vehicles can behave in unexpected ways too, like cutting in close or pausing to wave an oncoming vehicle forward.

“When we say realism in the context of simulation, it often ends up being associated only with the visual appearance part of it, but I usually try to look at all three of these aspects. If you can accurately represent the content, behavior, and appearance, then you can start moving in the direction of being realistic,” he said.

It also became clear in our conversation that while simulation will be an increasingly valuable tool for generating synthetic data, it isn’t going to replace real-world data collection and testing.

“We should think of simulation as an accelerator to what we do in the real world. It can save time and money and help us with a diversity of edge-case scenarios, but ultimately it is a tool to augment datasets collected from real-world data collection,” he said.

Beyond Omniverse, the wider industry of helping “things that move” develop autonomy is undergoing a shift toward simulation. Tesla announced it’s using similar technology to develop automation in Unreal Engine, while Canadian startup Waabi is taking a simulation-first approach to training its self-driving software. Microsoft, meanwhile, has experimented with a similar tool to train autonomous drones, although the project was recently discontinued.

While training and testing in the real world will remain a crucial part of developing autonomous systems, the continued improvement of physics and graphics engine technology means that virtual worlds may offer a low-stakes sandbox for machine learning algorithms to mature into functional tools that can power our autonomous future.

Image Credit: Nvidia

Mind-Bending Math Could Stop Quantum Hackers—but Few Understand It


Imagine that the tap of the card that bought you a cup of coffee this morning also let a hacker halfway across the world access your bank account and buy themselves whatever they liked. Now imagine it wasn’t a one-off glitch, but that it happened all the time: Imagine the locks that secure our electronic data suddenly stopped working.

This is not a science fiction scenario. It may well become a reality when sufficiently powerful quantum computers come online. These devices will use the strange properties of the quantum world to untangle secrets that would take ordinary computers more than a lifetime to decipher.

We don’t know when this will happen. However, many people and organizations are already concerned about so-called “harvest now, decrypt later” attacks, in which cybercriminals or other adversaries steal encrypted data now and store it away for the day when they can decrypt it with a quantum computer.

As the advent of quantum computers grows closer, cryptographers are trying to devise new mathematical schemes to secure data against their hypothetical attacks. The mathematics involved is highly complex—but the survival of our digital world may depend on it.

‘Quantum-Proof’ Encryption

The task of cracking much current online security boils down to the mathematical problem of starting with one very large number and finding the two numbers that, when multiplied together, produce it. You can think of those two hidden factors as the key that unlocks the secret information. As the large number gets bigger, the amount of time it takes an ordinary computer to find its factors becomes longer than our lifetimes.
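
A toy version makes the asymmetry concrete: multiplying the two numbers is trivial, but recovering them from their product by brute force scales hopelessly. The Python sketch below is illustrative only; real systems use far more refined math and far bigger numbers.

```python
def factor(n: int) -> tuple[int, int]:
    """Recover two factors of n by brute-force trial division."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f  # the hidden pair of numbers
        f += 1
    raise ValueError("n is prime")

# Easy at this size; at the hundreds-of-digits sizes used for real keys,
# this loop would run longer than a human lifetime.
print(factor(3233))  # (53, 61)
```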

Future quantum computers, however, should be able to crack these codes much more quickly. So the race is on to find new encryption algorithms that can stand up to a quantum attack.

The US National Institute of Standards and Technology has been calling for proposed “quantum-proof” encryption algorithms for years, but so far few have withstood scrutiny. (One proposed algorithm, called Supersingular Isogeny Key Encapsulation, was dramatically broken in 2022 with the aid of Australian mathematical software called Magma, developed at the University of Sydney.)

The race has been heating up this year. In February, Apple updated the security system for the iMessage platform to protect data that may be harvested now and decrypted in a post-quantum future.

Two weeks ago, scientists in China announced they had installed a new “encryption shield” to protect the Origin Wukong quantum computer from quantum attacks.

Around the same time, cryptographer Yilei Chen announced he had found a way quantum computers could attack an important class of algorithms based on the mathematics of lattices, which were considered some of the hardest to break. Lattice-based methods are part of Apple’s new iMessage security, as well as two of the three frontrunners for a standard post-quantum encryption algorithm.

What Is a Lattice-Based Algorithm?

A lattice is an arrangement of points in a repeating structure, like the corners of tiles in a bathroom or the atoms in a diamond crystal. The tiles are two dimensional and the atoms in diamond are three dimensional, but mathematically we can make lattices with many more dimensions.

Most lattice-based cryptography is based on a seemingly simple question: If you hide a secret point in such a lattice, how long will it take someone else to find the secret location starting from some other point? This game of hide and seek can underpin many ways to make data more secure.

A variant of the lattice problem called “learning with errors” is considered to be too hard to break even on a quantum computer. As the size of the lattice grows, the amount of time it takes to solve is believed to increase exponentially, even for a quantum computer.
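
A toy “learning with errors” instance, sketched below in Python with deliberately tiny parameters (real schemes use dimensions in the hundreds), shows the trick: each published equation about the secret is perturbed by a small random error, and that perturbation is what is believed to defeat both ordinary and quantum attacks.

```python
import random

q, n = 97, 4  # toy modulus and lattice dimension
secret = [random.randrange(q) for _ in range(n)]

def lwe_sample():
    """One noisy equation about the secret: (a, <a, secret> + error mod q)."""
    a = [random.randrange(q) for _ in range(n)]
    error = random.choice([-1, 0, 1])  # the small "learning with errors" noise
    b = (sum(ai * si for ai, si in zip(a, secret)) + error) % q
    return a, b

# Each sample is only approximately right. Without the error term, a few
# samples plus linear algebra would reveal the secret; with it, recovering
# the secret is believed to stay hard even for quantum computers.
samples = [lwe_sample() for _ in range(8)]
print(samples[0])
```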

The lattice problem—like the problem of finding the factors of a large number on which so much current encryption depends—is closely related to a deep open problem in mathematics called the “hidden subgroup problem.”

Yilei Chen’s approach suggested quantum computers may be able to solve lattice-based problems more quickly under certain conditions. Experts scrambled to check his results—and rapidly found an error. After the error was discovered, Chen published an updated version of his paper describing the flaw.

Despite this discovery, Chen’s paper has made many cryptographers less confident in the security of lattice-based methods. Some are still assessing whether Chen’s ideas can be extended to new pathways for attacking these methods.

More Mathematics Required

Chen’s paper set off a storm in the small community of cryptographers who are equipped to understand it. However, it received almost no attention in the wider world—perhaps because so few people understand this kind of work or its implications.

Last year, when the Australian government published a national quantum strategy to make the country “a leader of the global quantum industry” where “quantum technologies are integral to a prosperous, fair and inclusive Australia,” there was an important omission: It didn’t mention mathematics at all.

Australia does have many leading experts in quantum computing and quantum information science. However, making the most of quantum computers—and defending against them—will require deep mathematical training to produce new knowledge and research.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: ZENG YILI / Unsplash

Scientists Find a Surprising Way to Transform A and B Blood Types Into Universal Blood


Blood transfusions save lives. In the US alone, people receive around 10 million units each year. But blood banks are perpetually short on supply—especially when it comes to the “universal donor” type O.

Surprisingly, the gut microbiome may hold a solution for boosting universal blood supplies by chemically converting other blood types into the universal O.

Infusing the wrong blood type—say, type A to type B—triggers deadly immune reactions. Type O blood, however, is compatible with nearly everyone. It’s in especially high demand following hurricanes, earthquakes, wildfires, and other crises because doctors have to rapidly treat as many people as possible.

Sometimes, blood banks have an imbalance of different blood types—for example, too much type A, not enough universal O. This week, a team from Denmark and Sweden discovered a cocktail of enzymes that readily converts type A and type B blood into the universal donor. Found in gut bacteria, the enzymes chew up an immune-stimulating sugar molecule dotted on the surfaces of type A and B blood cells, removing their tendency to spark an immune response.

Compared to previous attempts, the blend of enzymes converted A and B blood types to type O blood with “remarkably high efficiencies,” the authors wrote.

Wardrobe Change

Blood types can be characterized in multiple ways, but roughly speaking, the types come in four main forms: A, B, AB, and O.

These types are distinguished by what kinds of sugar molecules—called antigens—cover the surfaces of red blood cells. Antigens can trigger immune rejection if mismatched. Type A blood has A antigens; type B has B antigens; type AB has both. Type O has neither.
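
That antigen logic is simple enough to write down. Here is a simplified model in Python, ignoring the Rh factor and the subtypes discussed later in this article: a donation is safe when the donor’s red cells carry no antigen the recipient’s immune system would treat as foreign.

```python
# Simplified ABO model: which antigens each blood type carries.
ANTIGENS = {"A": {"A"}, "B": {"B"}, "AB": {"A", "B"}, "O": set()}

def abo_compatible(donor: str, recipient: str) -> bool:
    # Safe if every donor antigen is one the recipient already carries.
    return ANTIGENS[donor] <= ANTIGENS[recipient]

print(abo_compatible("O", "A"))  # True: type O carries neither antigen
print(abo_compatible("A", "B"))  # False: the A antigen triggers rejection
```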

This is why type O blood can be used for most people. It doesn’t normally trigger an immune response and is highly coveted during emergencies when it’s difficult to determine a person’s blood type. One obvious way to boost type O stock is to recruit more donors, but that’s not always possible. As a workaround, scientists have tried to artificially produce type O blood using stem cell technology. While successful in the lab, it’s expensive and hard to scale up for real-world demands.

An alternative is removing the A and B antigens from donated blood. First proposed in the 1980s, this approach uses enzymes to break down the immune-stimulating sugar molecules. Like licking an ice cream cone, as the antigens gradually melt away, the blood cells are stripped of their A or B identity, eventually transforming into the universal O blood type.

The technology sounds high-tech, but breaking down sugars is something our bodies naturally do every day, thanks to microbes in the gut that happily digest our food. This got scientists wondering: Can we hunt down enzymes in the digestive tract to convert blood types?

Over half a decade ago, a team from the University of British Columbia made headlines by using bacterial enzymes found in the gut microbiome to transform type A blood to type O. Some gut bugs eat away at mucus—a slimy substance made of sugary molecules covering the gut. These mucus linings are molecularly similar to the antigens on red blood cells.

So, digestive enzymes from gut microbes could potentially chomp away A and B antigens.

In one test, the team took samples of human poop (yup), which carry enzymes from the gut microbiome, and looked for DNA encoding enzymes that could break down red blood cell sugar chains.

They eventually discovered two enzymes from a single bacterial strain. Tested in human blood, the duo readily stripped away type A antigens, converting it into universal type O.

The study was a proof of concept for transforming one blood type into another, with potentially real-world implications. Type A blood—common in Europe and the US—makes up roughly one-third of the supply of donations. A technology that converts it to universal O could boost blood transfusion resources in this part of the world.

“This is a first, and if these data can be replicated, it is certainly a major advance,” Dr. Harvey Klein at the National Institutes of Health’s Clinical Center, who was not involved in the work, told Science at the time.

There’s one problem though. Converted blood doesn’t always work.

Let’s Talk ABO+

When tested in clinical trials, converted blood has raised safety concerns. Even when the A or B antigens were completely removed from donated blood, earlier studies found hints of an immune mismatch between the transformed donor blood and the recipient. In other words, the engineered O blood sometimes still triggered an immune response.

Why?

There’s more to blood types than classic ABO. Type A is composed of two different subtypes—one with higher A antigen levels than the other. Type B, common in people of Asian and African descent, also comes in “extended” forms. These recently discovered sugar chains are longer and harder to break down than in the classic versions. Called “extended antigens,” they could be why some converted blood still stimulates the immune system after transfusion.

The new study tackled these extended forms by again peeking into gut bacteria DNA. One bacterial strain, A. muciniphila, stood out. These bugs contain enzymes that work like a previously discovered version that chops up type A and B antigens, but surprisingly, they also strip away extended versions of both antigens.

These enzymes were previously unknown to science, sharing just 30 percent similarity with a benchmark enzyme that cuts up B and extended B antigens.

Using cells from different donors, the scientists engineered an enzyme soup that rapidly wiped out blood antigens. The strategy is “unprecedented,” wrote the team.

Although the screen found multiple enzymes capable of blood type conversion, each individually had limited effects. But when mixed and matched, the recipe transformed donated type B cells into type O, with limited immune responses when mixed with other blood types.

A similar strategy yielded three different enzymes that cut out the problematic A antigen and, in turn, transform the blood to type O. Some people secrete the antigen into other bodily fluids—for example, saliva, sweat, or tears. Others, dubbed non-secretors, have fewer of these antigens floating around their bodies. Using blood donated from both secretors and non-secretors, the team treated red blood cells to remove the A antigen and its extended versions.

When mixed with other blood types, the enzyme cocktail lowered their immune response, although with lower efficacy than cells transformed from type B to O.

By mapping the structures of these enzymes, the team found certain regions increased their ability to chop up sugar chains. Focusing on these hot spots, scientists are set to hunt down other naturally derived enzymes—or use AI to engineer versions with better efficacy and precision.

The system still needs to be tested in humans. And the team didn’t address other blood antigens, such as the Rh system, which is what makes blood types positive or negative. Still, bacterial enzymes appear to be an unexpected but promising way to engineer universal blood.

Image Credit: Zeiss Microscopy / Flickr

This Week’s Awesome Tech Stories From Around the Web (Through April 27)

ARTIFICIAL INTELLIGENCE

Meta’s Open Source Llama 3 Is Already Nipping at OpenAI’s Heels
Will Knight | Wired
“OpenAI changed the world with ChatGPT, setting off a wave of AI investment and drawing more than 2 million developers to its cloud APIs. But if open source models prove competitive, developers and entrepreneurs may decide to stop paying to access the latest model from OpenAI or Google and use Llama 3 or one of the other increasingly powerful open source models that are popping up.”

BIOTECH

‘Real Hope’ for Cancer Cure as Personal mRNA Vaccine for Melanoma Trialed
Andrew Gregory | The Guardian
“Experts are testing new jabs that are custom-built for each patient and tell their body to hunt down cancer cells to prevent the disease ever coming back. A phase 2 trial found the vaccines dramatically reduced the risk of the cancer returning in melanoma patients. Now a final, phase 3, trial has been launched and is being led by University College London Hospitals NHS Foundation Trust (UCLH). Dr Heather Shaw, the national coordinating investigator for the trial, said the jabs had the potential to cure people with melanoma and are being tested in other cancers, including lung, bladder and kidney.”

DIGITAL MEDIA

An AI Startup Made a Hyperrealistic Deepfake of Me That’s So Good It’s Scary
Melissa Heikkilä | MIT Technology Review
“Until now, all AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them pretty easy to differentiate from reality. Because they’re so close to the real thing but not quite it, these videos can make people feel annoyed or uneasy or icky—a phenomenon commonly known as the uncanny valley. Synthesia claims its new technology will finally lead us out of the valley.”

ENERGY

Nuclear Fusion Experiment Overcomes Two Key Operating Hurdles
Matthew Sparkes | New Scientist
“A nuclear fusion reaction has overcome two key barriers to operating in a ‘sweet spot’ needed for optimal power production: boosting the plasma density and keeping that denser plasma contained. The milestone is yet another stepping stone towards fusion power, although a commercial reactor is still probably years away.”

FUTURE

Daniel Dennett: ‘Why Civilization Is More Fragile Than We Realized’
Tom Chatfield | BBC
“[Dennett’s] warning was not of a takeover by some superintelligence, but of a threat he believed that nonetheless could be existential for civilization, rooted in the vulnerabilities of human nature. ‘If we turn this wonderful technology we have for knowledge into a weapon for disinformation,’ he told me, ‘we are in deep trouble.’ Why? ‘Because we won’t know what we know, and we won’t know who to trust, and we won’t know whether we’re informed or misinformed. We may become either paranoid and hyper-skeptical, or just apathetic and unmoved. Both of those are very dangerous avenues. And they’re upon us.'”

ENVIRONMENT

California Just Went 9.25 Hours Using Only Renewable Energy
Adele Peters | Fast Company
“Last Saturday, as 39 million Californians went about their daily lives—taking showers, doing laundry, or charging their electric cars—the whole state ran on 100% clean electricity for more than nine hours. The same thing happened on Sunday, as the state was powered without fossil fuels for more than eight hours. It was the ninth straight day that solar, wind, hydropower, geothermal, and battery storage fully powered the electric grid for at least some portion of the time. Over the last six and a half weeks, that’s happened nearly every day. In some cases, it’s just for 15 minutes. But often it’s for hours at a time.”


TECH

AI Hype Is Deflating. Can AI Companies Find a Way to Turn a Profit?
Gerrit De Vynck | The Washington Post
“Some once-promising start-ups have cratered, and the suite of flashy products launched by the biggest players in the AI race—OpenAI, Microsoft, Google and Meta—have yet to upend the way people work and communicate with one another. While money keeps pouring into AI, very few companies are turning a profit on the tech, which remains hugely expensive to build and run. The road to widespread adoption and business success is still looking long, twisty and full of roadblocks, say tech executives, technologists and financial analysts.”

ARTIFICIAL INTELLIGENCE

Apple Releases Eight Small AI Language Models Aimed at On-Device Use
Benj Edwards | Ars Technica
“In the world of AI, what might be called ‘small language models’ have been growing in popularity recently because they can be run on a local device instead of requiring data center-grade computers in the cloud. On Wednesday, Apple introduced a set of tiny source-available AI language models called OpenELM that are small enough to run directly on a smartphone. They’re mostly proof-of-concept research models for now, but they could form the basis of future on-device AI offerings from Apple.”

SPACE

If Starship Is Real, We’re Going to Need Big Cargo Movers on the Moon and Mars
Eric Berger | Ars Technica
“Unloading tons of cargo on the Moon may seem like a preposterous notion. During Apollo, mass restrictions were so draconian that the Lunar Module could carry two astronauts, their spacesuits, some food, and just 300 pounds (136 kg) of scientific payload down to the lunar surface. By contrast, Starship is designed to carry 100 tons, or more, to the lunar surface in a single mission. This is an insane amount of cargo relative to anything in spaceflight history, but that’s the future that [Jaret] Matthews is aiming toward.”

Image Credit: CARTIST / Unsplash

How Quantum Computers Could Illuminate the Full Range of Human Genetic Diversity


Genomics is revolutionizing medicine and science, but current approaches still struggle to capture the breadth of human genetic diversity. Pangenomes that incorporate many people’s DNA could be the answer, and a new project thinks quantum computers will be a key enabler.

When the Human Genome Project published its first reference genome in 2001, it was based on DNA from just a handful of humans. While less than one percent of our DNA varies from person to person, this can still leave important gaps and limit what we can learn from genomic analyses.

That’s why the concept of a pangenome has become increasingly popular. This refers to a collection of genomic sequences from many different people that have been merged to cover a much greater range of human genetic possibilities.

Assembling these pangenomes is tricky though, and their size and complexity make carrying out computational analyses on them daunting. That’s why the University of Cambridge, the Wellcome Sanger Institute, and the European Molecular Biology Laboratory’s European Bioinformatics Institute have teamed up to see if quantum computers can help.

“We’ve only just scratched the surface of both quantum computing and pangenomics,” David Holland of the Wellcome Sanger Institute said in a press release. “So to bring these two worlds together is incredibly exciting. We don’t know exactly what’s coming, but we see great opportunities for major new advances.”

Pangenomes could be crucial for discovering how different genetic variants impact human biology, or that of other species. The current reference genome is used as a guide to assemble genetic sequences, but due to the variability of human genomes there are often significant chunks of DNA that don’t match up. A pangenome would capture a lot more of that diversity, making it easier to connect the dots and giving us a more complete view of possible human genomes.

Despite their power, pangenomes are difficult to work with. While the genome of a single person is just a linear sequence of genetic data, a pangenome is a complex network that tries to capture all the ways in which its constituent genomes do and don’t overlap.

These so-called “sequence graphs” are challenging to construct and even more challenging to analyze. And it will require high levels of computational power and novel techniques to make use of the rich representation of human diversity contained within.
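To make the idea concrete, here is a toy sketch (ours, not the project’s) of a sequence graph in Python: nodes hold DNA segments, edges record which segments are adjacent in at least one genome, and each individual genome is a path through the graph. Real pangenome graphs hold millions of nodes and far richer annotation.

```python
# Toy sequence graph for two genomes that differ at a single position.
nodes = {
    1: "ACGT",  # shared prefix
    2: "A",     # one person's allele
    3: "G",     # another person's allele
    4: "TTCA",  # shared suffix
}
edges = {1: [2, 3], 2: [4], 3: [4]}  # both alleles rejoin the shared suffix

def spell(path):
    """Reconstruct one genome's linear sequence from a path of node ids."""
    assert all(b in edges[a] for a, b in zip(path, path[1:])), "invalid path"
    return "".join(nodes[n] for n in path)

print(spell([1, 2, 4]))  # genome 1: ACGTATTCA
print(spell([1, 3, 4]))  # genome 2: ACGTGTTCA
```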

That’s where this new project sees quantum computers lending a hand. Relying on the quirks of quantum mechanics, they can tackle certain computational problems that are near impossible for classical computers.

While there’s still considerable uncertainty about what kinds of calculations quantum computers will actually be able to run, many hope they will dramatically improve our ability to solve problems relating to complex systems with large numbers of variables. This new project is aimed at developing quantum algorithms that speed up both the production and analysis of pangenomes, though the researchers admit it’s early days.

“We’re starting from scratch because we don’t even know yet how to represent a pangenome in a quantum computing environment,” David Yuan from the European Bioinformatics Institute said in the press release. “If you compare it to the first moon landings, this project is the equivalent of designing a rocket and training the astronauts.”

The project has been awarded $3.5 million, which will be used to develop new algorithms and then test them on simulated quantum hardware using supercomputers. The researchers think the tools they develop could lead to significant breakthroughs in personalized medicine. They could also be applied to pangenomes of viruses and bacteria, improving our ability to track and manage disease outbreaks.

Given its exploratory nature and the difficulty of getting quantum computers to do anything practical, it could be some time before the project bears fruit. But if they succeed, the researchers could significantly expand our ability to make sense of the genes that shape our lives.

Image Credit: Gerd Altmann / Pixabay

This AI Just Designed a More Precise CRISPR Gene Editor for Human Cells From Scratch


CRISPR has revolutionized science. AI is now taking the gene editor to the next level.

Thanks to its ability to accurately edit the genome, CRISPR tools are now widely used in biotechnology and across medicine to tackle inherited diseases. In late 2023, a therapy using the Nobel Prize-winning tool gained approval from the FDA to treat sickle cell disease. CRISPR has also enabled CAR T cell therapy to battle cancers and been used to lower dangerously high cholesterol levels in clinical trials.

Outside medicine, CRISPR tools are changing the agricultural landscape, with projects ongoing to engineer hornless bulls, nutrient-rich tomatoes, and livestock and fish with more muscle mass.

Despite its real-world impact, CRISPR isn’t perfect. The tool snips both strands of DNA, which can cause dangerous mutations. It also can inadvertently nip unintended areas of the genome and trigger unpredictable side effects.

CRISPR was first discovered in bacteria as a defense mechanism, suggesting that nature hides a bounty of CRISPR components. For the past decade, scientists have screened different natural environments—for example, pond scum—to find other versions of the tool that could potentially increase its efficacy and precision. While successful, this strategy depends on what nature has to offer. Some benefits, such as a smaller size or greater longevity in the body, often come with trade-offs like lower activity or precision.

Rather than relying on evolution, can we fast-track better CRISPR tools with AI?

This week, Profluent, a startup based in California, outlined a strategy that uses AI to dream up a new universe of CRISPR gene editors. Based on large language models—the technology behind the popular ChatGPT—the AI designed several new gene-editing components.

In human cells, the components meshed to reliably edit targeted genes. The efficiency matched classic CRISPR, but with far more precision. The most promising editor, dubbed OpenCRISPR-1, could also precisely swap out single DNA letters—a technology called base editing—with an accuracy that rivals current tools.

“We demonstrate the world’s first successful editing of the human genome using a gene editing system where every component is fully designed by AI,” wrote the authors in a blog post.

Match Made in Heaven

CRISPR and AI have had a long romance.

The CRISPR recipe has two main parts: A “scissor” Cas protein that cuts or nicks the genome and a “bloodhound” RNA guide that tethers the scissor protein to the target gene.

By varying these components, the system becomes a toolbox, with each setup tailored to perform a specific type of gene editing. Some Cas proteins cut both strands of DNA; others give just one strand a quick snip. Alternative versions can also cut RNA, a type of genetic material found in viruses, and can be used as diagnostic tools or antiviral treatments.
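As a rough sketch of that toolbox idea, the components can be modeled as a small data structure. Everything below is illustrative: the guide sequences are invented placeholders, and the labels summarize the behaviors described above rather than any real design.

```python
from dataclasses import dataclass

@dataclass
class CrisprSystem:
    cas_protein: str  # the "scissors" that cut or nick genetic material
    guide_rna: str    # the "bloodhound" that tethers the scissors to a target
    target: str       # what the scissors act on
    action: str       # how they act

toolbox = [
    CrisprSystem("Cas9",   "GUUCAGAGCU...", "DNA", "cuts both strands"),
    CrisprSystem("Cas12a", "UAAUUUCUAC...", "DNA", "makes a staggered cut"),
    CrisprSystem("Cas13",  "GAUUUAGACU...", "RNA", "cleaves RNA"),
]

for system in toolbox:
    print(f"{system.cas_protein}: {system.action} ({system.target})")
```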

Different versions of Cas proteins are often found by searching natural environments or through a process called directed evolution. Here, scientists rationally swap out some parts of the Cas protein to potentially boost efficacy.

It’s a highly time-consuming process. Which is where AI comes in.

Machine learning has already helped predict off-target effects in CRISPR tools. It’s also homed in on smaller Cas proteins to make downsized editors easier to deliver into cells.

Profluent used AI in a novel way: Rather than boosting current systems, they designed CRISPR components from scratch using large language models.

The basis of ChatGPT and DALL-E, these models launched AI into the mainstream. They learn from massive amounts of text, images, music, and other data to distill patterns and concepts. It’s how the algorithms generate images from a single text prompt—say, “unicorn with sunglasses dancing over a rainbow”—or mimic the music style of a given artist.

The same technology has also transformed the protein design world. Like words in a book, proteins are strung from individual molecular “letters” into chains, which then fold in specific ways to make the proteins work. By feeding protein sequences into AI, scientists have already fashioned antibodies and other functional proteins unknown to nature.

“Large generative protein language models capture the underlying blueprint of what makes a natural protein functional,” wrote the team in the blog post. “They promise a shortcut to bypass the random process of evolution and move us towards intentionally designing proteins for a specific purpose.”

Do AIs Dream of CRISPR Sheep?

All large language models need training data. The same is true for an algorithm that generates gene editors. Unlike text, images, or videos that can be easily scraped online, a CRISPR database is harder to find.

The team first screened over 26 terabytes of data about current CRISPR systems and built a CRISPR-Cas atlas—the most extensive to date, according to the researchers.

The search revealed millions of CRISPR-Cas components. The team then trained their ProGen2 language model—which was fine-tuned for protein discovery—using the CRISPR atlas.

The AI eventually generated four million protein sequences with potential Cas activity. After filtering out obvious deadbeats with another computer program, the team zeroed in on a new universe of Cas “protein scissors.”
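The pipeline follows a generate-then-filter pattern. Here is a heavily simplified sketch of that pattern in Python; `sample_protein` and `plausible_cas9` are stand-ins we invented, not Profluent’s model or filtering code.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard protein "letters"

def sample_protein(min_len=900, max_len=1500):
    """Placeholder generator: a real protein language model samples residue by residue."""
    length = random.randint(min_len, max_len)
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def plausible_cas9(seq):
    # Stand-in filter: real pipelines check length plus conserved nuclease
    # domains and alignment to known Cas9 families.
    return 1000 <= len(seq) <= 1400

candidates = (sample_protein() for _ in range(10_000))
hits = [seq for seq in candidates if plausible_cas9(seq)]
print(f"kept {len(hits)} of 10,000 candidate sequences")
```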

The algorithm didn’t just dream up proteins like Cas9. Cas proteins come in families, each with its own quirks in gene-editing ability. The AI also designed proteins resembling Cas13, which targets RNA, and Cas12a, which is more compact than Cas9.

Overall, the results expanded the universe of potential Cas proteins nearly five-fold. But do any of them work?

Hello, CRISPR World

For the next test, the team focused on Cas9, because it’s already widely used in biomedical and other fields. They trained the AI on roughly 240,000 different Cas9 protein sequences, with the goal of generating similar proteins to replace natural ones—but with higher efficacy or precision.

The initial results were surprising: The generated sequences, roughly a million of them, were totally different from natural Cas9 proteins. But using DeepMind’s AlphaFold2, a protein structure prediction AI, the team found the generated protein sequences could adopt similar shapes.

Cas proteins can’t function without a bloodhound RNA guide. With the CRISPR-Cas atlas, the team also trained AI to generate an RNA guide when given a protein sequence.

The result is a CRISPR gene editor with both components—Cas protein and RNA guide—designed by AI. Dubbed OpenCRISPR-1, its gene editing activity was similar to classic CRISPR-Cas9 systems when tested in cultured human kidney cells. Surprisingly, the AI-generated version slashed off-target editing by roughly 95 percent.

With a few tweaks, OpenCRISPR-1 could also perform base editing, which can change single DNA letters. Compared to classic CRISPR, base editing is likely more precise as it limits damage to the genome. In human kidney cells, OpenCRISPR-1 reliably converted one DNA letter to another in three sites across the genome, with an editing rate similar to current base editors.

To be clear, the AI-generated CRISPR tools have only been tested in cells in a dish. For treatments to reach the clinic, they’d need to undergo careful testing for safety and efficacy in living creatures, which can take a long time.

Profluent is openly sharing OpenCRISPR-1 with researchers and commercial groups but keeping the AI that created the tool in-house. “We release OpenCRISPR-1 publicly to facilitate broad, ethical usage across research and commercial applications,” they wrote.

As a preprint, the paper describing their work has yet to be analyzed by expert peer reviewers. Scientists will also have to show OpenCRISPR-1 or variants work in multiple organisms, including plants, mice, and humans. But tantalizingly, the results open a new avenue for generative AI—one that could fundamentally change our genetic blueprint.

Image Credit: Profluent

The Crucial Building Blocks of Life on Earth Form More Easily in Outer Space

The origin of life on Earth is still enigmatic, but we are slowly unraveling the steps involved and the necessary ingredients. Scientists believe life arose in a primordial soup of organic chemicals and biomolecules on the early Earth, eventually leading to actual organisms.

It’s long been suspected that some of these ingredients may have been delivered from space. Now a new study, published in Science Advances, shows that a special group of molecules, known as peptides, can form more easily under the conditions of space than those found on Earth. That means they could have been delivered to the early Earth by meteorites or comets—and that life may be able to form elsewhere, too.

The functions of life are upheld in our cells (and those of all living beings) by large, complex carbon-based (organic) molecules called proteins. How to make the large variety of proteins we need to stay alive is encoded in our DNA, which is itself a large and complex organic molecule.

However, these complex molecules are assembled from a variety of small and simple molecules such as amino acids—the so-called building blocks of life.

To explain the origin of life, we need to understand how and where these building blocks form and under what conditions they spontaneously assemble themselves into more complex structures. Finally, we need to understand the step that enables them to become a confined, self-replicating system—a living organism.

This latest study sheds light on how some of these building blocks might have formed and assembled and how they ended up on Earth.

Steps to Life

Proteins are made up of about 20 different amino acids. Like letters of the alphabet, these are arranged in different combinations, and the instructions for those combinations are encrypted in the genetic code carried by DNA’s double helix.

Peptides are also assemblages of amino acids in a chain-like structure. Peptides can be made up of as few as two amino acids but can also range to hundreds.

The assemblage of amino acids into peptides is an important step because peptides provide functions such as catalyzing, or enhancing, reactions that are important to maintaining life. They are also candidate molecules that could have been further assembled into early versions of membranes, confining functional molecules in cell-like structures.

However, despite their potentially important role in the origin of life, it was not so straightforward for peptides to form spontaneously under the environmental conditions on the early Earth. In fact, the scientists behind the current study had previously shown that the cold conditions of space are actually more favorable to the formation of peptides.

The interstellar medium. Image Credit: Charles Carter/Keck Institute for Space Studies

In the very low density clouds of molecules and dust particles in a part of space called the interstellar medium (see above), single atoms of carbon can stick to the surfaces of dust grains together with carbon monoxide and ammonia molecules. They then react to form amino acid-like molecules. When such a cloud becomes denser and dust particles also start to stick together, these molecules can assemble into peptides.

In their new study, the scientists look at the dense environment of dusty disks, from which a new solar system with a star and planets eventually emerges. Such disks form when clouds suddenly collapse under the force of gravity. In this environment, water molecules are much more prevalent, forming ice on the surfaces of any growing agglomerates of particles, and this ice could inhibit the reactions that form peptides.

By emulating the reactions likely to occur in the interstellar medium in the laboratory, the study shows that, although the formation of peptides is slightly diminished, it is not prevented. Instead, as rocks and dust combine to form larger bodies such as asteroids and comets, these bodies heat up and allow for liquids to form. This boosts peptide formation in these liquids, and there’s a natural selection of further reactions resulting in even more complex organic molecules. These processes would have occurred during the formation of our own solar system.

Many of the building blocks of life such as amino acids, lipids, and sugars can form in the space environment. Many have been detected in meteorites.

Because peptide formation is more efficient in space than on Earth, and because they can accumulate in comets, their impacts on the early Earth might have delivered loads that boosted the steps towards the origin of life on Earth.

So, what does all this mean for our chances of finding alien life? Well, the building blocks for life are available throughout the universe. How specific the conditions need to be to enable them to self-assemble into living organisms is still an open question. Once we know that, we’ll have a good idea of how widespread, or not, life might be.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Aldebaran S / Unsplash

A Universal Vaccine Against Any Viral Variant? A New Study Suggests It’s Possible


From Covid boosters to annual flu shots, most of us are left wondering: Why so many, so often?

There’s a reason to update vaccines. Viruses rapidly mutate, which can help them escape the body’s immune system, putting previously vaccinated people at risk of infection. Using AI modeling, scientists have increasingly been able to predict how viruses will evolve. But they mutate fast, and we’re still playing catch up.

An alternative strategy is to break the cycle with a universal vaccine that can train the body to recognize a virus despite mutation. Such a vaccine could eradicate new flu strains, even if the virus has transformed into nearly unrecognizable forms. The strategy could also finally bring a vaccine for the likes of HIV, which has so far notoriously evaded decades of efforts.

This month, a team from the University of California, Riverside, led by Dr. Shou-Wei Ding, designed a vaccine that unleashed a surprising component of the body’s immune system against invading viruses.

In baby mice without functional immune cells to ward off infections, the vaccine defended against lethal doses of a deadly virus. The protection lasted at least 90 days after the initial shot.

The strategy relies on a controversial theory. Most plants and fungi have an innate antiviral defense, called RNA interference (RNAi), that chops up viral genetic material. Scientists have long debated whether the same mechanism exists in mammals—including humans.

“It’s an incredible system because it can be adapted to any virus,” Dr. Olivier Voinnet at the Swiss Federal Institute of Technology, who championed the theory with Ding, told Nature in late 2013.

A Hidden RNA Universe

RNA molecules are usually associated with the translation of genes into proteins.

But they’re not just biological messengers. A wide array of small RNA molecules roam our cells. Some shuttle protein building blocks through the cell as genetic instructions are translated into proteins. Others change how DNA is expressed and may even act as a method of inheritance.

But fundamental to immunity are small interfering RNA molecules, or siRNAs. In plants and invertebrates, these molecules are vicious defenders against viral attacks. To replicate, viruses need to hijack the host cell’s machinery to copy their genetic material—often, it’s RNA. The invaded cells recognize the foreign genetic material and automatically launch an attack.

During this attack, called RNA interference, the cell chops the invading virus’s RNA genome into tiny chunks: siRNA. The cell then spews these viral siRNA molecules into the body to alert the immune system. The molecules also directly grab onto the invading virus’s genome, blocking it from replicating.

Here’s the kicker: Vaccines based on antibodies usually target one or two locations on a virus, making them vulnerable to mutation should those locations change their makeup. RNA interference generates thousands of siRNA molecules that cover the entire genome—even if one part of a virus mutates, the rest is still vulnerable to the attack.
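A toy calculation makes the difference vivid. The numbers below are illustrative assumptions, not figures from the study: they simply contrast a defense pinned to one or two sites with one spread across an entire genome.

```python
# Fraction of each defense knocked out when a single targeted site mutates.
GENOME_LEN = 10_000               # nucleotides in a hypothetical viral genome
antibody_targets = 2              # antibodies typically hit one or two epitopes
sirna_targets = GENOME_LEN // 21  # ~21-nucleotide siRNAs tiling the genome

print(1 / antibody_targets)  # 0.5 -- half the antibody defense gone
print(1 / sirna_targets)     # ~0.002 -- a fifth of a percent of the siRNA defense
```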

This powerful defense system could launch a new generation of vaccines. There’s just one problem. While it’s been observed in plants and flies, whether it exists in mammals has been highly controversial.

“We believe that RNAi has been antiviral for hundreds of millions of years,” Ding told Nature in 2013. “Why would we mammals dump such an effective defense?”

Natural Born Viral Killers

In the 2013 study in Science, Ding and colleagues suggested mammals also have an antiviral siRNA mechanism—it’s just being repressed by a gene carried by most viruses. Dubbed B2, the gene acts like a “brake,” smothering any RNA interference response from host cells by destroying their ability to make siRNA snippets.

Getting rid of B2 should kick RNA interference back into gear. To prove the theory, the team genetically engineered a virus without a functioning B2 gene and tried to infect hamster cells and immunocompromised baby mice. Called Nodamura virus, it’s transmitted by mosquitoes in the wild and is often deadly.

But without B2, even a lethal dose of the virus lost its infectious power. The baby mice rapidly generated a hefty dose of siRNA molecules to clear out the invaders. As a result, the infection never took hold, and the critters—even when already immunocompromised—survived.

“I truly believe that the RNAi response is relevant to at least some viruses that infect mammals,” said Ding at the time.

New-Age Vaccines

Many vaccines contain either a dead or a living but modified version of a virus to train the immune system. When faced with the virus again, the body produces T cells to kill off the target, B cells that pump out antibodies, and other immune “memory” cells to alert against future attacks. But their effects don’t always last, especially if a virus mutates.

Rather than rallying T and B cells, triggering the body’s siRNA response offers another type of immune defense. This can be done by deleting the B2 gene in live viruses. These viruses can be formulated into a new type of vaccine, which the team has been working to develop, relying on RNA interference to ward off invaders. The resulting flood of siRNA molecules triggered by the vaccine would, in theory, also provide some protection against future infection.

“If we make a mutant virus that cannot produce the protein to suppress our RNAi [RNA interference], we can weaken the virus. It can replicate to some level, but then loses the battle to the host RNAi response,” Ding said in a press release about the most recent study. “A virus weakened in this way can be used as a vaccine for boosting our RNAi immune system.”

In the study, his team tried the strategy against Nodamura virus by removing its B2 gene.

The team vaccinated baby and adult mice, both of which were genetically immunocompromised in that they couldn’t mount T cell or B cell defenses. In just two days, the single shot fully protected the mice against a deadly dose of virus, and the effect lasted over three months.

Viruses are most harmful to vulnerable populations—infants, the elderly, and immunocompromised individuals. Because of their weakened immune systems, current vaccines aren’t always as effective. Triggering siRNA could be a life-saving alternative strategy.

Although it works in mice, whether humans respond similarly remains to be seen. But there’s much to look forward to. The B2 “brake” protein has also been found in lots of other common viruses, including dengue, flu, and a family of viruses that causes fever, rash, and blisters.

The team is already working on a new flu vaccine, using live viruses without the B2 protein. If successful, the vaccine could potentially be made as a nasal spray—forget the needle jab. And if their siRNA theory holds up, such a vaccine might fend off the virus even as it mutates into new strains. The playbook could also be adapted to tackle new Covid variants, RSV, or whatever nature next throws at us.

This vaccine strategy is “broadly applicable to any number of viruses, broadly effective against any variant of a virus, and safe for a broad spectrum of people,” study author Dr. Rong Hai said in the press release. “This could be the universal vaccine that we have been looking for.”

Image Credit: Diana Polekhina / Unsplash

This Week’s Awesome Tech Stories From Around the Web (Through April 20)

ARTIFICIAL INTELLIGENCE

15 Graphs That Explain the State of AI in 2024
Eliza Strickland | IEEE Spectrum
“Each year, the AI Index lands on virtual desks with a louder virtual thud—this year, its 393 pages are a testament to the fact that AI is coming off a really big year in 2023. For the past three years, IEEE Spectrum has read the whole damn thing and pulled out a selection of charts that sum up the current state of AI.”

NEUROSCIENCE

The Next Frontier for Brain Implants Is Artificial Vision
Emily Mullin | Wired
“Elon Musk’s Neuralink and others are developing devices that could provide blind people with a crude sense of sight. …’This is not about getting biological vision back,’ says Philip Troyk, a professor of biomedical engineering at Illinois Tech, who’s leading the study Bussard is in. ‘This is about exploring what artificial vision could be.'”

DIGITAL MEDIA

Microsoft’s VASA-1 Can Deepfake a Person With One Photo and One Audio Track
Benj Edwards | Ars Technica
“On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. In the future, it could power virtual avatars that render locally and don’t require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.”

TECH

Meta Is Already Training a More Powerful Successor to Llama 3
Will Knight | Wired
“On Thursday morning, Meta released its latest artificial intelligence model, Llama 3, touting it as the most powerful to be made open source so that anyone can use it. The same afternoon, Yann LeCun, Meta’s chief AI scientist, said an even more powerful successor to Llama is in the works. He suggested it could potentially outshine the world’s best closed AI models, including OpenAI’s GPT-4 and Google’s Gemini.”

COMPUTING

Intel Reveals World’s Biggest ‘Brain-Inspired’ Neuromorphic Computer
Matthew Sparkes | New Scientist
“Hala Point contains 1.15 billion artificial neurons across 1152 Loihi 2 chips, and is capable of 380 trillion synaptic operations per second. Mike Davies at Intel says that despite this power it occupies just six racks in a standard server case—a space similar to that of a microwave oven. Larger machines will be possible, says Davies. ‘We built this scale of system because, honestly, a billion neurons was a nice round number,’ he says. ‘I mean, there wasn’t any particular technical engineering challenge that made us stop at this level.'”

AUTOMATION

US Air Force Confirms First Successful AI Dogfight
Emma Roth | The Verge
“Human pilots were on board the X-62A with controls to disable the AI system, but DARPA says the pilots didn’t need to use the safety switch ‘at any point.’ The X-62A went against an F-16 controlled solely by a human pilot, where both aircraft demonstrated ‘high-aspect nose-to-nose engagements’ and got as close as 2,000 feet at 1,200 miles per hour. DARPA doesn’t say which aircraft won the dogfight, however.”

CULTURE

What If Your AI Girlfriend Hated You?
Kate Knibbs | Wired
“It seems as though we’ve arrived at the moment in the AI hype cycle where no idea is too bonkers to launch. This week’s eyebrow-raising AI project is a new twist on the romantic chatbot—a mobile app called AngryGF, which offers its users the uniquely unpleasant experience of getting yelled at via messages from a fake person.”

NEUROSCIENCE

Insects and Other Animals Have Consciousness, Experts Declare
Dan Falk | Quanta
“For decades, there’s been a broad agreement among scientists that animals similar to us—the great apes, for example—have conscious experience, even if their consciousness differs from our own. In recent years, however, researchers have begun to acknowledge that consciousness may also be widespread among animals that are very different from us, including invertebrates with completely different and far simpler nervous systems.”

SCIENCE

Two Lifeforms Merge in Once-in-a-Billion-Years Evolutionary Event
Michael Irving | New Atlas
“Scientists have caught a once-in-a-billion-years evolutionary event in progress, as two lifeforms have merged into one organism that boasts abilities its peers would envy. Last time this happened, Earth got plants. …A species of algae called Braarudosphaera bigelowii was found to have engulfed a cyanobacterium that lets them do something that algae, and plants in general, can’t normally do—’fixing’ nitrogen straight from the air, and combining it with other elements to create more useful compounds.”

Image Credit: Shubham Dhage / Unsplash

Cell Therapies Now Beat Back Once Untreatable Blood Cancers. Scientists Are Making Them Even Deadlier.


Dubbed “living drugs,” CAR T cells are bioengineered from a patient’s own immune cells to make them better able to hunt and destroy cancer.

The treatment is successfully tackling previously untreatable blood cancers. Six therapies are already approved by the FDA. Over a thousand clinical trials are underway. These aren’t limited to cancer—they cover a range of difficult medical problems such as autoimmune diseases, heart conditions, and viral infections including HIV. They may even slow down the biological processes that contribute to aging.

But CAR T has an Achilles heel.

Once injected into the body, the cells often slowly dwindle. Called “exhaustion,” this process erodes therapeutic effect over time and has dire medical consequences. According to Dr. Evan Weber at the University of Pennsylvania, more than 50 percent of people who respond to CAR T therapies eventually relapse. This may also be why CAR T cells have struggled to fight off solid tumors in breast, pancreatic, or deadly brain cancers.

This month, two teams found a potential solution—make CAR T cells more like stem cells. Known for their regenerative abilities, stem cells easily repopulate the body. Both teams identified the same protein “master switch” to make engineered cells resemble stem cells.

One study, led by Weber, found that adding the protein, called FOXO1, revved up metabolism and health in CAR T cells in mice. Another study from a team at the Peter MacCallum Cancer Center in Australia found FOXO1-boosted cells appeared genetically similar to immune stem cells and were better able to fend off solid tumors.

While still early, “these findings may help improve the design of CAR T cell therapies and potentially benefit a wider range of patients,” said Weber in a press release.

I Remember

Here’s how CAR T cell therapy usually works.

The approach focuses on T cells, a particular type of immune cell that naturally hunts down and eliminates infections and cancers inside the body. Enemy cells are dotted with a specific set of proteins, a kind of cellular fingerprint, that T cells recognize and latch onto.

Tumors also have a unique signature. But they can be sneaky, with some eventually developing ways to evade immune surveillance. In solid cancers, for example, they can pump out chemicals that fight off immune cell defenders, allowing the cancer to grow and spread.

CAR T cells are designed to override these barriers.

To make them, medical practitioners remove T cells from the body and genetically engineer them to produce tailor-made protein hooks targeting a particular protein on tumor cells. The supercharged T cells are then grown in petri dishes and transfused back into the body.

In the beginning, CAR T was a last-resort blood cancer treatment, but now it’s a first-line therapy. Keeping the engineered cells around inside the body, however, has been a struggle. With time, the cells stop dividing and become dysfunctional, potentially allowing the cancer to relapse.

The Translator

To tackle cell exhaustion, Weber’s team found inspiration in the body itself.

Our immune system has a cellular ledger tracking previous infections. The cells making up this ledger are called memory T cells. They’re a formidable military reserve, a portion of which resemble stem cells. When the immune system detects an invader it’s seen before—a virus, bacteria, or cancer cell—these reserve cells rapidly proliferate to fend off the attack.

CAR T cells don’t usually have this ability. Inside multiple cancers, they eventually die off—allowing cancers to return. Why?

In 2012, Dr. Crystal Mackall at Stanford University found several changes in gene expression that lead to CAR T cell exhaustion. In the new study, together with Weber, the team discovered a protein, FOXO1, that could lengthen CAR T’s effects.

In one test, a drug that inhibited FOXO1 caused CAR T cells to rapidly fail and eventually die in petri dishes. Erasing genes encoding FOXO1 also hindered the cells and increased signs of CAR T exhaustion. When infused into mice with leukemia, CAR T cells without FOXO1 couldn’t treat the cancer. By contrast, increasing levels of FOXO1 helped the cells readily fight it off.

Analyzing genes related to FOXO1, the team found they were mostly connected to immune cell memory. It’s likely that adding the gene encoding FOXO1 to CAR T cells promotes a stable memory for the cells, so they can easily recognize potential harm—be it cancer or pathogen—long after the initial infection.

When treating mice with leukemia, a single dose of the FOXO1-enhanced cells decreased cancer growth and increased survival up to five-fold compared to standard CAR T therapy. The enhanced treatment also tackled a type of bone cancer in mice, which is often hard to treat without surgery and chemotherapy.

An Immune Link

Meanwhile, the Australian team also zeroed in on FOXO1. Led by Drs. Junyun Lai, Paul Beavis, and Phillip Darcy, the team was looking for protein candidates to enhance CAR T longevity.

The idea was that, like their natural counterparts, engineered CAR T cells need a healthy metabolism to thrive and divide.

They started by analyzing a protein previously shown to enhance CAR T metabolism, potentially lowering the chances of exhaustion. By mapping the epigenome and transcriptome in CAR T cells—both of which tell us how genes are expressed—they discovered that FOXO1 regulates CAR T cell longevity.

As a proof of concept, the team induced exhaustion in the engineered cells by increasingly restricting their ability to divide.

In mice with cancer, cells supercharged with FOXO1 lasted months longer than those that hadn’t been boosted. The critters’ liver and kidney functions remained normal, and they didn’t lose weight during the treatment, a marker of overall health. The FOXO1 boost also changed how genes were expressed in the cells—they looked younger, as if in a stem cell-like state.

The new recipe also worked in T cells donated by six people with cancer who had undergone standard CAR T therapy. Adding a dose of FOXO1 to these cells increased their metabolism.

Multiple CAR T clinical trials are ongoing. But “the effects of such cells are transient and do not provide long-term protection against exhaustion,” wrote Darcy and team. In other words, durability is key for CAR T cells to live up to their full potential.

A FOXO1 boost offers a way—although it may not be the only way.

“By studying factors that drive memory in T cells, like FOXO1, we can enhance our understanding of why CAR T cells persist and work more effectively in some patients compared to others,” said Weber.

Image Credit: Gerardo Sotillo, Stanford Medicine

Scientists Create Atomically Thin Gold With Century-Old Japanese Knife Making Technique


Graphene has been hailed as a wonder material, but it also set off a rush to find other promising atomically thin materials. Now researchers have managed to create a 2D version of gold they call “goldene,” which could have a host of applications in chemistry.

Scientists had speculated about the possibility of creating layers of carbon just a single atom thick for many decades. But it wasn’t until 2004 that a team from the University of Manchester in the UK first produced graphene sheets using the remarkably simple technique of peeling them off a lump of graphite with common sticky tape.

The resulting material’s high strength, high conductivity, and unusual optical properties set off a stampede to find applications for it. But it also spurred researchers to investigate what kinds of exotic capabilities other ultra-thin materials could have.

Gold is one material scientists have long been eager to make as thin as graphene, but so far, efforts have been in vain. Now, though, researchers from Linköping University in Sweden have borrowed from an old Japanese forging technique to create ultra-thin flakes of what they’re calling “goldene.”

“If you make a material extremely thin, something extraordinary happens,” Shun Kashiwaya, who led the research, said in a press release. “The same thing happens with gold.”

Making goldene has proven tough in the past because its atoms tend to clump together. So, even if you can create a 2D sheet of gold atoms, they quickly roll up to create nanoparticles instead.

The researchers got around this by taking a ceramic called titanium silicon carbide, which features ultra-thin layers of silicon between layers of titanium carbide, and coating it with gold. They then heated it in a furnace, which caused the gold to diffuse into the material and replace the silicon layers in a process known as intercalation.

This created atomically thin layers of gold embedded in the ceramic. To get it out, they had to borrow a century-old technique developed by Japanese knife makers. They used a chemical formulation known as Murakami’s reagent, which etches away carbon residue, to slowly reveal the gold sheets.

The researchers had to experiment with different concentrations of the reagent and various etching times. They also had to add a detergent-like chemical called a surfactant that protected the gold sheets from the etching liquid and prevented them from curling up. The gold flakes could then be sieved out of the solution to be examined more closely.

In a paper in Nature Synthesis, the researchers describe how they used an electron microscope to confirm that the gold layers were indeed just one atom thick. They also showed that the goldene flakes were semiconductors.

It’s not the first time someone has claimed to have created goldene, notes Nature. But previous attempts have involved creating the ultra-thin sheets sandwiched between other materials, and the Linköping team say their effort is the first to create a “free-standing 2D metal.”

The material could have a range of use cases, the researchers say. Gold nanoparticles already show promise as catalysts that can turn plastic waste and biomass into valuable materials, they note in their paper, and they have properties that could prove useful for energy harvesting, creating photonic devices, or even splitting water to create hydrogen fuel.

It will take work to tweak the synthesis method so it can produce commercially useful amounts of the material, a challenge that has delayed the full arrival of graphene as a widely used product too. But the team is also investigating whether similar approaches can be applied to other useful catalytic metals. Graphene might not be the only wonder material in town for long.

Image Credit: Nature Synthesis (CC BY 4.0)

Boston Dynamics Says Farewell to Its Humanoid Atlas Robot—Then Brings It Back Fully Electric


Yesterday, Boston Dynamics announced it was retiring its hydraulic Atlas robot. Atlas has long been the standard bearer of advanced humanoid robots. Over the years, the company was known as much for its research robots as it was for slick viral videos of them working out in military fatigues, forming dance mobs, and doing parkour. Fittingly, the company put together a send-off video of Atlas’s greatest hits and blunders.

But there were clues this wasn’t really the end, not least of which was the specific inclusion of the word “hydraulic” and the last line of the video, “‘Til we meet again, Atlas.” It wasn’t a long hiatus. Today, the company released hydraulic Atlas’s successor—electric Atlas.

The new Atlas is notable for several reasons. First, and most obviously, Boston Dynamics has finally done away with hydraulic actuators in favor of electric motors. To be clear, Atlas has long had an onboard battery pack—but now it’s fully electric. The advantages of going electric include lower cost, noise, weight, and complexity. It also allows for a more polished design. From the company’s own Spot robot to a host of other humanoid robots, fully electric models are the norm these days. So, it’s about time Atlas made the switch.

Without a mess of hydraulic hoses to contend with, the new Atlas can now also contort itself in new ways. As you’ll note in the release video, the robot rises to its feet—a crucial skill for a walking robot—in a very, let’s say, special way. It folds its legs up along its torso and, impossibly for a human at least, pivots up through its waist (no hands). Once standing, Atlas swivels its head 180 degrees, then does the same thing at each hip joint and the waist. It takes a few watches to really appreciate all the weirdness there.

The takeaway is that while Atlas looks like us, it’s capable of movements we aren’t and therefore has more flexibility in how it completes future tasks.

This theme of same-but-different is evident in its head too. Instead of opting for a human-like head that risks slipping into the uncanny valley, the team chose a featureless (for now) lighted circle. In an interview with IEEE Spectrum, Boston Dynamics CEO Robert Playter said the human-like designs they tried seemed “a little bit threatening or dystopian.”

“We’re trying to project something else: a friendly place to look to gain some understanding about the intent of the robot,” he said. “The design borrows from some friendly shapes that we’d seen in the past. For example, there’s the old Pixar lamp that everybody fell in love with decades ago, and that informed some of the design for us.”

While most of these upgrades are improvements, there is one area where it’s not totally clear how well the new form will fare: strength and power.

Hydraulics are known to provide both, and Atlas pushed its hydraulics to their limits carrying heavy objects, executing backflips, and doing 180-degree, in-air twists. According to the press release and Playter’s interviews, little has been lost in this category. In fact, they say, electric Atlas is stronger than hydraulic Atlas. Still, as with all things robotics, the ultimate proof of how capable it is will likely be in video form, which we’ll eagerly await.

Despite big design updates, the company’s messaging is perhaps more notable. Atlas used to be a research robot. Now, the company intends to sell them commercially.

This isn’t terribly surprising. There are now a number of companies competing in the humanoid robots space, including Agility, 1X, Tesla, Apptronik, and Figure—which just raised $675 million at a $2.6 billion valuation. Several are making rapid progress, with a heavy focus on AI, and have kicked off real-world pilots.

Where does Boston Dynamics fit in? With Atlas, the company has been the clear leader for years. So, it’s not starting from the ground floor. Also, thanks to its Spot and Stretch robots, the company already has experience commercializing and selling advanced robots, from identifying product-market fit to dealing with logistics and servicing. But AI was, until recently, less of a focus. Now, they’re folding reinforcement learning into Spot, have begun experimenting with generative AI too, and promise more is coming.

Hyundai acquired Boston Dynamics for $1.1 billion in 2021. This may prove advantageous, as they have access to a world-class manufacturing company along with its resources and expertise producing and selling machines at scale. It’s also an opportunity to pilot Atlas in real-world situations and perfect it for future customers. Plans are already in motion to put Atlas to work at Hyundai next year.

Still, it’s worth noting that, although humanoid robots are attracting attention, getting big time investment, and being tried out in commercial contexts, there’s likely a ways to go before they reach the kind of generality some companies are touting. Playter says Boston Dynamics is going for multi-purpose, but still niche, robots in the near term.

“It definitely needs to be a multi-use case robot. I believe that because I don’t think there’s very many examples where a single repetitive task is going to warrant these complex robots,” he said. “I also think, though, that the practical matter is that you’re going to have to focus on a class of use cases, and really making them useful for the end customer.”

Humanoid robots that tidy your house and do the dishes may not be imminent, but the field is hot, and AI is bringing a degree of generality not possible a year ago. Now that Boston Dynamics has thrown its name in the hat, things will only get more interesting from here. We’ll be keeping a close eye on YouTube to see what new tricks Atlas has up its sleeve.

Image Credit: Boston Dynamics

Exploding Stars Are Rare—but if One Was Close Enough, It Could Threaten Life on Earth


Stars like the sun are remarkably constant. They vary in brightness by only 0.1 percent over years and decades, thanks to the fusion of hydrogen into helium that powers them. This process will keep the sun shining steadily for about 5 billion more years, but when stars exhaust their nuclear fuel, their deaths can lead to pyrotechnics.

The sun will eventually die by growing large and then condensing into a type of star called a white dwarf. But stars over eight times more massive than the sun die violently in an explosion called a supernova.

Supernovae happen across the Milky Way only a few times a century, and these violent explosions are usually remote enough that people here on Earth don’t notice. For a dying star to have any effect on life on our planet, it would have to go supernova within 100 light years from Earth.

I’m an astronomer who studies cosmology and black holes. In my writing about cosmic endings, I’ve described the threat posed by stellar cataclysms such as supernovae and related phenomena such as gamma-ray bursts. Most of these cataclysms are remote, but when they occur closer to home, they can pose a threat to life on Earth.

The Death of a Massive Star

Very few stars are massive enough to die in a supernova. But when one does, it briefly rivals the brightness of billions of stars. At a rate of one supernova per galaxy every 50 years, and with 100 billion galaxies in the universe, a supernova explodes somewhere roughly every hundredth of a second.
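
That rate is easy to verify with back-of-the-envelope arithmetic. Here’s a quick sketch in Python using the rough figures above (both are order-of-magnitude estimates, not precise measurements):

```python
# Back-of-the-envelope check of the cosmic supernova rate,
# using the rough figures quoted above.
GALAXIES = 100e9           # estimated galaxies in the observable universe
YEARS_PER_SUPERNOVA = 50   # roughly one supernova per galaxy every 50 years
SECONDS_PER_YEAR = 3.156e7

sn_per_second = GALAXIES / YEARS_PER_SUPERNOVA / SECONDS_PER_YEAR
print(f"Supernovae per second: {sn_per_second:.0f}")           # ~63
print(f"Seconds between supernovae: {1 / sn_per_second:.3f}")  # ~0.016
```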

The dying star emits high-energy radiation as gamma rays. Gamma rays are a form of electromagnetic radiation with wavelengths much shorter than light waves, meaning they’re invisible to the human eye. The dying star also releases a torrent of high-energy particles in the form of cosmic rays: subatomic particles moving at close to the speed of light.

Supernovae in the Milky Way are rare, but a few have been close enough to Earth that historical records discuss them. In 185 AD, a star appeared in a place where no star had previously been seen. It was probably a supernova.

Observers around the world saw a bright star suddenly appear in 1006 AD. Astronomers later matched it to a supernova 7,200 light years away. Then, in 1054 AD, Chinese astronomers recorded a star visible in the daytime sky that astronomers subsequently identified as a supernova 6,500 light years away.

A man with dark hair and a beard, wearing dark clothes with an elaborate collar, resting one hand on his hip and another on a globe.
Johannes Kepler, the astronomer who observed what was likely a supernova in 1604. Image Credit: Kepler-Museum in Weil der Stadt

Johannes Kepler observed the last supernova in the Milky Way in 1604, so in a statistical sense, the next one is overdue.

At 600 light years away, the red supergiant Betelgeuse in the constellation of Orion is the nearest massive star getting close to the end of its life. When it goes supernova, it will shine as bright as the full moon for those watching from Earth, without causing any damage to life on our planet.

Radiation Damage

If a star goes supernova close enough to Earth, the gamma-ray radiation could damage some of the planetary protection that allows life to thrive on Earth. There’s a time delay due to the finite speed of light. If a supernova goes off 100 light years away, it takes 100 years for us to see it.

Astronomers have found evidence of a supernova 300 light years away that exploded 2.5 million years ago. Radioactive atoms trapped in seafloor sediments are the telltale signs of this event. Its gamma rays eroded the ozone layer, which protects life on Earth from the sun’s harmful radiation. The event would have cooled the climate, leading to the extinction of some ancient species.

Safety from a supernova comes with greater distance. Gamma rays and cosmic rays spread out in all directions once emitted from a supernova, so the fraction that reaches Earth falls off with the square of the distance. For example, imagine two identical supernovae, with one 10 times closer to Earth than the other. Earth would receive radiation about a hundred times stronger from the closer event.
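
That factor of a hundred is just the inverse-square law. A minimal sketch of the example above:

```python
# Inverse-square law: the radiation received falls off with distance squared,
# so a supernova 10 times closer delivers about 100 times the dose.
def relative_dose(distance_ratio):
    """How much stronger the radiation is from the nearer of two
    identical supernovae, given the ratio of their distances."""
    return distance_ratio ** 2

print(relative_dose(10))  # 100
```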

A supernova within 30 light years would be catastrophic, severely depleting the ozone layer, disrupting the marine food chain and likely causing mass extinction. Some astronomers guess that nearby supernovae triggered a series of mass extinctions 360 to 375 million years ago. Luckily, these events happen within 30 light years only every few hundred million years.

When Neutron Stars Collide

But supernovae aren’t the only events that emit gamma rays. Neutron star collisions cause high-energy phenomena ranging from gamma rays to gravitational waves.

Left behind after a supernova explosion, neutron stars are city-size balls of matter with the density of an atomic nucleus, so 300 trillion times denser than the sun. These collisions created many of the gold and precious metals on Earth. The intense pressure caused by two ultradense objects colliding forces neutrons into atomic nuclei, which creates heavier elements such as gold and platinum.

A neutron star collision generates an intense burst of gamma rays. These gamma rays are concentrated into a narrow jet of radiation that packs a big punch.

If the Earth were in the line of fire of a gamma-ray burst within 10,000 light years, or 10 percent of the diameter of the galaxy, the burst would severely damage the ozone layer. It would also damage the DNA inside organisms’ cells, at a level that would kill many simple life forms like bacteria.

That sounds ominous, but neutron stars do not typically form in pairs, so there is only one collision in the Milky Way about every 10,000 years. They are 100 times rarer than supernova explosions. Across the entire universe, there is a neutron star collision every few minutes.

Gamma-ray bursts may not pose an imminent threat to life on Earth, but over very long time scales, bursts will inevitably hit the planet. The odds that a gamma-ray burst triggered a mass extinction are 50 percent over the past 500 million years and 90 percent over the 4 billion years since there has been life on Earth.

By that math, it’s quite likely that a gamma-ray burst caused one of the five mass extinctions in the past 500 million years. Astronomers have argued that a gamma-ray burst caused the first mass extinction 440 million years ago, when 60 percent of all marine creatures disappeared.

A Recent Reminder

The most extreme astrophysical events have a long reach. Astronomers were reminded of this in October 2022, when a pulse of radiation swept through the solar system and overloaded all of the gamma-ray telescopes in space.

It was the brightest gamma-ray burst to occur since human civilization began. The radiation caused a sudden disturbance to the Earth’s ionosphere, even though the source was an explosion nearly two billion light years away. Life on Earth was unaffected, but the fact that it altered the ionosphere is sobering—a similar burst in the Milky Way would be a million times brighter.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA, ESA, Joel Kastner (RIT)

A New Photonic Computer Chip Uses Light to Slash AI Energy Costs

0

AI models are power hogs.

As the algorithms grow and become more complex, they’re increasingly taxing current computer chips. Multiple companies have designed chips tailored to AI to reduce power draw. But they’re all based on one fundamental rule—they use electricity.

This month, a team from Tsinghua University in China switched up the recipe. They built a neural network chip that uses light rather than electricity to run AI tasks at a fraction of the energy cost of NVIDIA’s H100, a state-of-the-art chip used to train and run AI models.

Called Taichi, the chip combines two types of light-based processing into its internal structure. Compared to previous optical chips, Taichi is far more accurate for relatively simple tasks such as recognizing hand-written numbers or other images. Unlike its predecessors, the chip can generate content too. It can make basic images in a style based on the Dutch artist Vincent van Gogh, for example, or classical musical numbers inspired by Johann Sebastian Bach.

Part of Taichi’s efficiency is due to its structure. The chip is made of multiple components called chiplets. Similar to the brain’s organization, each chiplet performs its own calculations in parallel, the results of which are then integrated with the others to reach a solution.

Faced with the challenging problem of sorting images into 1,000 categories, Taichi was successful nearly 92 percent of the time, matching current chip performance while slashing energy consumption more than a thousand-fold.

For AI, “the trend of dealing with more advanced tasks [is] irreversible,” wrote the authors. “Taichi paves the way for large-scale photonic [light-based] computing,” leading to more flexible AI with lower energy costs.

Chip on the Shoulder

Today’s computer chips don’t mesh well with AI.

Part of the problem is structural. Processing and memory on traditional chips are physically separated. Shuttling data between them takes up enormous amounts of energy and time.

While efficient for solving relatively simple problems, the setup is incredibly power hungry when it comes to complex AI, like the large language models powering ChatGPT.

The main problem is how computer chips are built. Each calculation relies on transistors, which switch on or off to represent the 0s and 1s used in calculations. Engineers have dramatically shrunk transistors over the decades so they can cram ever more onto chips. But current chip technology is cruising towards a breaking point where we can’t go smaller.

Scientists have long sought to revamp current chips. One strategy inspired by the brain relies on “synapses”—the biological “dock” connecting neurons—that compute and store information at the same location. These brain-inspired, or neuromorphic, chips slash energy consumption and speed up calculations. But like current chips, they rely on electricity.

Another idea is to use a different computing mechanism altogether: light. “Photonic computing” is “attracting ever-growing attention,” wrote the authors. Rather than using electricity, it may be possible to hijack light particles to power AI at the speed of light.

Let There Be Light

Compared to electricity-based chips, light uses far less power and can simultaneously tackle multiple calculations. Tapping into these properties, scientists have built optical neural networks that use photons—particles of light—for AI chips, instead of electricity.

These chips can work two ways. In one, chips scatter light signals into engineered channels that eventually combine the rays to solve a problem. This approach, called diffraction, packs artificial neurons closely together and minimizes energy costs. But diffraction-based chips can’t be easily changed, meaning they can only work on a single, simple problem.

A different setup depends on another property of light called interference. Like ocean waves, light waves combine and cancel each other out. When inside micro-tunnels on a chip, they can collide to boost or inhibit each other—these interference patterns can be used for calculations. Chips based on interference can be easily reconfigured using a device called an interferometer. Problem is, they’re physically bulky and consume tons of energy.

Then there’s the problem of accuracy. Even in the sculpted channels often used for interference experiments, light bounces and scatters, making calculations unreliable. For a single optical neural network, the errors are tolerable. But with larger optical networks and more sophisticated problems, noise rises exponentially and becomes untenable.

This is why light-based neural networks can’t be easily scaled up. So far, they’ve only been able to solve basic tasks, such as recognizing numbers or vowels.

“Magnifying the scale of existing architectures would not proportionally improve the performances,” wrote the team.

Double Trouble

The new AI, Taichi, combined the two traits to push optical neural networks towards real-world use.

Rather than configuring a single neural network, the team used a chiplet method, which delegated different parts of a task to multiple functional blocks. Each block had its own strengths: One was set up to analyze diffraction, which could compress large amounts of data in a short period of time. Another block was embedded with interferometers to provide interference, allowing the chip to be easily reconfigured between tasks.

Compared to deep learning, Taichi took a “shallow” approach whereby the task is spread across multiple chiplets.

With standard deep learning structures, errors tend to accumulate over layers and time. This setup nips problems that come from sequential processing in the bud. When faced with a problem, Taichi distributes the workload across multiple independent clusters, making it easier to tackle larger problems with minimal errors.
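
To make the layout concrete, here’s a toy sketch in Python of the distribute-and-combine idea. It illustrates only the parallel, shallow structure; the layer sizes and weights are made up, and it does not model the chip’s actual optical hardware:

```python
import numpy as np

# Toy sketch of a "shallow, distributed" network: several independent
# chiplet-like blocks process the same input in parallel, and one final
# stage integrates their outputs. All sizes and weights are placeholders.
rng = np.random.default_rng(0)
n_chiplets, d_in, d_hidden, d_out = 4, 64, 32, 10

chiplets = [rng.normal(size=(d_in, d_hidden)) for _ in range(n_chiplets)]
combiner = rng.normal(size=(n_chiplets * d_hidden, d_out))

def forward(x):
    # Each shallow block works independently, so errors don't compound
    # through a deep stack of sequential layers.
    parts = [np.maximum(x @ w, 0) for w in chiplets]
    return np.concatenate(parts) @ combiner

print(forward(rng.normal(size=d_in)).shape)  # (10,)
```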

The strategy paid off.

Taichi has the computational capacity of 4,256 total artificial neurons, with nearly 14 million parameters mimicking the brain connections that encode learning and memory. When sorting images into 1,000 categories, the photonic chip was nearly 92 percent accurate, comparable to “currently popular electronic neural networks,” wrote the team.

The chip also excelled in other standard AI image-recognition tests, such as identifying hand-written characters from different alphabets.

As a final test, the team challenged the photonic AI to grasp and recreate content in the style of different artists and musicians. When trained with Bach’s repertoire, the AI eventually learned the pitch and overall style of the musician. Similarly, images from van Gogh or Edvard Munch—the artist behind the famous painting, The Scream—fed into the AI allowed it to generate images in a similar style, although many looked like a toddler’s recreation.

Optical neural networks still have much further to go. But if used broadly, they could be a more energy-efficient alternative to current AI systems. Taichi is over 100 times more energy efficient than previous iterations. But the chip still requires lasers for power and data transfer units, which are hard to condense.

Next, the team is hoping to integrate readily available mini lasers and other components into a single, cohesive photonic chip. Meanwhile, they hope Taichi will “accelerate the development of more powerful optical solutions” that could eventually lead to “a new era” of powerful and energy-efficient AI.

Image Credit: spainter_vfx / Shutterstock.com

This Week’s Awesome Tech Stories From Around the Web (Through April 13)

ROBOTICS

Is Robotics About to Have Its Own ChatGPT Moment?
Melissa Heikkilä | MIT Technology Review
“For decades, roboticists have more or less focused on controlling robots’ ‘bodies’—their arms, legs, levers, wheels, and the like—via purpose-driven software. But a new generation of scientists and inventors believes that the previously missing ingredient of AI can give robots the ability to learn new skills and adapt to new environments faster than ever before. This new approach, just maybe, can finally bring robots out of the factory and into our homes.”

ARTIFICIAL INTELLIGENCE

Humans Forget. AI Assistants Will Remember Everything
Boone Ashworth | Wired
“Human brains, Gruber says, are really good at story retrieval, but not great at remembering details, like specific dates, names, or faces. He has been arguing for digital AI assistants that can analyze everything you do on your devices and index all those details for later reference.”

BIOTECH

The Effort to Make a Breakthrough Cancer Therapy Cheaper
Cassandra Willyard | MIT Technology Review
“CAR-T therapies are already showing promise beyond blood cancers. Earlier this year, researchers reported stunning results in 15 patients with lupus and other autoimmune diseases. CAR-T is also being tested as a treatment for solid tumors, heart disease, aging, HIV infection, and more. As the number of people eligible for CAR-T therapies increases, so will the pressure to reduce the cost.”

ETHICS

Students Are Likely Writing Millions of Papers With AI
Amanda Hoover | Wired
“A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent may contain AI-written language in 20 percent of its content, with 3 percent of the total papers reviewed getting flagged for having 80 percent or more AI writing.”

SCIENCE

Physicists Capture First-Ever Image of an Electron Crystal
Isaac Schultz | Gizmodo
“Electrons are typically seen flitting around their atoms, but a team of physicists has now imaged the particles in a very different state: nestled together in a quantum phase called a Wigner crystal, without a nucleus at their core. The phase is named after Eugene Wigner, who predicted in 1934 that electrons would crystallize in a lattice when certain interactions between them are strong enough. The recent team used high-resolution scanning tunneling microscopy to directly image the predicted crystal.”

GADGETS

Review: Humane Ai Pin
Julian Chokkattu | Wired
“Humane has potential with the Ai Pin. I like being able to access an assistant so quickly, but right now, there’s nothing here that makes me want to use it over my smartphone. Humane says this is just version 1.0 and that many of the missing features I’ve mentioned will arrive later. I’ll be happy to give it another go then.”

SPACE

The Moon Likely Turned Itself Inside Out 4.2 Billion Years Ago
Passant Rabie | Gizmodo
“A team of researchers from the University of Arizona found new evidence that supports one of the wildest formation theories for the moon, which suggests that Earth’s natural satellite may have turned itself inside out a few million years after it came to be. In a new study published Monday in Nature Geoscience, the researchers looked at subtle variations in the moon’s gravitational field to provide the first physical evidence of a sinking mineral-rich layer.”

TECH

How Tech Giants Cut Corners to Harvest Data for AI
Cade Metz, Cecilia Kang, Sheera Frenkel, Stuart A. Thompson, and Nico Grant | The New York Times
“The race to lead AI has become a desperate hunt for the digital data needed to advance the technology. To obtain that data, tech companies including OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law, according to an examination by The New York Times.”

ENERGY

Artificial Intelligence’s ‘Insatiable’ Energy Needs Not Sustainable, Arm CEO Says
Peter Landers | The Wall Street Journal
“In a January report, the International Energy Agency said a request to ChatGPT requires 2.9 watt-hours of electricity on average—equivalent to turning on a 60-watt lightbulb for just under three minutes. That is nearly 10 times as much as the average Google search. The agency said power demand by the AI industry is expected to grow by at least 10 times between 2023 and 2026.”

FUTURE

Someday, Earth Will Have a Final Total Solar Eclipse
Katherine Kornei | The New York Times
“The total solar eclipse visible on Monday over parts of Mexico, the United States and Canada was a perfect confluence of the sun and the moon in the sky. But it’s also the kind of event that comes with an expiration date: At some point in the distant future, Earth will experience its last total solar eclipse. That’s because the moon is drifting away from Earth, so our nearest celestial neighbor will one day, millions or even billions of years in the future, appear too small in the sky to completely obscure the sun.”

Image Credit: Tim Foster / Unsplash

Elon Musk Doubles Down on Mars Dreams and Details What’s Next for SpaceX’s Starship

0

Elon Musk has long been open about his dreams of using SpaceX to spread humanity’s presence further into the solar system. And last weekend, he gave an updated outline of his vision for how the company’s rockets could enable the colonization of Mars.

The serial entrepreneur has been clear for a number of years that the main motivation for founding SpaceX was to make humans a multiplanetary species. For a long time, that seemed like the kind of aspirational goal one might set to inspire and motivate engineers rather than one with a realistic chance of coming to fruition.

But following the successful launch of the company’s mammoth Starship vehicle last month, the idea is beginning to look less far-fetched. And in a speech at the company’s facilities in South Texas, Musk explained how he envisions using Starship to deliver millions of tons of cargo to Mars over the next couple of decades to create a self-sustaining civilization.

“Starship is the first design of a rocket that is actually capable of making life multiplanetary,” Musk said. “No rocket before this has had the potential to extend life to another planet.”

In a slightly rambling opening to the speech, Musk explained that making humans multiplanetary could be an essential insurance policy in case anything catastrophic happens to Earth. The red planet is the most obvious choice, he said, as it’s neither too close nor too far from Earth and has many of the raw ingredients required to support a functioning settlement.

But he estimates it will require us to deliver several million tons of cargo to the surface to get that civilization up and running. Starship is central to those plans, and Musk outlined the company’s roadmap for the massive rocket over the coming years.

Key to the vision is making the vehicle entirely reusable. That means the first hurdle is proving SpaceX can land and reuse both the Super Heavy first stage rocket and the Starship spacecraft itself. The second of those challenges will be tougher, as the vehicle must survive reentry into the atmosphere—in the most recent test, it broke up on its way back to Earth.

Musk says they plan to demonstrate the ability to land and reuse the Super Heavy booster this year, which he thinks has an 80 to 90 percent chance of success. Assuming they can get Starship to survive the extreme heat of reentry, they are also going to attempt landing the vehicle on a mock launch pad out at sea in 2024, with the aim of being able to land and reuse it by next year.

Proving the rocket works and is reusable is just the very first step in Musk’s Mars ambitions though. To achieve his goal of delivering a million people to the red planet in the next 20 years, SpaceX will have to massively ramp up its production and launch capabilities.

The company is currently building a second launch tower at its base in South Texas and is also planning to build two more at Cape Canaveral in Florida. Musk said the Texas sites would be mostly used for test launches and development work, with the Florida ones being the main hub for launches once Starship begins commercial operations.

SpaceX plans to build six Starships this year, according to Musk, but it is also building what he called a “giant factory” that will enable it to massively ramp up production of the spacecraft. The long-term goal is to produce multiple Starships a day. That’s crucial, according to Musk, because Starships initially won’t return from Mars and will instead be used as raw materials to construct structures on the surface.

The company also plans to continue development of Starship, boosting its carrying capacity from around 100 tons today to 200 tons in the future and enabling it to complete multiple launches in a day. SpaceX also hopes to demonstrate ship-to-ship refueling in orbit next year. It will be necessary to replenish the fuel used up by Starship on launch so it has a full tank as it sets off for Mars.

Those missions will depart when the orbits of Earth and Mars bring them close together, an alignment that only happens every 26 months. As such, Musk envisions entire armadas of Starships setting off together whenever these windows arrive.
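
That 26-month cadence is the synodic period of Earth and Mars, and it follows directly from the two planets’ orbital periods:

```python
# Synodic period: how often Earth and Mars return to the same relative
# alignment, computed from their orbital periods (in Earth years).
EARTH_PERIOD = 1.0
MARS_PERIOD = 1.881

synodic_years = 1 / (1 / EARTH_PERIOD - 1 / MARS_PERIOD)
print(f"{synodic_years:.2f} years, or about {synodic_years * 12:.0f} months")
# ~2.14 years, or about 26 months
```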

SpaceX has done some early work on what needs to happen once Starships arrive at the red planet. They’ve identified promising landing sites and the infrastructure that will need setting up, including power generation, ice-mining facilities, propellant factories, and communication networks. But Musk admits they’ve yet to start development of any of these.

One glaring omission in the talk was any detail on who’s going to be paying for all of this. While the goal of making humankind multiplanetary is a noble one, it’s far from clear how the endeavor would make money for those who put up the funds to make it possible.

Musk estimates that the cost of each launch could eventually fall to just $2 to $3 million. And he noted that profits from the company’s Starlink satellites and Falcon 9 launch vehicle are currently paying for Starship’s development. But those revenue streams are unlikely to cover the thousands of launches a year required to make his Mars dreams a reality.

Still, the very fact that the questions these days are more about economics than technical feasibility is testament to the rapid progress SpaceX has made. The dream of becoming a multiplanetary species may not be science fiction for much longer.

Image Credit: SpaceX

This Company Is Growing Mini Livers Inside People to Fight Liver Disease

0

Growing a substitute liver inside a human body sounds like science fiction.

Yet a patient with severe liver damage just received an injection that could grow an additional “mini liver” directly inside their body. If all goes well, it’ll take up the failing liver’s job of filtering toxins from the blood.

For people with end-stage liver disease, a transplant is the only solution. But matching donor organs are hard to come by. Across the globe, two million people die from liver failure each year.

The new treatment, helmed by biotechnology company LyGenesis, offers an unusual solution. Rather than transplanting a whole new liver, the team is injecting healthy donor liver cells into lymph nodes in the patient’s upper abdomen. In a few months, it’s hoped the cells will gradually replicate and grow into a functional miniature liver.

The patient is part of a Phase 2a clinical trial, a stage that begins to gauge whether a therapy is effective. In up to 12 people with end-stage liver disease, the trial will test multiple doses to find the “Goldilocks” zone of treatment—effective with minimal side effects.

If successful, the therapy could sidestep the transplant organ shortage problem, not just for liver disease, but potentially also for kidney failure or diabetes. The math also works in favor of patients. Instead of one donor organ per recipient, healthy cells from one person could help multiple people in need of new organs.

A Living Bioreactor

Most of us don’t think about lymph nodes until we catch a cold, and they swell up painfully under the chin. These structures are dotted throughout the body. Like tiny cellular nurseries, they help immune cells proliferate to fend off invading viruses and bacteria.

They also have a dark side. Lymph nodes aid the spread of breast and other types of cancers. Because they’re highly connected to a highway of lymphatic vessels, cancer cells tunnel into them and take advantage of nutrients in the blood to grow and spread across the body.

What seems like a biological downfall may benefit regenerative medicine. If lymph nodes can support both immune cells and cancer growth, they may also be able to incubate other cell types and grow them into tissues—or even replacement organs.

The idea diverges from usual regenerative therapies, such as stem cell treatments, which aim to revive damaged tissues at the spot of injury. This is a hard ask: When organs fail, they often scar and spew out toxic chemicals that prevent engrafted cells from growing.

Lymph nodes offer a way to skip these cellular cesspools entirely.

Growing organs inside lymph nodes may sound far-fetched, but over a decade ago, LyGenesis’ chief scientific officer and co-founder, Dr. Eric Lagasse, showed it was possible in mice. In one test, his team injected liver cells directly into a lymph node inside a mouse’s belly. They found the grafted cells stayed in the “nursery,” rather than roaming the body and causing unexpected side effects.

In a mouse model of lethal liver failure, an infusion of healthy liver cells into the lymph node grew into a mini liver in just twelve weeks. The transplanted cells took over their host node, developing into cube-like cells characteristic of normal liver cells and leaving behind just a sliver of normal lymph node cells.

The graft could support immune system growth and grew cells to shuttle bile and other digestive chemicals. It also boosted the mice’s average survival rate. Without treatment, most mice died within 10 weeks of the start of the study. Most mice injected with liver cells survived past 30 weeks.

A similar strategy worked in dogs and pigs with damaged livers. Injecting donor cells into lymph nodes formed mini livers in less than two months in pigs. Under the microscope, the newborn structures resembled the liver’s intricate architecture, including “highways” for bile to easily flow along instead of accumulating, which causes even more damage and scarring.

The body has over 500 lymph nodes. Injecting liver cells into nodes elsewhere in the body also grew mini livers, but they weren’t as effective.

“It’s all about location, location, location,” said Lagasse at the time.

A Daring Trial

With prior experience guiding their clinical trial, LyGenesis dosed a first patient in late March.

The team used a technique called endoscopic ultrasound to direct the cells into the designated lymph node. In the procedure, a thin, flexible tube with a small ultrasound device is inserted through the mouth into the digestive tract. The ultrasound generates an image of the surrounding tissue and helps guide the tube to the target lymph node for injection.

The procedure may sound difficult, but compared to a liver transplant, it’s minimally invasive. In an interview with Nature, Dr. Michael Hufford, CEO of LyGenesis, said the patient is recovering well and has already been discharged from the clinic.

The company aims to enroll all 12 patients by mid-2025 to test the therapy’s safety and efficacy.

Many questions remain. The transplanted cells could grow into mini livers of different sizes, based on chemical signals from the body. Although not a problem in mice and pigs, could they potentially overgrow in humans? Meanwhile, patients receiving the treatment will need to take a hefty dose of medications to suppress their immune systems. How these will interact with the transplants is also unknown.

Another question is dosage. Lymph nodes are plentiful. The trial will inject liver cells into up to five lymph nodes to see if multiple mini livers can grow and function without side effects.

If successful, the therapy could have a far wider reach.

In diabetic mice, seeding lymph nodes with pancreatic cellular clusters restored their blood sugar levels. A similar strategy could combat Type 1 diabetes in humans. The company is also looking into whether the technology can revive kidney function or even combat aging.

But for now, Hufford is focused on helping millions of people with liver damage. “This therapy will potentially be a remarkable regenerative medicine milestone by helping patients with ESLD [end-stage liver disease] grow new functional ectopic livers in their own body,” he said.

Image Credit: A solution with liver cells in suspension / LyGenesis

Harvard’s New Programmable Liquid Shifts Its Properties on Demand

0

We’re surrounded by ingenious substances: a menu of metal alloys that can wrap up leftovers or skin rockets, paints in any color imaginable, and ever-morphing digital displays. Virtually all of these exploit the natural properties of the underlying materials.

But an emerging class of materials is more versatile, even programmable.

Known as metamaterials, these substances are meticulously engineered such that their structural makeup—as opposed to their composition—determines their properties. Some metamaterials might make long-distance wireless power transfer practical, others could bring “invisibility cloaks” or futuristic materials that respond to brainwaves.

But most examples are solid metamaterials—a Harvard team wondered if they could make a metafluid. As it turns out, yes, absolutely. The team recently described their results in Nature.

“Unlike solid metamaterials, metafluids have the unique ability to flow and adapt to the shape of their container,” Katia Bertoldi, a professor in applied mechanics at Harvard and senior author of the paper, said in a press release. “Our goal was to create a metafluid that not only possesses these remarkable attributes but also provides a platform for programmable viscosity, compressibility, and optical properties.”

The team’s metafluid is made up of hundreds of thousands of tiny, stretchy spheres—each between 50 and 500 microns across—suspended in oil. The spheres change shape depending on the pressure of the surrounding oil. At higher pressures, they deform, one hemisphere collapsing inward into a kind of half-moon shape. They then resume their original spherical shape when the pressure is relieved.

The metafluid’s properties—such as viscosity or opacity—change depending on which of these shapes its constituent spheres assume. The fluid’s properties can be fine-tuned based on how many spheres are in the liquid and how big or thick they are.

Greater pressure causes the spheres to collapse. When the pressure is relieved, they resume their spherical shape. Credit: Adel Djellouli/Harvard SEAS

As a proof of concept, the team filled a hydraulic robotic gripper with their metafluid. Robots usually have to be programmed to sense objects and adjust grip strength. The team showed the gripper could automatically adapt to a blueberry, a glass, and an egg without additional sensing or programming required. The pressure of each object “programmed” the liquid to adjust, allowing the gripper to pick up all three, undamaged, with ease.

The team also showed the metafluid could switch from opaque, when its constituents were spherical, to more transparent, when they collapsed. The latter shape, the researchers said, functions like a lens focusing light, while the former scatters light.

The metafluid obscures the Harvard logo then becomes more transparent as the capsules collapse. Credit: Adel Djellouli/Harvard SEAS

Also of note, the metafluid behaves like a Newtonian fluid when its components are spherical, meaning its viscosity only changes with shifts in temperature. When they collapse, however, it becomes a non-Newtonian fluid, whose viscosity changes depending on the shear forces present. The greater the shear force—that is, parallel forces pushing in opposite directions—the less viscous the metafluid becomes.
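
A common textbook way to describe that kind of shear-thinning behavior is a power-law fluid model, sketched below. The constants are purely illustrative, not values measured for this metafluid:

```python
# Power-law model of a shear-thinning, non-Newtonian fluid.
# The constants below are illustrative, not measurements from the study.
def effective_viscosity(shear_rate, k=1.0, n=0.5):
    """n == 1 recovers a Newtonian fluid; n < 1 means shear-thinning."""
    return k * shear_rate ** (n - 1)

for rate in (0.1, 1.0, 10.0):
    print(f"shear rate {rate:>4}: viscosity {effective_viscosity(rate):.2f}")
# Viscosity drops as shear increases: the fluid gets runnier under force.
```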

Next, the team will investigate additional properties—such as how their creation’s acoustics and thermodynamics change with pressure—and look into commercialization. Making the elastic spheres themselves is fairly straightforward, and they think metafluids like theirs might be useful in robots, as “intelligent” shock absorbers, or in color-changing e-inks.

“The application space for these scalable, easy-to-produce metafluids is huge,” said Bertoldi.

Of course, the team’s creation is still in the research phase. There are plenty of hoops to jump through before it shows up in products we all might enjoy. Still, the work adds to a growing list of metamaterials—and shows the promise of going from solid to liquid.

Image Credit: Adel Djellouli/Harvard SEAS

3 Body Problem: Is the Universe Really a ‘Dark Forest’ Full of Hostile Aliens in Hiding?

0

We have no good reason to believe that aliens have ever contacted Earth. Sure, there are conspiracy theories, and some rather strange reports about harm to cattle, but nothing credible. Physicist Enrico Fermi found this odd. His formulation of the puzzle, proposed in the 1950s and now known as the Fermi Paradox, is still key to the search for extraterrestrial intelligence (SETI) and to messaging extraterrestrial intelligence (METI) by sending signals into space.

The Earth is about 4.5 billion years old, and life is at least 3.5 billion years old. The paradox states that, given the scale of the universe, favorable conditions for life are likely to have occurred many, many times. So where is everyone? We have good reasons to believe that there must be life out there, but nobody has come to call.

This is an issue that the character Ye Wenjie wrestles with in the first episode of Netflix’s 3 Body Problem. Working at a radio observatory, she does finally receive a message from a member of an alien civilization—telling her they are a pacifist and urging her not to respond to the message or Earth will be attacked.

The series will ultimately offer a detailed, elegant solution to the Fermi Paradox, but we will have to wait until the second season.

Or you can read the second book in Cixin Liu’s series, The Dark Forest. Without spoilers, the explanation set out in the books runs as follows: “The universe is a dark forest. Every civilization is an armed hunter stalking through the trees like a ghost, gently pushing aside branches that block the path and trying to tread without sound.”

Ultimately, everybody is hiding from everyone else. Differential rates of technological progress make an ongoing balance of power impossible, leaving the most rapidly progressing civilizations in a position to wipe out anyone else.

In this ever-threatening environment, those who play the survival game best are the ones who survive longest. We have joined a game that has been going on since before our arrival, and the strategy everyone has learned is to hide. Nobody who knows the game will be foolish enough to contact anyone—or to respond to a message.

Liu has depicted what he calls “the worst of all possible universes,” continuing a trend within Chinese science fiction. He is not saying that our universe is an actual dark forest, with one survival strategy of silence and predation prevailing everywhere, but that such a universe is possible and interesting.

Liu’s dark forest theory is also sufficiently plausible to have reinforced a trend in the scientific discussion in the West—away from worries about mutual incomprehensibility, and towards concerns about direct threat.

We can see its potential influence in the protocol for what to do on first contact that was proposed in 2020 by the prominent astrobiologists Kelly Smith and John Traphagan. “First, do nothing,” they conclude, because doing something could lead to disaster.

In the case of alien contact, Earth should be notified using pre-established signaling rather than anything improvised, they argue. And we should avoid doing anything that might disclose information about who we are. Defensive behavior would show our familiarity with conflict, so that would not be a good idea. Returning messages would give away the location of Earth—also a bad idea.

Again, Smith and Traphagan’s thought is not that the dark forest theory is correct. Benevolent aliens really could be out there. The thought is simply that first contact would involve a high civilization-level risk.

This differs from the assumptions of much Soviet-era Russian literature about space, which suggested that advanced civilizations would necessarily have progressed beyond conflict and would therefore share a comradely attitude. That no longer seems to be regarded as a plausible guide to protocols for contact.

Misinterpreting Darwin

The interesting thing is that the dark forest theory is almost certainly wrong. Or at least, it is wrong in our universe. It sets up a scenario in which there is a Darwinian process of natural selection, a competition for survival.

Charles Darwin’s account of competition for survival is evidence-based. By contrast, we have absolutely no evidence about alien behavior, or about competition within or between other civilizations. This makes for entertaining guesswork rather than good science, even if we accept the idea that natural selection could operate at group level, at the level of civilizations.

Even if you were to assume the universe did operate in accordance with Darwinian evolution, the argument is questionable. No actual forest is like the dark one. They are noisy places where co-evolution occurs.

Creatures evolve together, in mutual interdependence, and not alone. Parasites depend upon hosts, flowers depend upon birds for pollination. Every creature in a forest depends upon insects. Mutual connection does lead to encounters which are nasty, brutish and short, but it also takes other forms. That is how forests in our world work.

Interestingly, Liu acknowledges this interdependence as a counterpoint to the dark forest theory. The viewer and the reader are told repeatedly that “in nature, nothing exists alone”—a quote from Rachel Carson’s Silent Spring (1962), a text which tells us that bugs can be our friends and not our enemies.

The four galaxies within Stephan’s Quintet.
There are many galaxies out there, and potentially plenty of life. Image Credit: X-ray: NASA/CXC/SAO

In Liu’s story, this is used to explain why some humans immediately go over to the side of the aliens, and why the urge to make contact is so strong, in spite of all the risks. Ye Wenjie ultimately replies to the alien warning.

The Carson allusions do not reinstate the old Russian idea that aliens will be advanced and therefore comradely. But they do help to paint a more varied and realistic picture than the dark forest theory.

For this reason, the dark forest solution to the Fermi Paradox is unconvincing. The fact that we do not hear anyone is just as likely to indicate that they are too far off, or we are listening in all the wrong ways, or else that there is no forest and nothing else to be heard.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: ESO/A. Ghizzi Panizza (www.albertoghizzipanizza.com)

Your Brain Breaks Its Own DNA to Form Memories That Can Last a Lifetime

0

Some memories last a lifetime. The awe of seeing a full solar eclipse. The first smile you shared with your partner. The glimpse of a beloved pet who just passed away in their sleep.

Other memories, not so much. Few of us remember what we had for lunch a week ago. Why do some memories last, while others fade away?

Surprisingly, the answer may be broken DNA and inflammation in the brain. On the surface, these processes sound utterly detrimental to brain function. Broken DNA strands are usually associated with cancer, and inflammation is linked to aging.

But a new study in mice suggests that breaking and repairing DNA in neurons paves the way for long-lasting memories.

We form memories when electrical signals zap through neurons in the hippocampus, a seahorse-shaped region deep inside the brain. The electrical pulses wire groups of neurons together into networks that encode memories. The signals only capture brief snippets of a treasured experience, yet some can be replayed over and over for decades (although they do gradually decay like a broken record).

Scientists have long thought that, as in the artificial neural networks powering most of today’s AI, rewiring the brain’s connections happens fast and is prone to change. But the new study found a subset of neurons that alter their connections to encode long-lasting memories.

To do this, strangely, the neurons recruit proteins that normally fend off bacteria and cause inflammation.

“Inflammation of brain neurons is usually considered to be a bad thing, since it can lead to neurological problems such as Alzheimer’s and Parkinson’s disease,” said study author Dr. Jelena Radulovic at Albert Einstein College of Medicine in a press release. “But our findings suggest that inflammation in certain neurons in the brain’s hippocampal region is essential for making long-lasting memories.”

Should I Stay or Should I Go?

We all have a mental scrapbook for our lives. When playing a memory—the whens, wheres, whos, and whats—our minds transport us through time to relive the experience.

The hippocampus is at the heart of this ability. In the 1950s, a man known as H.M. had his hippocampus removed to treat epilepsy. After the surgery, he retained old memories, but could no longer form new ones, suggesting that the brain region is a hotspot for encoding memories.

But what does DNA have to do with the hippocampus or memory?

It comes down to how brain cells are wired. Neurons connect with each other through little bumps called synapses. Like docks between two opposing shores, synapses pump out chemicals to transmit messages from one neuron to another. Depending on the signals, synapses can form a strong connection to their neighboring neurons, or they can dial down communications.

This ability to rewire the brain is called synaptic plasticity. Scientists have long thought it’s the basis of memory. When learning something new, electrical signals flow through neurons triggering a cascade of molecules. These stimulate genes that restructure the synapse to either bump up or decrease their connection with neighbors. In the hippocampus, this “dial” can rapidly change overall neural network wiring to record new memories.

Synaptic plasticity comes at a cost. Synapses are made up of a collection of proteins produced from DNA inside cells. With new learning, electrical signals cause temporary snips to the DNA inside neurons.

DNA damage isn’t always detrimental, and it has been associated with memory formation since 2021, when a study found that breakage of our genetic material is widespread in the brain and, surprisingly, linked to better memory in mice. After learning a task, mice had more DNA breaks in multiple types of brain cells, hinting that the temporary damage may be part of the brain’s learning and memory process.

But the results were only for brief memories. Do similar mechanisms also drive long-term ones?

“What enables brief experiences, encoded over just seconds, to be replayed again and again during a lifetime remains a mystery,” Drs. Benjamin Kelvington and Ted Abel at the Iowa Neuroscience Institute, who were not involved in the work, wrote in Nature.

The Memory Omelet

To find an answer, the team used a standard method for assessing memory. They housed mice in different chambers: Some were comfortable; others gave the critters a tiny electrical zap to the paws, just enough that they disliked the habitat. The mice rapidly learned to prefer the comfortable room.

The team then compared gene expression from mice with a recent memory—roughly four days after the test—to those nearly a month after the stay.

Surprisingly, genes involved in inflammation flared up in addition to those normally associated with synaptic plasticity. Digging deeper, the team found a protein called TLR9. Usually known as part of the body’s first line of defense against dangerous bacteria, TLR9 boosts the body’s immune response against DNA fragments from invading bacteria. Here, however, the gene became highly active in neurons inside the hippocampus—especially those with persistent DNA breaks that last for days.

What does it do? In one test, the team deleted the gene encoding TLR9 in the hippocampus. When challenged with the chamber test, these mice struggled to remember the “dangerous” chamber in a long-term memory test compared to peers with the gene intact.

Interestingly, the team found that TLR9 could sense DNA breakage. Deleting the gene prevented mouse cells from recognizing DNA breaks, causing not just loss of long-term memory, but also overall genomic instability in their neurons.

“One of the most important contributions of this study is the insight into the connection between DNA damage…and the persistent cellular changes associated with long-term memory,” wrote Kelvington and Abel.

Memory Mystery

How long-term memories persist remains a mystery. Immune responses are likely just one aspect.

In 2021, the same team found that net-like structures around neurons are crucial for long-term memory. The new study pinpointed TLR9 as a protein that helps form these structures, providing a molecular link between the different brain components that support lasting memories.

The results suggest “we are using our own DNA as a signaling system,” Radulovic told Nature, so that we can “retain information over a long time.”

Lots of questions remain. Does DNA damage predispose certain neurons to the formation of memory-encoding networks? And perhaps more pressing: Inflammation is often associated with neurodegenerative disorders, such as Alzheimer’s disease. TLR9, which helped the mice remember dangerous chambers in this study, was previously implicated in triggering dementia when expressed in microglia, the brain’s immune cells.

“How is it that, in neurons, activation of TLR9 is crucial for memory formation, whereas, in microglia, it produces neurodegeneration—the antithesis of memory?” asked Kelvington and Abel. “What separates detrimental DNA damage and inflammation from that which is essential for memory?”

Image Credit: geralt / Pixabay

This Week’s Awesome Tech Stories From Around the Web (Through April 6)

COMPUTING

To Build a Better AI Supercomputer, Let There Be Light
Will Knight | Wired
“Lightmatter wants to directly connect hundreds of thousands or even millions of GPUs—those silicon chips that are crucial to AI training—using optical links. Reducing the conversion bottleneck should allow data to move between chips at much higher speeds than is possible today, potentially enabling distributed AI supercomputers of extraordinary scale.”

ROBOTICS

Apple Has Been Secretly Building Home Robots That Could End up as a New Product Line, Report Says
Aaron Mok | Business Insider
“Apple is in the early stages of looking into making home robots, a move that appears to be an effort to create its ‘next big thing’ after it killed its self-driving car project earlier this year, sources familiar with the matter told Bloomberg. Engineers are looking into developing a robot that could follow users around their houses, Bloomberg reported. They’re also exploring a tabletop at-home device that uses robotics to rotate the display, a more advanced project than the mobile robot.”

SPACE

A Tantalizing ‘Hint’ That Astronomers Got Dark Energy All Wrong
Dennis Overbye | The New York Times
“On Thursday, astronomers who are conducting what they describe as the biggest and most precise survey yet of the history of the universe announced that they might have discovered a major flaw in their understanding of dark energy, the mysterious force that is speeding up the expansion of the cosmos. Dark energy was assumed to be a constant force in the universe, both currently and throughout cosmic history. But the new data suggest that it may be more changeable, growing stronger or weaker over time, reversing or even fading away.”

COMPUTING

How ASML Took Over the Chipmaking Chessboard
Mat Honan and James O’Donnell | MIT Technology Review
“When asked what he thought might eventually cause Moore’s Law to finally stall out, van den Brink rejected the premise entirely. ‘There’s no reason to believe this will stop. You won’t get the answer from me where it will end,’ he said. ‘It will end when we’re running out of ideas where the value we create with all this will not balance with the cost it will take. Then it will end. And not by the lack of ideas.'”

TRANSPORTATION

The Very First Jet Suit Grand Prix Takes Off in Dubai
Mike Hanlon | New Atlas
“A new sport kicked away this month when the first ever jet-suit race was held in Dubai. Each racer wore an array of seven 130-hp jet engines (two on each arm and three in the backpack for a total 1,050 hp) that are controlled by hand-throttles. After that, the pilots use the three thrust vectors to gain lift, move forward and try to stay above ground level while negotiating the course…faster than anyone else.”

ROBOTICS

Toyota’s Bubble-ized Humanoid Grasps With Its Whole Body
Evan Ackerman | IEEE Spectrum
“Many of those motions look very human-like, because this is how humans manipulate things. Not to throw too much shade at all those humanoid warehouse robots, but as is pointed out in the video above, using just our hands outstretched in front of us to lift things is not how humans do it, because using other parts of our bodies to provide extra support makes lifting easier.”

FUTURE

‘A Brief History of the Future’ Offers a Hopeful Antidote to Cynical Tech Takes
Devin Coldewey | TechCrunch
“The future, he said, isn’t just what a Silicon Valley publicist tells you, or what ‘Big Dystopia’ warns you of, or even what a TechCrunch writer predicts. In the six-episode series, he talks with dozens of individuals, companies and communities about how they’re working to improve and secure a future they may never see. From mushroom leather to ocean cleanup to death doulas, Wallach finds people who see the same scary future we do but are choosing to do something about it, even if that thing seems hopelessly small or naïve.”

TECH

This AI Startup Wants You to Talk to Houses, Cars, and Factories
Steven Levy | Wired
“We’ve all been astonished at how chatbots seem to understand the world. But what if they were truly connected to the real world? What if the dataset behind the chat interface was physical reality itself, captured in real time by interpreting the input of billions of sensors sprinkled around the globe? That’s the idea behind Archetype AI, an ambitious startup launching today. As cofounder and CEO Ivan Poupyrev puts it, ‘Think of ChatGPT, but for physical reality.'”

FUTURE

How One Tech Skeptic Decided AI Might Benefit the Middle Class
Steve Lohr | The New York Times
“David Autor seems an unlikely AI optimist. The labor economist at the Massachusetts Institute of Technology is best known for his in-depth studies showing how much technology and trade have eroded the incomes of millions of American workers over the years. But Mr. Autor is now making the case that the new wave of technology—generative artificial intelligence, which can produce hyper-realistic images and video and convincingly imitate humans’ voices and writing—could reverse that trend.”

Image Credit: Harole Ethan / Unsplash

Life’s Origins: How Fissures in Hot Rocks May Have Kickstarted Biochemistry

0

How did the building blocks of life originate?

The question has long vexed scientists. Early Earth was dotted with pools of water rich in chemicals—a primordial soup. Somehow, the biomolecules that support life emerged from these mixtures, setting the stage for the appearance of the first cells.

Life was kickstarted when two kinds of components formed. One was a molecular carrier, such as DNA, to pass along and remix genetic blueprints. The other was proteins, the workhorses and structural elements of the body.

Both biomolecules are highly complex. In humans, DNA has four different chemical “letters,” called nucleotides, whereas proteins are made of 20 types of amino acids. The components have distinct structures, and their creation requires slightly different chemistries. They also need to accumulate in large enough amounts to be strung together into DNA or proteins.

Scientists can purify the components in the lab using additives. But that raises the question: How did it happen on early Earth?

The answer, suggests Dr. Christof Mast, a researcher at Ludwig Maximilians University of Munich, may be cracks in rocks like those found in the volcanoes and geothermal systems that were abundant on early Earth. It’s possible that temperature differences along the cracks naturally separate and concentrate biomolecule components, providing a passive system for purifying them.

Inspired by geology, the team developed heat flow chambers roughly the size of a bank card, each containing minuscule fractures with a temperature gradient. When given a mixture of amino acids or nucleotides—a “prebiotic mix”—the components readily separated.

Adding more chambers further concentrated the chemicals, even those that were similar in structure. The network of fractures also enabled amino acids to bond, the first step towards creating a functional protein.

“Systems of interconnected thin fractures and cracks…are thought to be ubiquitous in volcanic and geothermal environments,” wrote the team. By enriching the prebiotic chemicals, such systems could have “provided a steady driving force for a natural origins-of-life laboratory.”

Brewing Life

Around four billion years ago, Earth was a hostile environment, pummeled by meteorites and rife with volcanic eruptions. Yet somehow among the chaos, chemistry generated the first amino acids, nucleotides, fatty lipids, and other building blocks that support life.

Which chemical processes contributed to these molecules is up for debate. When each came along is also a conundrum. Like a “chicken or egg” problem, DNA and RNA direct the creation of proteins in cells—but both genetic carriers also require proteins to replicate.

One theory suggests sulfidic anions, molecules that were abundant in early Earth’s lakes and rivers, could be the link. Generated in volcanic eruptions, once dissolved into pools of water they can speed up chemical reactions that convert prebiotic molecules into RNA. Dubbed the “RNA world” hypothesis, the idea suggests that RNA was the first biomolecule to grace Earth because it can both carry genetic information and speed up some chemical reactions.

Another idea is that meteor impacts on early Earth generated nucleotides, lipids, and amino acids simultaneously through a process involving two abundant chemicals—one from meteors and another from Earth—and a dash of UV light.

But there’s one problem: Each set of building blocks requires a different chemical reaction. Given slight differences in structure or chemistry, one geographic location might have skewed towards one type of prebiotic molecule over another.

How? The new study, published in Nature, offers an answer.

Tunnel Networks

Lab experiments mimicking early Earth usually start with well-defined ingredients that have already been purified. Scientists also clean up intermediate side-products, especially for multiple chemical reaction steps.

The process often results in “vanishingly small concentrations of the desired product,” or its creation can even be completely inhibited, wrote the team. The reactions also require multiple spatially separated chambers, which hardly resembles Earth’s natural environment.

The new study took inspiration from geology. Early Earth had complex networks of water-filled cracks found in a variety of rocks in volcanoes and geothermal systems. The cracks, generated by overheating rocks, formed natural “straws” that could potentially filter a complex mix of molecules using a heat gradient.

Each molecule has a preferred temperature based on its size and electrical charge. When exposed to a range of temperatures, it naturally migrates towards its favored zone. Called thermophoresis, the process separates a soup of ingredients into multiple distinct layers in one step.

The team mimicked a single thin rock fracture using a heat flow chamber. Roughly the size of a bank card, the chamber had tiny cracks 170 micrometers across, about the width of a human hair. To create a temperature gradient, one side of the chamber was heated to 104 degrees Fahrenheit and the other end chilled to 77 degrees Fahrenheit.

In a first test, the team added a mix of prebiotic compounds that included amino acids and DNA nucleotides into the chamber. After 18 hours, the components separated into layers like tiramisu. For example, glycine—the smallest of amino acids—became concentrated towards the top, whereas other amino acids with higher thermophoretic strength stuck to the bottom. Similarly, DNA letters and other life-sustaining chemicals also separated in the cracks, with some enriched by up to 45 percent.

Although promising, the system didn’t resemble early Earth, which had highly interconnected cracks varying in size. To better mimic natural conditions, the team next strung together three chambers, with the first branching into two others. This setup was roughly 23 times more efficient at enriching prebiotic chemicals than a single chamber.

Using a computer simulation, the team then modeled the behavior of a 20-by-20 interlinked chamber system, using a realistic flow rate of prebiotic chemicals. The chambers further enriched the brew, with glycine enriched over 2,000 times more than other amino acids.
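To get a feel for why linking chambers pays off, here’s a minimal sketch of the compounding arithmetic. It is not the team’s model; the per-chamber factor and chamber count are invented placeholders, not values from the paper.

```python
# Illustrative sketch: modest per-chamber enrichment compounds across a
# chain of heat-flow chambers. The 1.5x factor below is invented for
# illustration; it is not a value reported in the Nature paper.

def cascade_enrichment(per_chamber_factor: float, n_chambers: int) -> float:
    """Overall enrichment after n linked chambers, assuming each chamber
    multiplies a molecule's relative concentration by the same factor."""
    return per_chamber_factor ** n_chambers

# A molecule enriched 1.5x per chamber, passed through a 20-chamber chain:
print(cascade_enrichment(1.5, 20))  # ~3325x overall
```

The point is that even weak per-chamber separation, repeated across an interconnected network, can plausibly produce the thousand-fold differences the simulation reported.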

Chemical Reactions

Cleaner ingredients are a great start for the formation of complex molecules. But lots of chemical reactions require additional chemicals, which also need to be enriched. Here, the team zeroed in on a reaction stitching two glycine molecules together.

At the heart of this reaction is trimetaphosphate (TMP), which helps guide it. TMP is especially interesting for prebiotic chemistry, but it was scarce on early Earth, explained the team, which “makes its selective enrichment critical.” A single chamber increased TMP levels when it was mixed with other chemicals.

In a computer simulation, enriching a TMP and glycine mix boosted the final product—two glycine molecules stitched together—by five orders of magnitude.

“These results show that otherwise challenging prebiotic reactions are massively boosted” with heat flows that selectively enrich chemicals in different regions, wrote the team.

In all, they tested over 50 prebiotic molecules and found the fractures readily separated them. Because each crack can have a different mix of molecules, it could explain the rise of multiple life-sustaining building blocks.

Still, how life’s building blocks came together to form organisms remains mysterious. Heat flows and rock fissures are likely just one piece of the puzzle. The ultimate test will be to see if, and how, these purified prebiotics link up to form a cell.

Image Credit: Christof B. Mast

Quantum Computers Take a Major Step With Error Correction Breakthrough


For quantum computers to go from research curiosities to practically useful devices, researchers need to get their errors under control. New research from Microsoft and Quantinuum has now taken a major step in that direction.

Today’s quantum computers are stuck firmly in the “noisy intermediate-scale quantum” (NISQ) era. While companies have had some success stringing large numbers of qubits together, they are highly susceptible to noise, which can quickly degrade their quantum states. This makes it impossible to carry out computations with enough steps to be practically useful.

While some have claimed that these noisy devices could still be put to practical use, the consensus is that quantum error correction schemes will be vital for the full potential of the technology to be realized. But error correction is difficult in quantum computers because reading the quantum state of a qubit causes it to collapse.

Researchers have devised ways to get around this using error correction codes that spread each bit of quantum information across multiple physical qubits to create what is known as a logical qubit. This provides redundancy and makes it possible to detect and correct errors in the physical qubits without impacting the information in the logical qubit.
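The quantum details are subtle (qubits can’t simply be copied or measured without disturbance), but the core intuition behind redundancy can be seen in a classical sketch: spread one bit over several noisy carriers and take a majority vote. The numbers below are illustrative only.

```python
from math import comb

# Classical analogy only: this is NOT a quantum error-correcting code,
# just an illustration of why spreading information across many noisy
# carriers suppresses the "logical" error rate.

def majority_vote_error(p: float, n: int) -> float:
    """Probability a majority vote over n copies is wrong, where each
    copy is independently corrupted with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 7):
    print(n, majority_vote_error(0.01, n))
# 1 0.01
# 3 ~3.0e-4
# 5 ~9.9e-6
# 7 ~3.4e-7
```

Real quantum codes achieve something similar by measuring error “syndromes” rather than the encoded state itself, which is why the logical qubit can survive the correction process.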

The challenge is that, until recently, it was assumed it could take roughly 1,000 physical qubits to create each logical qubit. Today’s largest quantum processors only have around that many qubits, suggesting that creating enough logical qubits for meaningful computations was still a distant goal.

That changed last year when researchers from Harvard and startup QuEra showed they could generate 48 logical qubits from just 280 physical ones. And now the collaboration between Microsoft and Quantinuum has gone a step further by showing that they can not only create logical qubits but can actually use them to suppress error rates by a factor of 800 and carry out more than 14,000 experimental routines without a single error.

“What we did here gives me goosebumps,” Microsoft’s Krysta Svore told New Scientist. “We have shown that error correction is repeatable, it is working, and it is reliable.”

The researchers were working with Quantinuum’s H2 quantum processor, which relies on trapped-ion technology and is relatively small at just 32 qubits. But by applying error correction codes developed by Microsoft, they were able to generate four logical qubits that only experienced an error every 100,000 runs.

One of the biggest achievements, the Microsoft team notes in a blog post, was the fact that they were able to diagnose and correct errors without destroying the logical qubits. This is thanks to an approach known as “active syndrome extraction” which is able to read information about the nature of the noise impacting qubits, rather than their state, Svore told IEEE Spectrum.

However, the error correction scheme had a shelf life. When the researchers carried out multiple rounds of operations on a logical qubit, each followed by error correction, they found that by the second round the error rates were only half those of the physical qubits, and by the third round there was no statistically significant improvement.

And impressive as the results are, the Microsoft team points out in their blog post that creating truly powerful quantum computers will require logical qubits that make errors only once every 100 million operations.

Regardless, the result marks a massive jump in capabilities for error correction, which Quantinuum claimed in a press release represents the beginning of a new era in quantum computing. While that might be jumping the gun slightly, it certainly suggests that people’s timelines for when we will achieve fault-tolerant quantum computing may need to be updated.

Image Credit: Quantinuum H2 quantum computer / Quantinuum

Environmental DNA Is Everywhere. Scientists Are Gathering It All.


In the late 1980s, at a federal research facility in Pensacola, Florida, Tamar Barkay used mud in a way she could never have imagined at the time would prove revolutionary: a crude version of a technique that is now shaking up many scientific fields. Barkay had collected several samples of mud—one from an inland reservoir, another from a brackish bayou, and a third from a low-lying saltwater swamp. She put these sediment samples in glass bottles in the lab, and then added mercury, creating what amounted to toxic sludge.

At the time, Barkay worked for the Environmental Protection Agency, and she wanted to know how microorganisms in mud interact with mercury, an industrial pollutant. Answering that required an understanding of all the organisms in a given environment—not just the tiny portion that could be successfully grown in petri dishes in the lab. But the underlying question was so basic that it remains one of those fundamental driving queries across biology. As Barkay, who is now retired, put it in a recent interview from Boulder, Colorado: “Who is there?” And, just as important, she added: “What are they doing there?”

Such questions are still relevant today, asked by ecologists, public health officials, conservation biologists, forensic practitioners, and those studying evolution and ancient environments—and they drive shoe-leather epidemiologists and biologists to far-flung corners of the world.

The 1987 paper Barkay and her colleagues published in the Journal of Microbiological Methods outlined a method—“Direct Environmental DNA Extraction”—that would allow researchers to take a census. It was a practical tool, albeit a rather messy one, for detecting who was out there. Barkay used it for the rest of her career.

Today, the study gets cited as an early glimpse of eDNA, or environmental DNA, a relatively inexpensive, widespread, potentially automated way to observe the diversity and distribution of life. Unlike previous techniques, which could identify DNA from, say, a single organism, the method also collects the swirling cloud of other genetic material that surrounds it. In recent years, the field has grown significantly. “It’s got its own journal,” said Eske Willerslev, an evolutionary geneticist at the University of Copenhagen. “It’s got its own society, scientific society. It has become an established field.”


eDNA serves as a surveillance tool, offering researchers a means of detecting the seemingly undetectable. By sampling eDNA, or mixtures of genetic material—that is, fragments of DNA, the blueprint of life—in water, soil, ice cores, cotton swabs, or practically any environment imaginable, even thin air, it is now possible to search for a specific organism or assemble a snapshot of all the organisms in a given place. Instead of setting up a camera to see who crosses the beach at night, eDNA pulls that information out of footprints in the sand. “We’re all flaky, right?” said Robert Hanner, a biologist at the University of Guelph in Canada. “There’s bits of cellular debris sloughing off all the time.”

As a method for confirming the presence of something, eDNA isn’t foolproof. For instance, the organism detected in eDNA might not actually live in the location where the sample was collected; Hanner gave the example of a passing bird, a heron, that ate a salamander and then pooped out some of its DNA, which could be one reason signals of the amphibian are present in some areas where it has never been physically found.

Still, eDNA has the ability to help sleuth out genetic traces, some of which slough off in the environment, offering a thrilling—and potentially chilling—way to collect information about organisms, including humans, as they go about their everyday business.

The conceptual basis for eDNA—pronounced EE-DEE-EN-AY, not ED-NUH—dates back a hundred years, before the advent of so-called molecular biology, and it is often attributed to Edmond Locard, a French criminologist working in the early 20th century. In a series of papers published in 1929, Locard proposed a principle: Every contact leaves a trace. In essence, eDNA brings Locard’s principle to the 21st century.

For the first several decades, the field that became eDNA—Barkay’s work in the 1980s included—focused largely on microbial life. Looking back at its evolution, eDNA appeared slow to claw its way out of the proverbial mud.

It wasn’t until 2003 that the method turned up a vanished ecosystem. Led by Willerslev, the 2003 study pulled ancient DNA from less than a teaspoon of sediment, demonstrating for the first time the feasibility of detecting larger organisms with the technique, including plants and woolly mammoths. In the same study, sediment collected in a New Zealand cave (which notably had not been frozen) revealed an extinct bird: the moa. What is perhaps most remarkable is that these applications for studying ancient DNA stemmed from a prodigious amount of dung dropped on the ground hundreds of thousands of years ago.

Willerslev had first come up with the idea a few years earlier while contemplating a more recent pile of dung: In between his master’s degree and Ph.D. in Copenhagen, he found himself at loose ends, struggling to obtain bones, skeletal remains, or other physical specimens to study. But one autumn, he gazed out the window at “a dog taking a crap on the street,” he recalled. The scene prompted him to think about the DNA in feces, and how it washed away with rain, leaving no visible trace. But Willerslev wondered, “‘Could it be that the DNA could survive?’ That’s what I then set up to try to find out.”

The paper demonstrated the remarkable persistence of DNA, which, he said, survives in the environment for much longer than previous estimates suggested. Willerslev has since analyzed eDNA in frozen tundra in modern-day Greenland dating back 2 million years, and he is working on samples from Angkor Wat, the enormous temple complex in Cambodia believed to have been built in the 12th century. “It should be the worst DNA preservation you can imagine,” he said. “I mean, it’s hot and humid.”

But, he said, “we can get DNA out.”


Willerslev is now hardly alone in seeing a potential tool with seemingly limitless applications—especially now as advances enable researchers to sequence and analyze larger quantities of genetic information. “It’s an open window for many, many things,” he said, “and much more than I can think of, I’m sure.” It was not just ancient mammoths; eDNA could reveal present-day organisms hiding in our midst.

Scientists use eDNA to track creatures of all shapes and sizes, be it a single species, such as tiny bits of invasive algae, eels in Loch Ness, or a sightless sand-dwelling mole that hasn’t been seen in nearly 90 years; researchers sample entire communities, say, by looking at the eDNA found on wildflower blossoms or the eDNA blowing in the wind as a proxy for all the visiting birds and bees and other animal pollinators.

The next evolutionary leap forward in eDNA’s history took shape around the search for organisms currently living in earth’s aquatic environments. In 2008, a headline appeared: “Water retains DNA memory of hidden species.” It came not from a supermarket tabloid but from the respected trade publication Chemistry World, describing work by French researcher Pierre Taberlet and his colleagues. The group sought out brown-and-green bullfrogs, which can weigh more than 2 pounds and, because they mow down everything in their path, are considered an invasive species in western Europe. Finding bullfrogs usually involved skilled herpetologists scanning shorelines with binoculars who then returned after sunset to listen for their calls. The 2008 paper suggested an easier way—a survey requiring far fewer personnel.

“You could get DNA from that species directly out of the water,” said Philip Thomsen, a biologist at Aarhus University (who was not involved in the study). “And that really kickstarted the field of environmental DNA.”

Frogs can be hard to detect, and they are not, of course, the only species that eludes more traditional, boots-on-the-ground detection. Thomsen began work on another organism that notoriously confounds measurement: fish. Counting fish is sometimes said to vaguely resemble counting trees—except they’re free-roaming, in dark places, and fish counters are doing their tally while blindfolded. Environmental DNA dropped the blindfold. One review of published literature on the technology—though it came with caveats, including imperfect and imprecise detections or details on abundance—found that eDNA studies on freshwater and marine fish and amphibians outnumbered terrestrial counterparts 7:1.

In 2011, Thomsen, then a Ph.D. candidate in Willerslev’s lab, published a paper demonstrating that the method could detect rare and threatened species, such as those in low abundance in Europe, including amphibians, mammals like the otter, crustaceans, and dragonflies. “We showed that only, like, a shot glass of water really was enough to detect these organisms,” he told Undark. It was clear: The method had direct applications in conservation biology for the detection and monitoring of species.

In 2012, the journal Molecular Ecology published a special issue on eDNA, and Taberlet and several colleagues outlined a working definition of eDNA as any DNA isolated from environmental samples. The definition covered two similar but slightly different approaches. One answers a yes-or-no question: Is the bullfrog (or whatever species) present or not? It does so by scanning a metaphoric barcode, short sequences of DNA that are particular to a species or family, targeted using snippets called primers; the checkout scanner is a common technique called quantitative real-time polymerase chain reaction, or qPCR.
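Conceptually, the single-species approach boils down to searching mixed genetic material for a short species-specific sequence. Here’s a toy sketch of that presence-or-absence logic; the sequences are invented, and a real qPCR assay involves primer pairs, amplification cycles, and fluorescence thresholds, none of which are modeled here.

```python
# Toy presence/absence check in the spirit of single-species eDNA assays.
# The barcode and reads are invented for illustration only.

BULLFROG_BARCODE = "ATGCGTACGTTAG"  # hypothetical species-specific marker

def species_present(reads: list[str], barcode: str) -> bool:
    """Return True if any DNA fragment in the sample contains the barcode."""
    return any(barcode in read for read in reads)

water_sample = [
    "TTATGCGTACGTTAGCCA",  # contains the marker
    "GGCCTTAGACCTG",       # does not
]
print(species_present(water_sample, BULLFROG_BARCODE))  # True
```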


Another approach, commonly known as DNA metabarcoding, essentially spits out a list of organisms present in a given sample. “You sort of ask the question, what is here?” Thomsen said. “And then you get all of the known things, but you also get some surprises, right? Because there were some species that you didn’t know were actually present.”
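Metabarcoding, by contrast, checks a sample against many reference barcodes at once and reports everything it finds. A toy sketch, under the same invented-sequence caveat as above:

```python
# Toy metabarcoding sketch: match each fragment in a sample against a
# reference table of (invented) barcodes and report all species found.

REFERENCE_BARCODES = {
    "ATGCGTACGTTAG": "bullfrog",
    "CCGTTAACGGAT": "otter",
    "TTGACCGGTTCA": "dragonfly",
}

def metabarcode(reads: list[str]) -> set[str]:
    """Return the set of species whose barcode appears in any read."""
    found = set()
    for read in reads:
        for barcode, species in REFERENCE_BARCODES.items():
            if barcode in read:
                found.add(species)
    return found

sample = ["AATTGACCGGTTCAGG", "TTATGCGTACGTTAGCCA"]
print(metabarcode(sample))  # {'bullfrog', 'dragonfly'} (order may vary)
```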

One aims to find the needle in a haystack; the other attempts to reveal the whole haystack. Either way, eDNA differs from more traditional sampling techniques, in which organisms, like fish, are caught, manipulated, stressed, and sometimes killed. The data obtained are objective, standardized, and unbiased.

“eDNA, one way or the other, is going to stay as one of the important methodologies in biological sciences,” said Mehrdad Hajibabaei, a molecular biologist at University of Guelph, who pioneered the metabarcoding approach, and who traced fish some 9,800 feet under the Labrador Sea. “Every day I see something bubbling up that didn’t occur to me.”

In recent years, the field of eDNA has expanded. The method’s sensitivity allows researchers to sample previously out-of-reach environments, for example, capturing eDNA from the air—an approach that highlights eDNA’s promises and its potential pitfalls. Airborne eDNA appears to circulate on a global dust belt, suggesting its abundance and omnipresence, and it can be filtered and analyzed to monitor plants and terrestrial animals. But eDNA blowing in the wind can lead to inadvertent contamination.

In 2019, Thomsen, for instance, left two bottles of ultra-pure water out in the open—one in a grassland, and the other near a marine harbor. After a few hours, the water contained detectable eDNA associated with birds and herring, traces of species that obviously did not inhabit the bottles. “So it must come from the air,” Thomsen told Undark. The results suggest a twofold problem: For one, trace evidence can move around, as organisms that come into contact can tote around each other’s DNA. For another, just because certain DNA is present doesn’t mean the species is actually there.

Moreover, there’s also no guarantee that the presence of eDNA indicates that a species is alive, and field surveys are still needed, he said, to understand a species’ breeding success, its health, or the status of its habitat. So far, then, eDNA does not necessarily replace physical observations or collections. In another study, in which Thomsen’s group collected eDNA on flowers to look for pollinating birds, more than half of the eDNA reported in the paper came from humans, contamination that potentially muddied the results and made it harder to detect the pollinators in question.

Similarly, in May 2023, a University of Florida team that had previously studied sea turtles via the eDNA traces they leave as they crawl along the beach published a paper that turned up human DNA. The samples were intact enough to detect key mutations that might someday be used to identify individual people, meaning this kind of biological surveillance also raises unanswered questions about ethical testing on humans and informed consent. If eDNA serves as a seine net, it indiscriminately sweeps up information about biodiversity and inevitably ends up with, as the UF team’s paper put it, “human genetic by-catch.”

While the privacy issues around footprints in the sand, so far, appear to exist mostly in the realm of the hypothetical, the use of eDNA in litigation relating to wildlife is not only possible but already a reality. It’s also being used in criminal investigations: In 2021, for instance, a group of Chinese researchers reported that eDNA collected off a suspected murderer’s pants had, contrary to his claims, revealed that he’d likely been to the muddy canal where a dead body had been found.

The concerns about off-target eDNA, in terms of accuracy and its reach into human medicine and forensics, highlight another, much broader, shortcoming. As Hanner at the University of Guelph described the problem: “Our regulatory frameworks and policy tend to lag at least a decade or more behind the science.”


Today, there are countless potential regulatory applications: water quality monitoring, evaluating environmental impact (from offshore wind farms and oil and gas drilling to more run-of-the-mill strip mall development), species management, and enforcement of the Endangered Species Act. In a civil court case filed in 2021, the US Fish and Wildlife Service evaluated whether an imperiled fish existed in a particular watershed, using eDNA and more traditional sampling, and found that it did not. The courts said the agency’s lack of protections for that watershed was justified. The issue does not seem to be whether eDNA stood up in court; it did. “But you really can’t say that something does not exist in an environment,” said Hajibabaei.

He recently highlighted the issue of validation: eDNA infers a result but needs more established criteria for confirming that these results are actually true (that an organism is actually present or absent, or in a certain quantity). At a series of special meetings, scientists have worked to address these issues of standardization, which he said include protocols, chain of custody, and criteria for data generation and analysis. In a review of eDNA studies, Hajibabaei and his colleagues found that the field is saturated with one-offs, or proof-of-concept studies attempting to show that eDNA analyses work. Research remains overwhelmingly siloed in academia.

As such, practitioners hoping to use eDNA in applied contexts sometimes ask for the moon. Does the species exist in a certain location? For instance, Hajibabaei said, someone recently asked him if he could totally refute the presence of a parasite, proving that it had not appeared in an aquaculture farm. “And I say, ‘Look, there is no way that I can say that is 100 percent.’”

Even with a rigorous analytic framework, he said, the issues with false negatives and false positives are particularly difficult to resolve without doing one of the things eDNA obviates—more traditional collection and manual inspection. Despite the limitations, a handful of companies are already starting to commercialize the technique. For instance, future applications could help a company confirm whether the bridge it is building will harm any locally endangered animals, help an aquaculture outfit determine if the waters where it farms its fish are infested with sea lice, or help a landowner find out whether new plantings are attracting a wider range of native bees.

The problem is rather fundamental given eDNA’s reputation as an indirect way of detecting the undetectable—or as a workaround in contexts where it’s simply not possible to dip a net and catch all the organisms in the sea.

“It is very hard to validate some of these scenarios,” Hajibabaei said. “And that’s basically the nature of the beast.”

eDNA opens up a lot of possibilities, answering a question originally posed by Barkay (and no doubt many others): “Who is there?” But increasingly it’s providing hints that get at the “What are they doing there?” question, too. Elizabeth Clare, a professor of biology at York University in Toronto, studies biodiversity. She said she has observed bats roosting in one spot during the day, but, by collecting airborne eDNA, she could also infer where bats socialize at night. In another study, domesticated dog eDNA turned up in red fox scat. The two canids did not appear to be interbreeding, but researchers did wonder if their closeness had led to confusion, or cross-contamination, before ultimately settling on another explanation: Foxes apparently ate dog poop.

So while eDNA does not inherently reveal animal behavior, by some accounts the field is making strides towards providing clues as to what an organism might be doing, and how it’s interacting with other species, in a given environment—gleaning information about health without directly observing behavior.

Take another possibility: large-scale biomonitoring. Indeed, for the last three years, more people than ever before have participated in a bold experiment that is already up and running: the collection of environmental samples from public sewers to track viral Covid-19 particles and other pathogens that infect humans. Technically, wastewater sampling involves a related approach called eRNA, because some viruses only have genetic information stored in the form of RNA, rather than DNA. Still, the same principles apply. (Studies also suggest RNA, which reflects the proteins an organism is expressing, could be used to assess ecosystem health; organisms that are healthy may express entirely different proteins compared to those that are stressed.) In addition to monitoring the prevalence of diseases, wastewater surveillance demonstrates how an existing infrastructure designed to do one thing—sewers were built to collect waste—could be fashioned into a powerful tool for studying something else, like detecting pathogens.

Clare has a habit of doing just that. “I personally am one of those people who tends to use tools—not the way they were intended,” she said. Clare was among the researchers who noticed a gap in the research: There was a lot less eDNA work done on terrestrial organisms. So, she began working with what might be called a natural filter: leeches, which suck blood from mammals. “It’s a lot easier to collect 1,000 leeches than it is to find the animals. But they have blood-meals inside them and the blood carries the DNA of the animals they interacted with,” she said. “It’s like having a bunch of field assistants out surveying for you.” Then one of her students had the same idea for dung beetles, which are even easier to collect.

Clare is now spearheading a new application for another continuous monitoring system—leveraging existing air-quality monitors that measure pollutants, such as fine particulate matter, while also simultaneously vacuuming eDNA out of the sky. In late 2023, she only had a small sample set, but had already found that, as a byproduct of routine air quality monitoring, these preexisting tools doubled as filters for the material she is after. It was, more or less, a regulated, transcontinental network collecting samples in a very consistent way over long periods of time. “You could then use it to build up time series and high-resolution data on entire continents,” she said.

In the UK alone, Clare said, there are an estimated 150 different sites sucking a known quantity of air, every week, all year long, which amounts to some 8,000 measurements a year. Clare and her co-authors recently analyzed a tiny subset of these—17 measurements from two locations—and were able to identify more than 180 different taxonomic groups, including more than 80 different kinds of plants and fungi, 26 different species of mammal, 34 different species of birds, plus at least 35 kinds of insects.

Certainly, other long-term ecological research sites exist. The US has a network of such facilities. But their scope of study does not include a globally distributed infrastructure that measures biodiversity constantly—from the passage of migrating birds overhead to the expansion and contraction of species’ ranges with climate change. Arguably, eDNA will likely complement, rather than supplant, the distributed network of people who record real-time, high-resolution, spatiotemporal observations on websites such as eBird or iNaturalist. Like a fuzzy image of an entirely new galaxy coming into view, the current resolution remains low.

“It’s sort of a generalized collection system, which is pretty much unheard of in biodiversity science,” said Clare. She was referring to the capacity to pull eDNA signals out of thin air, but the sentiment spoke to the method as a whole: “It’s not perfect,” she said, “but there’s nothing else that really does that.”

This article was originally published on Undark. Read the original article.

Image Credit: Undark + DALL-E

This Robot Predicts When You’ll Smile—Then Grins Back Right on Cue


Comedy clubs are my favorite weekend outings. Rally some friends, grab a few drinks, and when a joke lands for us all—there’s a magical moment when our eyes meet, and we share a cheeky grin.

Smiling can turn strangers into the dearest of friends. It spurs meet-cute Hollywood plots, repairs broken relationships, and is inextricably linked to fuzzy, warm feelings of joy.

At least for people. For robots, their attempts at genuine smiles often fall into the uncanny valley—close enough to resemble a human, but causing a touch of unease. Logically, you know what they’re trying to do. But gut feelings tell you something’s not right.

It may be because of timing. Robots are trained to mimic the facial expression of a smile, but they don’t know when to turn the grin on. When humans connect, we genuinely smile in tandem without any conscious planning. Robots take time to analyze a person’s facial expressions before reproducing a grin. To a human, even milliseconds of delay raise hairs on the back of the neck—like a horror movie, something feels manipulative and wrong.

Last week, a team at Columbia University showed off an algorithm that teaches robots to share a smile with their human operators. The AI analyzes slight facial changes to predict its operators’ expressions about 800 milliseconds before they happen—just enough time for the robot to grin back.

The team trained a soft robotic humanoid face called Emo to anticipate and match the expressions of its human companion. With a silicone face tinted in blue, Emo looks like a 60s science fiction alien. But it readily grinned along with its human partner on the same “emotional” wavelength.

Humanoid robots are often clunky and stilted when communicating with humans, wrote Dr. Rachael Jack at the University of Glasgow, who was not involved in the study. ChatGPT and other large language algorithms can already make an AI’s speech sound human, but non-verbal communications are hard to replicate.

Programming social skills—at least for facial expression—into physical robots is a first step toward helping “social robots to join the human social world,” she wrote.

Under the Hood

From robotaxis to robo-servers that bring you food and drinks, autonomous robots are increasingly entering our lives.

In London, New York, Munich, and Seoul, autonomous robots zip through chaotic airports offering customer assistance—checking in, finding a gate, or recovering lost luggage. In Singapore, several seven-foot-tall robots with 360-degree vision roam an airport flagging potential security problems. During the pandemic, robot dogs enforced social distancing.

But robots can do more. For dangerous jobs—such as cleaning the wreckage of destroyed houses or bridges—they could pioneer rescue efforts and increase safety for first responders. With an increasingly aging global population, they could help nurses to support the elderly.

Current humanoid robots are cartoonishly adorable. But the main ingredient for robots to enter our world is trust. As scientists build robots with increasingly human-like faces, we want their expressions to match our expectations. It’s not just about mimicking a facial expression. A genuine shared “yeah I know” smile over a cringe-worthy joke forms a bond.

Non-verbal communications—expressions, hand gestures, body postures—are tools we use to express ourselves. With ChatGPT and other generative AI, machines can already “communicate in video and verbally,” said study author Dr. Hod Lipson to Science.

But when it comes to the real world—where a glance, a wink, and smile can make all the difference—it’s “a channel that’s missing right now,” said Lipson. “Smiling at the wrong time could backfire. [If even a few milliseconds too late], it feels like you’re pandering maybe.”

Say Cheese

To get robots into non-verbal action, the team focused on one aspect—a shared smile. Previous studies have pre-programmed robots to mimic a smile. But because the response isn’t spontaneous, there’s a slight but noticeable delay that makes the grin look fake.

“There’s a lot of things that go into non-verbal communication” that are hard to quantify, said Lipson. “The reason we need to say ‘cheese’ when we take a photo is because smiling on demand is actually pretty hard.”

The new study focused on timing.

The team engineered an algorithm that anticipates a person’s smile and makes a human-like animatronic face grin in tandem. Called Emo, the robotic face has 26 gears—think artificial muscles—enveloped in a stretchy silicone “skin.” Each gear is attached to the main robotic “skeleton” with magnets to move its eyebrows, eyes, mouth, and neck. Emo’s eyes have built-in cameras to record its environment and control its eyeball movements and blinking motions.

By itself, Emo can track its own facial expressions. The goal of the new study was to help it interpret others’ emotions. The team used a trick any introverted teenager might know: They asked Emo to look in the mirror to learn how to control its gears and form a perfect facial expression, such as a smile. The robot gradually learned to match its expressions with motor commands—say, “lift the cheeks.” The team then removed any programming that could potentially stretch the face too much and injure the robot’s silicone skin.

“Turns out…[making] a robot face that can smile was incredibly challenging from a mechanical point of view. It’s harder than making a robotic hand,” said Lipson. “We’re very good at spotting inauthentic smiles. So we’re very sensitive to that.”

To counteract the uncanny valley, the team trained Emo to predict facial movements using videos of humans laughing, surprised, frowning, crying, and making other expressions. Emotions are universal: When you smile, the corners of your mouth curl into a crescent moon. When you cry, the brows furrow together.

The AI analyzed facial movements of each scene frame-by-frame. By measuring distances between the eyes, mouth, and other “facial landmarks,” it found telltale signs that correspond to a particular emotion—for example, an uptick of the corner of your mouth suggests a hint of a smile, whereas a downward motion may descend into a frown.

Once trained, the AI took less than a second to recognize these facial landmarks. When powering Emo, the robot face could anticipate a smile based on human interactions within a second, so that it could grin along with its participant.
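As a rough illustration of the landmark idea, one could track the height of the mouth corners from frame to frame and treat a steady rise as a hint of an emerging smile. This is only a sketch; the coordinates are invented, and the study’s actual predictor is a trained neural network, not a hand-written rule.

```python
# Rough sketch of the facial-landmark idea: compare mouth-corner height
# across video frames to flag an emerging smile. Coordinates are invented.

Landmarks = dict[str, tuple[float, float]]  # name -> (x, y); y grows downward

def mouth_corner_lift(prev: Landmarks, curr: Landmarks) -> float:
    """Average upward movement of the two mouth corners between frames."""
    corners = ("mouth_left", "mouth_right")
    return sum(prev[c][1] - curr[c][1] for c in corners) / len(corners)

frame1 = {"mouth_left": (40.0, 90.0), "mouth_right": (80.0, 90.0)}
frame2 = {"mouth_left": (39.0, 86.5), "mouth_right": (81.0, 86.0)}

if mouth_corner_lift(frame1, frame2) > 2.0:  # threshold is arbitrary
    print("smile likely forming")  # cue the robot to begin grinning
```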

To be clear, the AI doesn’t “feel.” Rather, it behaves as a human would when chuckling to a funny stand-up with a genuine-seeming smile.

Facial expressions aren’t the only cues we notice when interacting with people. Subtle head shakes, nods, raised eyebrows, or hand gestures all make a mark. Regardless of cultures, “ums,” “ahhs,” and “likes”—or their equivalents—are integrated into everyday interactions. For now, Emo is like a baby who learned how to smile. It doesn’t yet understand other contexts.

“There’s a lot more to go,” said Lipson. We’re just scratching the surface of non-verbal communications for AI. But “if you think engaging with ChatGPT is interesting, just wait until these things become physical, and all bets are off.”

Image Credit: Yuhang Hu, Columbia Engineering via YouTube

This Week’s Awesome Tech Stories From Around the Web (Through March 30)

COMPUTING

The Best Qubits for Quantum Computing Might Just Be Atoms
Philip Ball | Quanta
“In the search for the most scalable hardware to use for quantum computers, qubits made of individual atoms are having a breakout moment. …’We believe we can pack tens or even hundreds of thousands in a centimeter-scale device,’ [Mark Saffman, a physicist at the University of Wisconsin] said.”

ARTIFICIAL INTELLIGENCE

AI Chatbots Are Improving at an Even Faster Rate Than Computer Chips
Chris Stokel-Walker | New Scientist
“Besiroglu and his colleagues analyzed the performance of 231 LLMs developed between 2012 and 2023 and found that, on average, the computing power required for subsequent versions of an LLM to hit a given benchmark halved every eight months. That is far faster than Moore’s law, a computing rule of thumb coined in 1965 that suggests the number of transistors on a chip, a measure of performance, doubles every 18 to 24 months.”

FUTURE

How AI Could Explode the Economy
Dylan Matthews | Vox
“Imagine everything humans have achieved since the days when we lived in caves: wheels, writing, bronze and iron smelting, pyramids and the Great Wall, ocean-traversing ships, mechanical reaping, railroads, telegraphy, electricity, photography, film, recorded music, laundry machines, television, the internet, cellphones. Now imagine accomplishing 10 times all that—in just a quarter century. This is a very, very, very strange world we’re contemplating. It’s strange enough that it’s fair to wonder whether it’s even possible.”

DIGITAL MEDIA

What’s Next for Generative Video
Will Douglas Heaven | MIT Technology Review
“The first batch of models that could turn text into video appeared in late 2022, from companies including Meta, Google, and video-tech startup Runway. It was a neat trick, but the results were grainy, glitchy, and just a few seconds long. Fast-forward 18 months, and the best of Sora’s high-definition, photorealistic output is so stunning that some breathless observers are predicting the death of Hollywood. …As we continue to get to grips what’s ahead—good and bad—here are four things to think about.”

SENSORS

Salt-Sized Sensors Mimic the Brain
Gwendolyn Rak | IEEE Spectrum
“To gain a better understanding of the brain, why not draw inspiration from it? At least, that’s what researchers at Brown University did, by building a wireless communications system that mimics the brain using an array of tiny silicon sensors, each the size of a grain of sand. The researchers hope that the technology could one day be used in implantable brain-machine interfaces to read brain activity.”

ROBOTICS

Understanding Humanoid Robots
Brian Heater | TechCrunch
“A lot of smart people have faith in the form factor and plenty of others remain skeptical. One thing I’m confident saying, however, is that whether or not future factories will be populated with humanoid robots on a meaningful scale, all of this work will amount to something. Even the most skeptical roboticists I’ve spoken to on the subject have pointed to the NASA model, where the race to land humans on the moon led to the invention of products we use on Earth to this day.”

INTERNET

Blazing Bits Transmitted 4.5 Million Times Faster Than Broadband
Michael Franco | New Atlas
“An international research team has sent an astounding amount of data at a nearly incomprehensible speed. It’s the fastest data transmission ever using a single optical fiber and shows just how speedy the process can get using current materials.”

COMPUTING

How We’ll Reach a 1 Trillion Transistor GPU
Mark Liu and HS Philip Wong | IEEE Spectrum
“We forecast that within a decade a multichiplet GPU will have more than 1 trillion transistors. We’ll need to link all these chiplets together in a 3D stack, but fortunately, industry has been able to rapidly scale down the pitch of vertical interconnects, increasing the density of connections. And there is plenty of room for more. We see no reason why the interconnect density can’t grow by an order of magnitude, and even beyond.”

SPACE

Astronomers Watch in Real Time as Epic Supernova Potentially Births a Black Hole
Isaac Schultz | Gizmodo
“‘Calculations of the circumstellar material emitted in the explosion, as well as this material’s density and mass before and after the supernova, create a discrepancy, which makes it very likely that the missing mass ended up in a black hole that was formed in the aftermath of the explosion—something that’s usually very hard to determine,’ said study co-author Ido Irani, a researcher at the Weizmann Institute.”

ARTIFICIAL INTELLIGENCE

Large Language Models’ Emergent Abilities Are a Mirage
Stephen Ornes | Wired
“[In some tasks measured by the BIG-bench project, LLM] performance remained near zero for a while, then performance jumped. Other studies found similar leaps in ability. The authors described this as ‘breakthrough’ behavior; other researchers have likened it to a phase transition in physics, like when liquid water freezes into ice. …[But] a new paper by a trio of researchers at Stanford University posits that the sudden appearance of these abilities is just a consequence of the way researchers measure the LLM’s performance. The abilities, they argue, are neither unpredictable nor sudden.”

Image Credit: Aedrian / Unsplash

A New Treatment Rejuvenates Aging Immune Systems in Elderly Mice


Our immune system is like a well-trained brigade.

Each unit has a unique specialty. Some cells directly kill invading foes; others release protein “markers” to attract immune cell types to a target. Together, they’re a formidable force that fights off biological threats—both pathogens from outside the body and cancer or senescent “zombie” cells from within.

With age, the camaraderie breaks down. Some units flare up, causing chronic inflammation that wreaks havoc in the brain and body. These cells increase the risk of dementia and heart disease and gradually sap muscle. Other units that battle novel pathogens—such as a new strain of flu—slowly dwindle, making it harder to ward off infections.

All these cells come from a single source: a type of stem cell in bone marrow.

This week, in a study published in Nature, scientists say they restored the balance between the units in aged mice, reverting their immune systems back to a youthful state. Using an antibody, the team targeted a subpopulation of stem cells that eventually develops into the immune cells underlying chronic inflammation. The antibodies latched onto targets and rallied other immune cells to wipe them out.

In elderly mice, the one-shot treatment reinvigorated their immune systems. When challenged with a vaccine, the mice generated a stronger immune response than non-treated peers and readily fought off later viral infections.

Rejuvenating the immune system isn’t just about tackling pathogens. An aged immune system increases the risk of common age-related medical problems, such as dementia, stroke, and heart attacks.

“Eliminating the underlying drivers of aging is central to preventing several age-related diseases,” wrote stem cell scientists Drs. Yasar Arfat Kasu and Robert Signer at the University of California, San Diego, who were not involved in the study. The intervention “could thus have an outsized impact on enhancing immunity, reducing the incidence and severity of chronic inflammatory diseases and preventing blood disorders.”

Stem Cell Succession

All blood cells arise from a single source: hematopoietic stem cells, or blood stem cells, that reside in bone marrow.

Some of these stem cells eventually become “fighter” white blood cells, including killer T cells that—true to their name—directly destroy cancerous cells and infections. Others become B cells that pump out antibodies to tag invaders for elimination. This unit of the immune system is dubbed “adaptive” because it can tackle new intruders the body has never seen.

Still more blood stem cells transform into myriad other immune cell types—including those that literally eat their foes. These cells form the innate immune unit, which is present at birth and the first line of defense throughout our lifetime.

Unlike their adaptive comrades, which more precisely target invaders, the innate unit uses a “burn it all” strategy to fight off infections by increasing local inflammation. It’s a double-edged sword. While useful in youth, with age the unit becomes dominant, causing chronic inflammation that gradually damages the body.

The reason for this can be found in the immune system’s stem cell origins.

Blood stem cells come in multiple types. Some produce both immune units equally; others are biased towards the innate unit. With age, the latter gradually take over, increasing chronic inflammation while lowering protection against new pathogens. This is, in part, why elderly people are advised to get new flu shots, and why they were first in line for vaccination against Covid-19.

The new study describes a practical approach to rebalancing the aged immune system. Using an antibody-based therapy, the scientists directly obliterated the population of stem cells that lead to chronic inflammation.

Blood Bath

Like most cells, blood stem cells have a unique fingerprint—a set of proteins that dot their surfaces. A subset of these cells, dubbed my-HSCs, is more likely to produce cells of the innate immune system, which drives chronic inflammation with age.

By mining multiple gene expression datasets from blood stem cells, the team found three protein markers they could use to identify and target my-HSCs in aged mice. They then engineered an antibody to tag the cells for elimination.
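In spirit, picking out my-HSCs resembles filtering cells by a combination of surface markers. A schematic sketch follows; the marker names are placeholders, not the three proteins the study actually identified.

```python
# Schematic sketch of marker-based cell selection. Marker names are
# hypothetical placeholders, not the study's actual surface proteins.

MY_HSC_SIGNATURE = {"marker_A", "marker_B", "marker_C"}  # hypothetical

def is_target_cell(surface_markers: set[str]) -> bool:
    """Flag a cell only if it displays every marker in the signature."""
    return MY_HSC_SIGNATURE <= surface_markers

cells = [
    {"marker_A", "marker_B", "marker_C", "marker_X"},  # flagged for removal
    {"marker_A", "marker_X"},                          # spared
]
print([is_target_cell(c) for c in cells])  # [True, False]
```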

Just a week after it was infused into elderly mice, the antibody had reduced the number of my-HSCs in their bone marrow without harming other blood stem cells. A genetic screen confirmed the mice’s immune profile was more like that of young mice.

The one-shot treatment lasted “strikingly” long, wrote Kasu and Signer. A single injection reduced the troublesome stem cells for at least two months—roughly a twelfth of a mouse’s lifespan. With my-HSCs no longer dominant, healthy blood stem cells gained ground inside the bone marrow. For at least four months, the treated mice produced more cells in the adaptive immune unit than their similarly aged peers, while having less overall inflammation.

As an ultimate test, the team challenged elderly mice with a difficult virus. To beat the infection, multiple components of the adaptive immune system had to rev up and work in concert.

Some elderly mice received a vaccine and the antibody treatment. Others only received the vaccine. Those treated with the antibody mounted a larger protective immune response. When given a dose of the virus, their immune systems rapidly recruited adaptive immune cells, and fought off the infection—whereas those receiving only the vaccine struggled.

Restoring Balance

The study shows that not all blood stem cells are alike. Eliminating those that cause inflammation directly changes the biological “age” of the entire immune system, allowing it to better tackle damaging changes in the body and fight off infections.

Like a leaking garbage can, innate immune cells can dump inflammatory molecules into their neighborhood. By cleaning up the source, the antibody could have also changed the environment the cells live in, so they are better able to thrive during aging.

Additionally, the immune system is an “eye in the sky” for monitoring cancer. Reviving immune function could restore the surveillance systems needed to eliminate cancer cells. The antibody treatment here could potentially tag-team with CAR T therapy or classic anti-cancer therapies, such as chemotherapy, as a one-two punch against the disease.

But it isn’t coming to clinics soon. Barring unexpected setbacks or regulatory hiccups, the team estimates three to five years before testing in people. As a next step, they’re looking to expand the therapy to tackle other disorders related to a malfunctioning immune system.

Image Credit: Volker Brinkmann

These Plants Could Mine Valuable Metals From the Soil With Their Roots


The renewable energy transition will require a huge amount of materials, and there are fears we may soon face shortages of some critical metals. US government researchers think we could rope in plants to mine for these metals with their roots.

Green technologies like solar power and electric vehicles are being adopted at an unprecedented rate, but this is also straining the supply chains that support them. One area of particular concern includes the metals required to build batteries, wind turbines, and other advanced electronics that are powering the energy transition.

We may not be able to sustain projected growth at current rates of production of many of these minerals, such as lithium, cobalt, and nickel. Some of these metals are also sourced from countries whose mining operations raise serious human rights or geopolitical concerns.

To diversify supplies, the government research agency ARPA-E is offering $10 million in funding to explore “phytomining,” in which certain species of plants are used to extract valuable metals from the soil through their roots. The project is focusing on nickel first, a critical battery metal, but in theory, it could be expanded to other minerals.

“In order to accomplish the goals laid out by President Biden to meet our clean energy targets, and support our economy and national security, it’s going to take [an] all-hands-on-deck approach and innovative solutions,” ARPA-E director Evelyn Wang said in a press release.

“By exploring phytomining to extract nickel as the first target critical material, ARPA-E aims to achieve a cost-competitive and low-carbon footprint extraction approach needed to support the energy transition.”

The concept of phytomining has been around for a while and relies on a class of plants known as “hyperaccumulators.” These species can absorb a large amount of metal through their roots and store it in their tissues. Phytomining involves growing these plants in soils with high levels of metals, harvesting and burning the plants, and then extracting the metals from the ash.

The ARPA-E project, known as Plant HYperaccumulators TO MIne Nickel-Enriched Soils (PHYTOMINES), is focusing on nickel because there are already many hyperaccumulators known to absorb the metal. But finding, or creating, species able to economically mine the metal in North America will still be a significant challenge.

One of the primary goals of the project is to optimize the amount of nickel these plants can take in. This could involve breeding or genetically modifying plants to enhance these traits or altering the microbiome of either the plants or the surrounding soil to boost absorption.

The agency also wants to gain a better understanding of the environmental and economic factors that could determine the viability of the approach, such as the impact of soil mineral composition, the land ownership status of promising sites, and the lifetime costs of a phytomining operation.

But while the idea is still at a nebulous stage, there is considerable potential.

“In soil that contains roughly 5 percent nickel—that is pretty contaminated—you’re going to get an ash that’s about 25 to 50 percent nickel after you burn it down,” Dave McNear, a biogeochemist at the University of Kentucky, told Wired.

“In comparison, where you mine it from the ground, from rock, that has about .02 percent nickel. So you are several orders of magnitude greater in enrichment, and it has far less impurities.”
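A quick back-of-the-envelope check of those figures, using only the percentages quoted above:

```python
# Back-of-the-envelope comparison of nickel concentration in plant ash
# versus mined rock, using only the percentages quoted above.

ash_nickel = 0.25      # 25% nickel in ash (low end of the quoted range)
rock_nickel = 0.0002   # 0.02% nickel in mined rock

print(ash_nickel / rock_nickel)  # 1250.0 -> roughly three orders of magnitude
```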

Phytomining would also be much less environmentally damaging than traditional mining, and it could help remediate soil polluted with metals so they can be farmed more conventionally. While the focus is currently on nickel, the approach could be extended to other valuable metals too.

The main challenge will be finding a fast-growing plant suited to American climates. “The problem has historically been that they’re not often very productive plants,” Patrick Brown, a plant scientist at the University of California, Davis, told Wired. “And the challenge is you have to have high concentrations of nickel and high biomass to achieve a meaningful, economically viable outcome.”

Still, if researchers can square that circle, the approach could be a promising way to boost supplies of the critical minerals needed to support the transition to a greener economy.

Image Credit: Nickel hyperaccumulator Alyssum argenteum / David Stang via Wikimedia Commons

Now We Can See the Magnetic Maelstrom Around Our Galaxy’s Supermassive Black Hole


Black holes are known for ferocious gravitational fields. Anything wandering too close, even light, will be swallowed up. But other forces may be at play too.

In 2021, astronomers used the Event Horizon Telescope (EHT) to make a polarized image of the enormous black hole at the center of the galaxy M87. The image showed an organized swirl of magnetic fields threading the matter orbiting the object. M87*, as the black hole is known, is nearly 1,000 times bigger than our own galaxy’s central black hole, Sagittarius A* (Sgr A*), and is dining on the equivalent of a few suns per year. With its comparatively modest size and appetite—Sgr A* is basically fasting at the moment—scientists wondered if our galaxy’s black hole would have strong magnetic fields too.

Now, we know.

In the first polarized image of Sgr A*, released alongside two papers published today (here and here), EHT scientists say the black hole has strong magnetic fields akin to those seen in M87*. The image depicts a fiery whirlpool (the disc of material falling into Sgr A*) circling the drain (the black hole’s shadow) with magnetic field lines woven throughout.

In contrast to unpolarized light, polarized light is oriented in only one direction. Much as a pair of quality sunglasses filters light by its orientation, magnetized regions in space polarize the light passing through them. These polarized images of the two black holes therefore map out their magnetic fields.

And surprisingly, they’re similar.

Side-by-side polarized images of supermassive black holes M87* and Sagittarius A*. Image Credit: EHT Collaboration

“With a sample of two black holes—with very different masses and very different host galaxies—it’s important to determine what they agree and disagree on,” Mariafelicia De Laurentis, EHT deputy project scientist and professor at the University of Naples Federico II, said in a press release. “Since both are pointing us toward strong magnetic fields, it suggests that this may be a universal and perhaps fundamental feature of these kinds of systems.”

Making the image was no simple task. M87*’s disc is larger and moves relatively slowly, but Sgr A* is like a cosmic toddler: its material is always in motion, swirling at nearly the speed of light. The scientists had to use new tools in addition to those that yielded the polarized image of M87* and weren’t even sure the image would be possible.

Such technical feats take enormous teams of scientists organized across the globe. The first three pages of each new paper are dedicated to authors and affiliations. In addition, the EHT itself spans the world. Astronomers stitch observations made by eight telescopes into a virtual Earth-sized telescope capable of resolving objects with the apparent size of a donut on the moon as seen from Earth.
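
To put that resolving power into numbers, here is a rough small-angle estimate (the 10-centimeter donut and average Earth-moon distance are illustrative figures, not values from the papers):

```python
import math

donut_diameter = 0.10    # meters (assume a ~10 cm donut)
moon_distance = 3.844e8  # average Earth-moon distance in meters

angle_rad = donut_diameter / moon_distance          # small-angle approximation
microarcseconds = math.degrees(angle_rad) * 3600e6  # radians -> microarcseconds
print(f"{microarcseconds:.0f} microarcseconds")     # ~54 microarcseconds
```

That is in the same ballpark as the EHT’s resolution of a few tens of microarcseconds.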

The EHT team plans to make more observations—the next round for Sgr A* begins next month—and add telescopes on Earth and in space to increase the quality and breadth of the images. One outstanding question is whether Sgr A* has a jet of material shooting out from its poles like M87* does. The ability to make movies of the black hole later this decade—which should be spectacular—could resolve the mystery.

“We expect strong and ordered magnetic fields to be directly linked to the launching of jets as we observed for M87*,” Sara Issaoun, research co-leader and a fellow at Harvard & Smithsonian’s Center for Astrophysics, told Space.com. “Since Sgr A*, with no observed jet, seems to have a very similar geometry, perhaps there is also a jet lurking in Sgr A* waiting to be observed, which would be super exciting!”

The discovery of a jet, added to strong magnetic fields, would mean these features may be common to supermassive black holes across the spectrum. Learning more about their features and behavior can help scientists piece together a better picture of how galaxies, including the Milky Way, evolve over eons in tandem with the black holes at their hearts.

Image Credit: EHT Collaboration

Human Artificial Chromosomes Could Ferry Tons More DNA Cargo Into Cells


The human genetic blueprint is deceptively simple. Our DNA is tightly wound into 46 structures called chromosomes. Crafted by evolution, they carry our genes and replicate when cells divide, ensuring the stability of our genome over generations.

In 1997, a study torpedoed evolution’s playbook. For the first time, a team created an artificial human chromosome using genetic engineering. When delivered into a human cell in a petri dish, the artificial chromosome behaved much like its natural counterparts. It replicated as cells divided, leading to human cells with 47 chromosomes.

Rest assured, the goal wasn’t to artificially evolve our species. Rather, artificial chromosomes can be used to carry large chunks of human genetic material or gene editing tools into cells. Compared to current delivery systems—virus carriers or nanoparticles—artificial chromosomes can incorporate far more synthetic DNA.

In theory, they could be designed to ferry therapeutic genes into people with genetic disorders or add protective ones against cancer.

Yet despite over two decades of research, the technology has yet to enter the mainstream. One challenge is that the short DNA segments linking up to form the chromosomes stick together once inside cells, making it difficult to predict how the genes will behave.

This month, a new study from the University of Pennsylvania changed the 25-year-old recipe and built a new generation of artificial chromosomes. Compared to their predecessors, the new chromosomes are easier to engineer and use longer DNA segments that don’t clump once inside cells. They also offer far more cargo room and, in theory, could shuttle genetic material roughly the size of the largest yeast chromosome into human cells.

“Essentially, we did a complete overhaul of the old approach to HAC [human artificial chromosome] design and delivery,” study author Dr. Ben Black said in a press release.

“The work is likely to reinvigorate efforts to engineer artificial chromosomes in both animals and plants,” wrote the University of Georgia’s Dr. R. Kelly Dawe, who was not involved in the study.

Shape of You

Since 1997, artificial genomes have become an established biotechnology. They’ve been used to rewrite DNA in bacteria, yeast, and plants, resulting in cells that can synthesize life-saving medications or eat plastic. They could also help scientists better understand the functions of the mysterious DNA sequences littered throughout our genome.

The technology also brought about the first synthetic organisms. In late 2023, scientists revealed yeast cells with half their genes replaced by artificial DNA—the team hopes to eventually customize every single chromosome. Earlier this year, another study reworked parts of a plant’s chromosome, further pushing the boundaries of synthetic organisms.

And by tinkering with the structures of chromosomes—for example, chopping off suspected useless regions—we can better understand how they normally function, potentially leading to treatments for diseases.

The goal of building human artificial chromosomes isn’t to engineer synthetic human cells. Rather, the work is meant to advance gene therapy. Current methods for carrying therapeutic genes or gene editing tools into cells rely on viruses or nanoparticles. But these carriers have limited cargo capacity.

If current delivery vehicles are like sailboats, artificial human chromosomes are like cargo ships, with the capacity to carry a far larger and wider range of genes.

The problem? They’re hard to build. Unlike bacterial chromosomes, which are circular, human chromosomes are linear, usually depicted as an “X.” At the center of each is a protein hub called the centromere that allows the chromosome to separate and replicate when a cell divides.

In a way, the centromere is like a button that keeps fraying pieces of fabric—the arms of the chromosome—intact. Earlier efforts to build human artificial chromosomes focused on these structures, stitching together centromeric DNA sequences that recruit anchoring proteins inside human cells. However, these DNA sequences rapidly grabbed onto themselves like double-sided tape, ending up in tangled balls that made it difficult for cells to access the added genes.

One reason could be that the synthetic DNA sequences were too short, making the mini-chromosome components unreliable. The new study tested the idea by engineering a far larger human chromosome assembly than before.

Eight Is the Lucky Number

Rather than an X-shaped chromosome, the team designed their human artificial chromosome as a circle, which is compatible with replication in yeast. The circle packed a hefty 760,000 DNA letter pairs—roughly 1/200 the size of an entire human chromosome.
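
That fraction can be sanity-checked against standard genome figures (the roughly 3.1-billion-letter haploid genome spread across 23 chromosomes; these are general genomics averages, not numbers from the study):

```python
hac_size = 760_000           # DNA letter pairs in the artificial chromosome
avg_chromosome = 3.1e9 / 23  # haploid human genome (~3.1 Gb) over 23 chromosomes

fraction = hac_size / avg_chromosome
print(f"~1/{1 / fraction:.0f} the size of an average human chromosome")  # ~1/177
```

That lands close to the article’s rough 1/200 figure.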

Inside the circle were genetic instructions to make a sturdier centromere—the “button” that keeps the chromosome structure intact and lets it replicate. Once inside a yeast cell, this sequence recruited the yeast’s molecular machinery to build a healthy human artificial chromosome.

In its initial circular form in yeast cells, the synthetic human chromosome could then be directly passed into human cells through a process called cell fusion. Scientists removed the “wrappers” around yeast cells with chemical treatments, allowing the cells’ components—including the artificial chromosome—to merge directly into human cells inside petri dishes.

Like benevolent extraterrestrials, the added synthetic chromosomes settled happily into their human host cells. Rather than clumping into noxious debris, the circles doubled into a figure-eight shape, with the centromere holding the loops together. The artificial chromosomes co-existed with the native X-shaped ones without changing their normal functions.

For gene therapy, it’s essential that any added genes remain inside the body even as cells divide. This is especially important for fast-dividing cells, such as cancers that rapidly adapt to therapies. If a synthetic chromosome is packed with known cancer-suppressing genes, it could keep cancers and other diseases in check through generations of cells.

The artificial human chromosomes passed the test. They recruited proteins from the human host cells to help them spread as the cells divided, thus conserving the artificial genes over generations.

A Revival

Much has changed since the first human artificial chromosomes.

Gene editing tools, such as CRISPR, have made it easier to rewrite our genetic blueprint. Delivery mechanisms that target specific organs or tissues are on the rise. But synthetic chromosomes may be regaining some of the spotlight.

Unlike viral carriers, the most commonly used delivery vehicles for gene therapies or gene editors, artificial chromosomes can’t tunnel into our genome and disrupt normal gene expression—making them potentially far safer.

The technology still has weaknesses, though. The engineered chromosomes are often lost when cells divide. And synthetic genes placed near the centromere—the “button” of the chromosome—may disrupt the artificial chromosome’s ability to replicate and separate during cell division.

But to Dawe, the study has larger implications than human cells alone. The principles of re-engineering centromeres shown in this study could be used for yeast and potentially be “applicable across kingdoms” of living organisms.

The method could help scientists better model human diseases or produce drugs and vaccines. More broadly, “It may soon be possible to include artificial chromosomes as a part of an expanding toolkit to address global challenges related to health care, livestock, and the production of food and fiber,” he wrote.

Image Credit: Warren Umoh / Unsplash

‘Dark Stars’: Dark Matter May Form Exploding Stars—Finding Them Could Help Reveal What It’s Made Of


Dark matter is a ghostly substance that astronomers have failed to detect for decades, yet which we know has an enormous influence on normal matter in the universe, such as stars and galaxies. Through the massive gravitational pull it exerts on galaxies, it spins them up, gives them an extra push along their orbits, or even rips them apart.

Like a cosmic carnival mirror, it also bends the light from distant objects to create distorted or multiple images, a process called gravitational lensing.

And recent research suggests it may create even more drama than this, by producing stars that explode.

For all the havoc it wreaks on galaxies, not much is known about whether dark matter can interact with itself other than through gravity. If it experiences other forces, they must be very weak; otherwise, they would have been measured by now.

One possible candidate, a hypothetical class of weakly interacting massive particles (WIMPs), has been studied intensively, so far with no observational evidence.

Recently, other types of particles, also weakly interacting but extremely light, have become the focus of attention. These particles, called axions, were first proposed in the late 1970s to solve a problem in quantum physics (the so-called strong CP problem), but they may also fit the bill for dark matter.

Unlike WIMPs, which cannot “stick” together to form small objects, axions can do so. Because they are so light, a huge number of axions would be needed to account for all the dark matter, which means they would have to be crammed closely together. But because they are a type of subatomic particle known as a boson, they don’t mind the crowding.

In fact, calculations show axions could be packed so closely that they start behaving strangely—collectively acting like a wave—according to the rules of quantum mechanics, the theory which governs the microworld of atoms and particles. This state is called a Bose-Einstein condensate, and it may, unexpectedly, allow axions to form “stars” of their own.

This would happen when the wave moves on its own, forming what physicists call a “soliton,” which is a localized lump of energy that can move without being distorted or dispersed. Solitons appear on Earth too, in vortices and whirlpools, or in the bubble rings dolphins blow underwater.

The new study provides calculations showing that such solitons would keep growing until they formed “stars” similar in size to, or larger than, normal stars. Eventually, though, they would become unstable and explode.

The energy released from one such explosion (dubbed a “bosenova”) would rival that of a supernova (an exploding normal star). Given that dark matter far outweighs the visible matter in the universe, this would surely leave a sign in our observations of the sky. We have yet to find such scars, but the new study gives us something to look for.

An Observational Test

The researchers behind the study say that the surrounding gas, made of normal matter, would absorb this extra energy from the explosion and emit some of it back. Since most of this gas is made of hydrogen, we know this light should be in radio frequencies.

Excitingly, future observations with the Square Kilometre Array radio telescope may be able to pick it up.

Artist's impression of the SKA telescope.
Artist’s impression of the SKA telescope. Image Credit: Wikipedia, CC BY-SA

So, while the fireworks from dark star explosions may be hidden from our view, we might be able to find their aftermath in the visible matter. What’s great about this is that such a discovery would help us work out what dark matter is actually made of—in this case, most likely axions.

What if observations do not detect the predicted signal? That probably won’t rule out this theory completely, as other “axion-like” particles are still possible. A failure of detection may indicate, however, that the masses of these particles are very different, or that they do not couple with radiation as strongly as we thought.

In fact, this has happened before. Originally, it was thought that axions would couple so strongly that they would be able to cool the gas inside stars. But since models of star cooling showed stars were just fine without this mechanism, the axion coupling strength had to be lower than originally assumed.

Of course, there is no guarantee that dark matter is made of axions. WIMPs are still contenders in this race, and there are others too.

Incidentally, some studies suggest that WIMP-like dark matter may also form “dark stars.” In this case, the stars would still be normal (made of hydrogen and helium), with dark matter just powering them.

These WIMP-powered dark stars are predicted to be supermassive and to live only for a short time in the early universe. But they could be observed by the James Webb Space Telescope. A recent study has claimed three such discoveries, although the jury is still out on whether that’s really the case.

Nevertheless, the excitement about axions is growing, and there are many plans to detect them. For example, axions are expected to convert into photons when they pass through a magnetic field, so searches are targeting stars with strong magnetic fields, such as neutron stars, or even the sun, looking for photons of the expected energy.

On the theoretical front, there are efforts to refine the predictions for what the universe would look like with different types of dark matter. For example, axions may be distinguished from WIMPs by the way they bend the light through gravitational lensing.

With better observations and theory, we hope the mystery of dark matter will soon be solved.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: ESA/Webb, NASA & CSA, A. Martel

This Week’s Awesome Tech Stories From Around the Web (Through March 23)

ARTIFICIAL INTELLIGENCE

8 Google Employees Invented Modern AI. Here’s the Inside Story
Steven Levy | Wired
“They met by chance, got hooked on an idea, and wrote the ‘Transformers’ paper—the most consequential tech breakthrough in recent history. …Approaching its seventh anniversary, the ‘Attention’ paper has attained legendary status. The authors started with a thriving and improving technology—a variety of AI called neural networks—and made it into something else: a digital system so powerful that its output can feel like the product of an alien intelligence.”

BIOTECH

Surgeons Transplant Pig Kidney Into a Patient, a Medical Milestone
Roni Caryn Rabin | The New York Times
“Surgeons in Boston have transplanted a kidney from a genetically engineered pig into an ailing 62-year-old man, the first procedure of its kind. If successful, the breakthrough offers hope to hundreds of thousands of Americans whose kidneys have failed. …If kidneys from genetically modified animals can be transplanted on a large scale, dialysis ‘will become obsolete,’ said Dr. Leonardo V. Riella, medical director for kidney transplantation at Mass General.”

GENE EDITING

CRISPR Could Disable and Cure HIV, Suggests Promising Lab Experiment
Clare Wilson | New Scientist
“A new way to eradicate HIV from the body could one day be turned into a cure for infection by this virus, although it hasn’t yet been shown to work in people. Several groups are investigating using CRISPR that targets a gene in HIV as a way of disabling dormant virus. Now, Carrillo and her team have shown that, when tested on immune cells in a dish, their CRISPR system could disable all virus, eliminating it from these cells.”

TECH

Microsoft Deal, Apple-Google Talks Show Tech Giants Need AI Help
Dina Bass and Jackie Davalos | Bloomberg
“The moves suggest that despite pouring billions of dollars into partnerships, investments and product development, Microsoft and Google are struggling to figure out how to capitalize on generative artificial intelligence. Neither company is moving fast enough to field consumer products that generate revenue and grab market share, and, despite their size and power, they remain vulnerable to being disrupted.”

SPACE

The US Government Seems Serious About Developing a Lunar Economy
Eric Berger | Ars Technica
“For the first time ever, the United States is getting serious about fostering an economy on the moon. …In recent months, [DARPA] has stepped in to help. In December, DARPA announced that it was working with 14 different companies under LunA-10, including major space players such as Northrop Grumman and SpaceX, as well as non-space firms such as Nokia. These companies are assessing how services such as power and communications could be established on the Moon, and they’re due to provide a final report by June.”

3D PRINTING

Video: Giant Robotic Arm 3D-Prints a Two-Story House
Michael Franco | New Atlas
“A new 3D construction printer from Icon can whip out two-story concrete buildings faster and cheaper than its previous Vulcan printer. It has already been used to build a 27-ft-high structure called Phoenix House, now on display in Austin, Texas.”

TECH

Elon Musk Just Added a Wrinkle to the AI Race
Matteo Wong | The Atlantic
“Yesterday afternoon, Elon Musk fired the latest shot in his feud with OpenAI: His new AI venture, xAI, now allows anyone to download and use the computer code for its flagship software. No fees, no restrictions, just Grok, a large language model that Musk has positioned against OpenAI’s GPT-4, the model powering the most advanced version of ChatGPT.”

SECURITY

Hackers Found a Way to Open Any of 3 Million Hotel Keycard Locks in Seconds
Andy Greenberg | Wired
“At one private event in 2022, a select group of researchers were actually invited to hack a Vegas hotel room, competing in a suite crowded with their laptops and cans of Red Bull to find digital vulnerabilities in every one of the room’s gadgets, from its TV to its bedside VoIP phone. …Now, more than a year and a half later, they’re finally bringing to light the results of that work: a technique they discovered that would allow an intruder to open any of millions of hotel rooms worldwide in seconds, with just two taps.”

ETHICS

OpenAI’s Chatbot Store Is Filling Up With Spam
Kyle Wiggers | TechCrunch
“TechCrunch found that the GPT Store, OpenAI’s official marketplace for GPTs, is flooded with bizarre, potentially copyright-infringing GPTs that imply a light touch where it concerns OpenAI’s moderation efforts. A cursory search pulls up GPTs that purport to generate art in the style of Disney and Marvel properties, but serve as little more than funnels to third-party paid services, and advertise themselves as being able to bypass AI content detection tools such as Turnitin and Copyleaks.”

Image Credit: Pawel Czerwinski / Unsplash

Researchers Are Building Universal Exoskeletons Anyone Can Use


Robotic exoskeletons could help disabled people regain their mobility, factory workers lift heavier loads, or athletes run faster. So far, they’ve been largely restricted to the lab due to the need to painstakingly calibrate them for each user, but a new universal controller could soon change that.

While the word “exoskeleton” might evoke sci-fi images from movies like Alien and Avatar, the technology is edging its way towards the real world. Exoskeletons have been tested as a way to prevent injuries in car factories, let soldiers lug around heavy packs for longer, and even help people with Parkinson’s stay mobile.

But the software controlling how much power to apply in support of a user’s limbs typically has to be carefully tweaked to fit each individual. And it normally helps only with the few predetermined movements it was specially designed for.

A new approach by researchers at the Georgia Institute of Technology uses neural networks to seamlessly adapt an exoskeleton’s movements to each user’s particular posture and gait. The team says this could help get the technology out of the lab and start aiding people in everyday life.

“What’s so cool about this is that it adjusts to each person’s internal dynamics without any tuning or heuristic adjustments, which is a huge difference from a lot of work in the field,” Aaron Young, who led the research, said in a press release.

Exoskeletons use electric motors to provide extra power to a user’s limbs when carrying out strenuous activities. Most control schemes have focused on assisting well-defined activities, such as walking or climbing stairs.

A common approach, the researchers say, is to have a high-level algorithm predict what action the user is trying to take and then, when that activity is detected, initiate a special control scheme designed for that kind of movement.

This means the exoskeleton can only assist specific activities, and even if the device supports several different ones, users often have to toggle between them by pressing a button. What’s more, it means the device needs to be carefully adjusted so its control scheme matches the unique shape and dynamics of each user’s limbs.

The new approach, designed by the Georgia Tech team and described in a paper in Science Robotics, instead focuses on what a user’s joints and muscles are doing at any particular point in time and provides powered support continuously. The approach was tested in a hip exoskeleton, which the researchers say is useful in a wide range of scenarios.

A neural network running on a GPU chip reads data from several sensors on the exoskeleton that measure the angles of different joints and the user’s direction and speed. It uses this information to predict what movements the user is making and then directs the exoskeleton’s motors to apply just the right amount of torque to take some of the load off the relevant muscles.

The team trained the neural network on data from 25 participants walking in a variety of contexts while wearing the exoskeleton. This helped the algorithm gain a general understanding of how sensor data related to muscle movements, making it possible to automatically adapt to new users without being tailored to their idiosyncrasies.
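
The paper’s exact architecture isn’t described in the article, but the general recipe it outlines (a network that regresses from a window of sensor readings to assistive joint torques, trained on data pooled across many users) can be sketched roughly as follows. All layer sizes, sensor channels, and names here are illustrative assumptions, not details from the Science Robotics paper.

```python
# Minimal sketch, in PyTorch, of a user-independent sensor-to-torque
# controller. Every size, channel count, and name here is an assumption
# for illustration, not the Georgia Tech team's actual model.
import torch
import torch.nn as nn

class TorqueEstimator(nn.Module):
    """Maps a short window of exoskeleton sensor readings to hip torques."""
    def __init__(self, n_channels: int = 10, window: int = 20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                        # (batch, window, channels) -> flat vector
            nn.Linear(window * n_channels, 128),
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, 2),                    # assistive torque for left and right hip
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TorqueEstimator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for data pooled across many users: 32 samples,
# each a 20-step window of 10 sensor channels, paired with measured torques.
sensors = torch.randn(32, 20, 10)
measured_torque = torch.randn(32, 2)

optimizer.zero_grad()
loss = loss_fn(model(sensors), measured_torque)
loss.backward()
optimizer.step()
```

Because the training data spans many users, the learned mapping can generalize to new wearers without per-person tuning, which is the crux of the universal-controller idea.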

The study showed the resulting system could significantly reduce the amount of energy users expended across a variety of activities. While the reduction in energy use was similar to that of previous approaches, crucially, it wasn’t restricted to particular actions: the system could provide continuous support no matter what the user was doing.

While the researchers say it’s too early to know if the approach will translate to other kinds of exoskeletons, it seems the overarching idea is relatively adaptable. That suggests exoskeletons could soon become an “off-the-shelf” product assisting people in a wide range of strenuous activities.

Image Credit: Candler Hobbs, Georgia Institute of Technology

Cell Therapy Takes Aim at Deadly Brain Tumors in Two Clinical Trials


When my uncle was diagnosed with glioblastoma, I knew he was on borrowed time.

The deadliest form of brain cancer, it rapidly spreads through the brain and comes with limited treatment options. Rounds of chemotherapy temporarily kept the aggressive tumors at bay. But they also wrecked his mind and immune system. He held on for 13 months—longer than the average survival time after diagnosis.

His story is just one of tens of thousands in the US alone. Despite decades spent looking for a therapy, glioblastoma remains a terrible, untreatable foe.

But hope may come from within. This month, two studies genetically engineered the body’s own immune cells to hunt down and wipe out glioblastoma brain tumors.

Therapies using these CAR (chimeric antigen receptor) T cells have been revolutionary in tackling previously untreatable blood cancers, such as leukemia. Since 2017, six CAR T-based therapies have been approved by the US Food and Drug Administration for multiple types of blood cancers. Rather than a last resort, they have now entered the therapeutic mainstream.

But CAR T therapies have always struggled to battle solid tumors. Glioblastomas are an even harder challenge. The cancerous cells form connections with neurons, rewiring neural networks to progressively change how the brain functions and eventually robbing it of cognitive function. This also makes it nearly impossible to surgically remove the tumors without harming the brain.

The new clinical trials offer a glimmer of hope that the therapy could slow the disease down.

One, led by Dr. Bryan Choi at Massachusetts General Hospital, found a single infusion of CAR T cells shrank the tumors in three people with recurrent glioblastoma. Another from the University of Pennsylvania Perelman School of Medicine used a different CAR T formulation to similarly reduce the size of brain tumors in six participants.

Though promising, the treatment wasn’t a cure. The tumors recurred in several people after six months. However, one man remained cancer-free beyond that point.

To be clear, these are only interim results from a small handful of participants. Both trials are still actively recruiting participants to further assess the treatment.

But to Choi, it’s a step toward expanding CAR T therapies beyond blood cancers. “It lends credence to the potential power of CAR T cells to make a difference in solid tumors, especially the brain,” he told Nature.

Power of Two

Cancer cells are sneaky. Our body’s immune system is constantly scouting for them, but the cells rapidly mutate to escape surveillance.

T cells are one of the main immune cell types keeping an eye out for cancer. In the past decade, scientists have given them an artificial boost with genetic engineering. These gene-edited T cells, used in CAR T therapies, can better hunt down cancerous blood cells.

Here’s how it usually works.

Physicians isolate a person’s T cells and genetically add extra protein “hooks” to their surfaces to help them better locate cancer cells. Like all cells, cancerous ones have many protein “beacons” dotted along their exteriors, some specific to each cancer. In CAR T therapy, the new hooks are designed to easily grab onto those proteins, or antigens. Once the boosted cells are re-infused into the body, they can more effectively seek and destroy cancerous cells.

While the strategy has been game-changing for blood cancers, it has faltered for solid tumors—such as those that grow in organs like the breasts, lungs, or brain. One challenge is finding the right antigens. Unlike leukemia, solid tumors are often made up of a mix of cells, each with a different antigen fingerprint. Reprogramming T cells to target just one antigen often means they miss other cancerous cells, lowering the efficacy of the treatment.

“The challenge with GBM [glioblastoma] and other solid tumors is tumor heterogeneity, meaning not all cells within a GBM tumor are the same or have the same antigen that a CAR T cell is engineered to attack,” Dr. Stephen Bagley, who led the University of Pennsylvania clinical trial, said in a press release. “Every person’s GBM is unique to them, so a treatment that works for one patient might not be as effective for another.”

So, why not add an extra “hook” to CAR T cells?

Tag-Team Triumph

Both of the new studies used the dual-target method.

Choi’s team zeroed in on a protein called epidermal growth factor receptor (EGFR). The protein is essential to the developing brain but can lead to glioblastoma in its normal and mutated forms. The problem is the protein also occurs in other healthy tissues, such as the skin, lungs, and gut. As a workaround, the team added an “engager” protein to tether T cells to their target.

In three participants, a single infusion directly into the brain decreased the size of their tumors within days. The effects were “dramatic and rapid,” wrote the team. The cancer came back in two people. But in one, a 72-year-old man, the treatment shrank his brain tumor by over 60 percent, a response that lasted more than six months.

The Penn Medicine team also targeted EGFR. In addition, their CAR T cell recipe grabbed onto another protein that’s estimated to mark over 75 percent of glioblastomas. In the 48 hours after a direct infusion into the brain, the tumors shrank in all six participants, with the effects lasting at least two months in some. The participants, aged 33 to 71, had each suffered at least one relapse of tumor growth before starting the treatment.

“We are energized by these results, and are eager to continue our trial, which will give us a better understanding of how this dual-target CAR T cell therapy affects a wider range of individuals with recurrent GBM [glioblastoma],” lead study author Dr. Donald O’Rourke said in the press release.

The treatment did have side effects. Even at a lower dose, it damaged neurons, a complication that had to be managed with a heavy dose of other medications.

Unlike previous CAR T therapies, which are infused into the bloodstream, both studies require direct injection into the brain. While potentially more effective because the engineered cells have direct contact with their target, brain surgery is never ideal.

Both teams are now dialing in their formulations to reduce side effects and make the therapies last longer. The Penn Medicine team will also map the CAR T cells’ infiltration of brain tumors over time. The dual-targeting method could make it harder for cancer cells to evolve resistance to the therapy. By better understanding these interactions, researchers may be able to build better CAR T formulations for glioblastoma and other solid tumors.

It’s not a home run. But for deadly brain tumors, the studies offer a ray of hope.

Image Credit: NIAID