
The 6 Ds of Tech Disruption: A Guide to the Digital Economy

“The Six Ds are a chain reaction of technological progression, a road map of rapid development that always leads to enormous upheaval and opportunity.”

–Peter Diamandis and Steven Kotler, Bold

We live in incredible times. News travels the globe in an instant. Music, movies, games, communication, and knowledge are ever-available on always-connected devices. From biotechnology to artificial intelligence, powerful technologies that were once only available to huge organizations and governments are becoming more accessible and affordable thanks to digitization.

The potential for entrepreneurs to disrupt industries and corporate behemoths to unexpectedly go extinct has never been greater.

One hundred or fifty or even twenty years ago, disruption meant coming up with a product or service people needed but didn’t have yet, then finding a way to produce it with higher quality and lower costs than your competitors. This entailed hiring hundreds or thousands of employees, having a large physical space to put them in, and waiting years or even decades for hard work to pay off and products to come to fruition.

“Technology is disrupting traditional industrial processes, and they’re never going back.”

But thanks to digital technologies developing at exponential rates of change, the landscape of 21st-century business has taken on a dramatically different look and feel.

The structure of organizations is changing. Instead of thousands of employees and large physical plants, modern start-ups are small organizations focused on information technologies. They dematerialize what was once physical and create new products and revenue streams in months, sometimes weeks.

It no longer takes a huge corporation to have a huge impact.

Technology is disrupting traditional industrial processes, and they’re never going back. This disruption is filled with opportunity for forward-thinking entrepreneurs.

The secret to positively impacting the lives of millions of people is understanding and internalizing the growth cycle of digital technologies. This growth cycle takes place in six key steps, which Peter Diamandis calls the Six Ds of Exponentials: digitization, deception, disruption, demonetization, dematerialization, and democratization.

According to Diamandis, cofounder and executive chairman of Singularity University and founder and executive chairman of XPRIZE, when something is digitized it begins to behave like an information technology.


Newly digitized products develop at an exponential pace instead of a linear one, fooling onlookers at first before going on to disrupt companies and whole industries. Before you know it, something that was once expensive and physical is an app that costs a buck.
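To see why exponential growth fools onlookers at first, it helps to run the numbers. Here’s a minimal sketch in Python; the starting values are generic illustrations, not Diamandis’s figures:

```python
# Toy comparison: linear progress adds a fixed step each cycle;
# a digitized technology doubles. Values are purely illustrative.
linear, exponential = 0.0, 0.01

for step in range(1, 31):
    linear += 1.0        # steady gain of 1 unit per step
    exponential *= 2.0   # doubling per step
    if step in (7, 15, 30):
        print(f"step {step:2d}: linear = {linear:4.0f}   exponential = {exponential:,.0f}")

# step  7: linear =    7   exponential = 1            <- the deceptive phase
# step 15: linear =   15   exponential = 328          <- the curve takes off
# step 30: linear =   30   exponential = 10,737,418   <- disruption
```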

Newspapers and CDs are two obvious recent examples. The entertainment and media industries are still dealing with the aftermath of digitization as they attempt to transform and update old practices tailored to a bygone era. But it won’t end with digital media. As more of the economy is digitized—from medicine to manufacturing—industries will hop on an exponential curve and be similarly disrupted.

Diamandis’s Six Ds are critical to understanding and planning for this disruption.

Diamandis uses the contrasting fates of Kodak and Instagram to illustrate the power of the Six Ds and exponential thinking.

Kodak invented the digital camera in 1975, but didn’t invest heavily in the new technology, instead sticking with what had always worked: traditional cameras and film. In 1996, Kodak had a $28 billion market capitalization with 95,000 employees.

But the company didn’t pay enough attention to how digitization of their core business was changing it; people were no longer taking pictures in the same way and for the same reasons as before.

After a downward spiral, Kodak went bankrupt in 2012. That same year, Facebook acquired Instagram, a digital photo sharing app, which at the time was a startup with 13 employees. The acquisition’s price tag? $1 billion. And Instagram had been founded only 18 months earlier.

The most ironic piece of this story is that Kodak invented the digital camera; they took the first step toward overhauling the photography industry and ushering it into the modern age, but they were unwilling to disrupt their existing business by taking a risk in what was then uncharted territory. So others did it instead.

The same can happen with any technology that’s just getting off the ground. It’s easy to stop pursuing it in the early part of the exponential curve, when development appears to be moving slowly. But failing to follow through only gives someone else the chance to do it instead. 

The Six Ds are a road map showing what can happen when an exponential technology is born. Not every phase is easy, but the results give even small teams the power to change the world in a faster and more impactful way than traditional business ever could.

Image Credit: Shutterstock

This Is How to Invent Radical Solutions to Huge Problems

If you force a grasshopper into a jar and fasten the lid, the grasshopper eventually shortens its jump after hitting the lid enough times. 

After a while, even if you take the lid off the jar, the grasshopper will stay put—it’s forgotten how high it can jump.

We’re a lot like grasshoppers in this way.

When we’re kids we believe we can be and achieve anything. But then, slowly, our big thinking starts disappearing, and before we know it, we’re playing it safe and setting goals we already know are achievable.

The problem with playing it safe, though, is it never results in a breakthrough.

This is why great leaders and organizations set moonshots—or wildly ambitious goals. These goals are at the heart of moonshot thinking, a unique and powerful approach to big thinking and problem solving.

Moonshot thinking is how we can take the lid off our own ideas, and according to some of our favorite innovators, there’s a blueprint for how to shift your mindset.


The Original Moonshot

Maybe you’re thinking, “Moonshot thinking is just a Silicon Valley buzzword, like innovation.” But the term has deep roots. Let’s look back at its powerful origin.

In 1962 President John F. Kennedy delivered a speech at Rice University where he spoke the famous words, “We choose to go to the moon in this decade.”

These words planted the seeds that ultimately changed the course of humanity.

JFK’s original moonshot was heroic and grandiose—it took America to the freaking moon—but he didn’t set this goal (or moonshot) knowing how we’d achieve it or promising it would be easy. He said we were going to achieve something incredible, set the timeframe, and inspired action.

The rest is history. This is the power of moonshot thinking in action.

Now, over fifty years since Kennedy sent the country on this original moonshot, Google’s Astro Teller has made moonshot thinking his life purpose.

Teller is Google’s “Captain of Moonshots” and the director of X (formerly Google X). X is a moonshot R&D factory where they test and launch projects that use breakthrough technologies to build solutions that can radically improve the world.


Moonshot Thinking 101

Moonshot thinking is when you pick a huge problem, like climate change, and set out to create a radical solution to the problem. To make this happen you have to abandon the idea of creating a 10% improvement. Instead, the focus is a solution that will bring tenfold (or 10x) improvements, or solve it altogether.

The Difference of Thinking Big: 10x vs. 10%

Focusing on 10x improvements (in areas like cost, speed, performance, design, etc.) triggers a series of behavioral changes that are key to making a moonshot a reality.

Aiming for 10x causes a radical re-framing of the problem at hand. When teams approach a problem believing they can solve it, not just improve it, it uncaps individual and collective thinking.


10x thinking forces organizations to constantly prioritize innovative behavior, which is critical because innovation can’t just be whipped out when it’s convenient.

Shooting for 10x frees teams to throw out the rulebook when needed. Moonshots often can’t be built atop the current assumptions, tools, and infrastructures that got you to the problem in the first place.

The Moonshot Blueprint

Teller has outlined the three intersecting factors used at X to form a moonshot. You can follow this blueprint to form your own.


1. Huge Problem: Pick a massive problem that, if solved, would positively impact the lives of millions, even billions.

2. Radical Solution: Create and propose a radical new solution to that problem. It’s okay if the solution seems crazy today. Teller says it can sound “almost like science fiction.”

3. Breakthrough Technology: Search for breakthroughs and emerging technologies that exist today—like machine learning, 3D printing, and robotics—and leverage those technologies in your solution. This provides evidence that the solution (though wild-sounding today) may be possible in the future. 

Moonshots of Today

All around us are breakthroughs that once started as moonshots. These breakthroughs often become so embedded in our lives that we fail to notice their presence!

Think about the screen you’re reading this on, the refrigerator in your kitchen, the planes you fly on, the vaccines and medicines that keep you healthy. Once these were just moonshot ideas.

Here are examples of incredible moonshots that are in the works today:

  • SpaceX
    Moonshot: Make humans a multiplanetary species. SpaceX’s Falcon Heavy will be the world’s most powerful functioning rocket when it lifts off later this year.
  • Google’s self-driving car
    Moonshot: Make an autonomously driven vehicle. Google’s self-driving car prototype is electrically powered and equipped with the sensors and software to navigate and operate the car. Google self-driving cars have already driven 2 million miles on public roads.
  • Made In Space
    Moonshot: Manufacture everything used in space, in space. Made In Space put the first 3D printer on the International Space Station in 2014. Now, they’re building Archinaut, an autonomous manufacturing platform that can build and assemble large-scale structures in orbit (picture antennas in space larger than a football stadium, providing internet to everyone on Earth).

How to Fuel Your Moonshot

Here are two expert tips from Teller on how to keep moonshots breathing in your organization (or kill them when the time comes).

Tip #1: Context matters…a lot

Think about it like this: you can’t expect teams to think big or act boldly if it’s clear your organization always kills big ideas and favors the teams who play it safe.

Teams must feel a sense of freedom and safety to experiment; they need to know it’s safe to fail—because they’re going to. Having zero failures is often an indicator the thinking isn’t big enough. This is why Teller rewards teams when they fail.

This culture must be embedded throughout an organization because big thinking and innovation are like muscles. If teams don’t exercise the muscles, they weaken.


Teller talks about this as having “moonshots all the way down,” so that moonshot thinking even permeates how teams collaborate and form processes.

Tip #2: Gather insights fast

At X, teams have a lot of room for experimentation and risk-taking, but they also have to put their ideas to the test early on.

It’s a critical part of moonshot thinking. Failing becomes a lot more dangerous if you dump tons of time and money into an untested idea. Not gaining quick insights can also cause people to become overly attached to a specific idea or solution.

Teams can test ideas by making lean (fast and cheap) prototypes up front.

By repeatedly testing the prototype, drawing insights from the test, and then making iterations based on the insights (a process called rapid prototyping), the idea runs through rapid learning cycles with tight feedback loops.
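In code terms, that cycle is just a tight loop. Below is a minimal sketch of the build-test-learn pattern; the names, scoring, and toy “prototype” are invented for illustration and are not X’s actual process:

```python
import random

def rapid_prototyping(build, test, iterate, max_cycles=10, good_enough=0.9):
    """Run an idea through tight build-test-learn feedback loops,
    and kill it if it never clears the bar within the cycle budget."""
    prototype = build()                        # fast, cheap prototype up front
    for cycle in range(max_cycles):
        score = test(prototype)                # gather insights fast
        print(f"cycle {cycle}: score = {score:.2f}")
        if score >= good_enough:
            return prototype                   # the insights say: keep going
        prototype = iterate(prototype, score)  # fold the learning back in
    return None                                # enthusiastic skepticism: kill it

# Toy usage: the "prototype" is just a number we nudge toward a target.
target = 0.75
result = rapid_prototyping(
    build=lambda: random.random(),
    test=lambda p: 1 - abs(p - target),
    iterate=lambda p, score: p + 0.5 * (target - p),
)
```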

Gathering data points for an idea early on is also important because it’s how teams can quickly uncover problems and ensure that they’re moving in the right direction.

Tight feedback loops and rapid learning also make it easier to know when an idea needs to be shut down. To do this, Teller recommends balancing “unchecked optimism” with “enthusiastic skepticism,” which helps you stay open to ideas while also feeling comfortable scrutinizing or killing projects that aren’t making the cut.

Teller invites teams to cheerfully ask each other, “How are we going to try to kill our project today?”

Takeaways—Boldly Go!

Think back to JFK’s original moonshot. It all started with an audacious, seemingly impossible goal. Moonshot thinking challenges us to question what’s possible and to take on huge goals.

In what ways are you capping your own thinking or your teams’? Are you taking on a big enough problem in the world?

Breakthroughs in science, technology, and industry happen when we push into the unknown and explore distant frontiers.

It’s one reason Star Trek’s famous motto, “Boldly go where no one has gone before,” speaks to the hearts and minds of millions.

So boldly go, and dare to orient towards a big problem, because you never know what’s waiting to be discovered.

“Somewhere, something incredible is waiting to be known.”
– Carl Sagan


Banner image credit: Shutterstock

The Motivating Power of a Massive Transformative Purpose

Eradicating diseases, mastering flight, near-instant global communication, going to the moon—humans have developed a taste for making the impossible possible.

Though we still face a daunting list of global challenges, we’ve learned that science and technology can uncover big solutions. But mind-blowing breakthroughs don’t just happen. They take teams of bright and dedicated people chipping away at the problem day and night. They take a huge amount of motivation, toil, and at least a few failures.

To solve our biggest problems, we need people to undertake big tasks. But what drives someone to take on such a difficult, uncertain process and stick with it?

There’s a secret to motivating individuals and teams to do great things: It’s purpose.

Social movements, rapidly growing organizations, and remarkable breakthroughs in science and technology have something in common—they’re often byproducts of a deeply unifying purpose. There’s a name for this breed of motivation.

It’s called massive transformative purpose or MTP.

Setting out to solve big problems brings purpose and meaning to work—it gives us a compelling reason to get out of bed in the morning and face another day.

Peter Diamandis likes to say, “Find something you would die for, and live for it.”

The more we organize around massive transformative purpose, the harder we’ll work, the more dedicated we’ll be, the faster we can solve big problems—and maybe most importantly, the more fulfilled we’ll feel about the work we do.

This article will explore ideas we’ve learned from some of our favorite big thinkers on what makes an MTP and how to find and implement yours.

Understanding Massive Transformative Purpose (MTP)

In 2014, Salim Ismail published Exponential Organizations, co-authored by Mike Malone and Yuri van Geest. In the book, the team analyzed the 100 fastest growing organizations and synthesized their key traits. They discovered every single company on the list had a massive transformative purpose.

In the simplest sense, an MTP is a “highly aspirational tagline” for an individual or group, like a company, organization, community, or social movement.

It’s a huge and audacious purpose statement.


Elon Musk and SpaceX are a good example for understanding MTPs. Musk didn’t found SpaceX to have a luxurious retirement on Mars or just for the sake of building the most profitable aerospace company. He’s driven by the belief humans must become a multi-planetary species. Making this a reality is his purpose.

SpaceX’s MTP to revolutionize space technology and enable people to live on another planet creates a shared aspirational purpose within the organization.

Notice that SpaceX’s MTP is:

  • Huge and aspirational
  • Clearly focused
  • Unique to the company
  • Aimed at radical transformation
  • Forward-looking

MTPs are not representative of what’s possible today; they’re aspirational and focused on creating a different future. This aspirational element is what ignites passion in individuals and groups; it’s what engages people’s hearts and minds to work together to realize their goal.


SpaceX’s MTP does this so well that they’ve also activated a cultural shift outside of the company’s walls, which is a secondary effect of having a strong MTP.

Other examples Ismail, Malone, and van Geest note in their book include the massive lines that form when Apple releases a new iPhone or the huge waitlist each year to get a seat at TED’s annual conference.

MTPs can inspire whole communities and evangelists to form around them.

Four examples of strong massive transformative purposes

As you read through these examples, try to identify how each one fulfills each letter of MTP.

  1. TED: “Ideas worth spreading.”
  2. Google: “Organize the world’s information.”
  3. X Prize Foundation: “Bring about radical breakthroughs for the benefit of humanity.”
  4. Tesla: “Accelerate the transition to sustainable transportation.”

Hopefully, this helps explain what an MTP is. But there are other kinds of motivating messages out there. What distinguishes an MTP from all the rest?

An MTP is not: 

  • Just a company’s mission statement.
  • Technology specific or narrowly focused.
  • Representative of what is possible today.
  • Motivated only by profits.
  • Just a big goal or even a “big hairy audacious goal.” (It must also be driven by a purpose to create transformative impact.)

A successful MTP can often be reframed into a question. That question can then be used to evaluate organizational decisions and whether they’re aligned with the MTP. For example, if the organization TED is deciding whether to move forward with a talk they can ask, “Is this an idea worth spreading?”

The competitive advantages of an MTP

Having an MTP can trigger incredible outcomes, which is why high-growth organizations all tend to have them.

The aspirational quality of an MTP pushes teams to prioritize big thinking, rapid growth strategies, and organizational agility—and these behaviors all have substantial payoffs in the long term.

As an MTP harnesses passion within an organization, it also galvanizes a community to form outside the company that shares the purpose. This sparks an incredible secondary impact by helping organizations attract and retain top qualified talent who want to find mission-driven work and remain motivated by the cause.

Additionally, when people are aligned on purpose, it creates a positive feedback loop by channeling intrinsic motivation towards that shared purpose.

Finally, like a north star, an MTP keeps all efforts focused and aligned, which helps organizations grow cohesively. As the organization evolves and scales, the MTP becomes a stabilizer for employees as they transition into new territory.

How to begin creating an MTP

Peter Diamandis boils identifying your purpose down to two main areas of focus:

  1. Identify the who: Ask yourself who you want to impact. What community do you want to create a lasting positive impact for? Is it high school students? The elderly? People suffering from a chronic disease? These are just a few examples of potential groups to focus your purpose towards.
  2. Identify the what: What problem do you want to take on and solve? Here’s an exercise created by Diamandis to identify the “what” of your purpose:

Step one: Write down the top three problems you are most excited about or most riled up by (that you want to solve).

Step two: For each of the three problems listed above, ask the following six questions and score each from 1-10.
(1 = small difference; 10 = big difference)

ASSESSMENT QUESTIONS
1. If at the end of your life you had made a significant dent in this area, how proud would you feel?
2. Given the resources you have today, what level of impact could you make in the next three years if you solved this problem?
3. Given the resources you expect to have in 10 years, what level of impact could you make in a 3-year period?
4. How well do you understand the problem?
5. How emotionally charged (excited or riled up) are you about this?
6. Will this problem get solved with or without you involved?

TOTAL = Add up your scores and identify the idea with the highest score. This is your winner for now. Does this one intuitively feel right to you?
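The exercise is pen-and-paper arithmetic, but if it helps to make the tallying concrete, here is a minimal Python sketch; the candidate problems and scores are made up for illustration:

```python
def rank_problems(scores):
    """Total the six assessment scores (1-10 each) per candidate problem
    and return them ranked, highest total first."""
    totals = {problem: sum(answers) for problem, answers in scores.items()}
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

# Hypothetical example: three candidate problems, each scored on the
# six assessment questions above.
ranked = rank_problems({
    "ocean plastic":    [9, 4, 7, 6, 9, 8],
    "teen literacy":    [7, 6, 6, 8, 7, 5],
    "rural telehealth": [8, 5, 8, 7, 8, 7],
})
print(ranked[0])  # the highest-scoring problem -- your winner for now
```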

Have an MTP? Here’s what to do next

Realizing an MTP requires a different type of thinking. It requires a mindset and work environment that leans into complex problems and dares to think big—really big.

SpaceX isn’t where they are today because they focused on making 10% improvements to existing aerospace technology. And Google’s self-driving car isn’t the byproduct of a goal to make a 10% improvement to driving.

10% thinking leads to incremental progress, which doesn’t lead to making the impossible possible—like sending people to the moon.

Through history, however, we’ve learned that radically big thinking can lead to these types of breakthroughs.

You have the recipe for creating a massive transformative purpose to push you and your organization to the next level of performance and impact.

Now, it’s time to get to work.

Download a checklist for writing your own MTP, and share your ideas with us @singularityhub

Image credit: Shutterstock

Mice Born From Artificial Eggs a ‘Stunning Achievement’


Last month, a team of British scientists successfully made healthy, fertile mice from pseudo-egg cells that resembled fertilized embryos. The story made waves: compared to normal egg cells, the pseudo-eggs were more similar to non-sex cells such as skin cells. The implications were tantalizing: one day, in the far future, we may be able to make “motherless” babies without the need for eggs.

Welcome to the future.

This week, a team from Kyushu University in Fukuoka, Japan successfully used skin cells to make fully functional mouse egg cells completely in a dish. When fertilized in vitro and brought to term in surrogate mothers, the artificial eggs developed into healthy baby mice that lived normal lives, eventually giving birth to pups of their own. The study was published in the prestigious academic journal Nature.

“This is a very exciting study, to be able to make robust and functional mouse oocytes (egg cells) over and over again entirely in a dish,” wrote Dr. Jacob Hanna, a reproductive scientist at the Weizmann Institute of Science in Rehovot, Israel, who was not involved in the study, in an email to Singularity Hub.

Being able to culture eggs in a dish is a holy grail in biology, explains Hanna. The system lets scientists dig into the biology of fertility, which may help us uncover crucial genes and molecular events that help eggs develop normally.

Although the method’s success rate was about 3.5%, experts in the field are calling the study a “stunning achievement” that could potentially “eradicate infertility” if it can be applied in humans. The ability to make artificial eggs from any cell in the body could allow women who lack viable eggs or male-male couples to have genetic children of their own. “Reproductive age” may become obsolete.

“People might have thought this was science fiction, but it does work,” says Dr. Azim Surani, a stem cell biologist at the Gurdon Institute in Cambridge, UK, who had previously teamed up with study lead author, Dr. Katsuhiko Hayashi.

Hayashi agrees. Looking ahead, many years down the line, there is a possibility that we can produce human eggs from stem cells, he said in an interview with Nature.

But ethical implications need to be resolved before the method can be widely adopted, Hayashi warns. For example, it would be quite possible to introduce genetic mutations into the artificial eggs in culture. These “germ-line” mutations are currently banned since they can be passed down through generations. In other words, the prospect of designer babies has never been closer.

Cooking Up Eggs In a Dish

Hayashi is not a newcomer to making artificial egg cells. Back in 2012, his team made headlines by successfully transforming embryonic stem cells and iPSCs into immature eggs in a cell culture system. (iPSCs, or induced pluripotent stem cells, are reprogrammed mature cells that theoretically have the ability to develop into any kind of cell type and tissue.)

But the method was incomplete. The immature cells had to be transplanted back into the ovaries of female mice to complete the maturation process. In a dish, the lab-grown precursor eggs withered and died. Why this happened was unclear, but the team speculated that something in an egg’s normal environment contributed to its development.

The new work closes that last leg of the maturation gap. Starting with either embryonic stem cells or iPSCs generated from female skin cells, the team first coaxed the cells to become precursor egg cells by forcing them to express a handful of genes. Then they mixed the precursors with clusters of mature non-egg cells taken from mice ovaries, in essence reconstituting an entire ovary in a culture dish.


After three weeks of careful culturing, the precursor egg cells began expressing genes that resembled those of a more mature egg. The researchers then carefully added a cocktail of hormones and other drugs — voila, another two weeks, and the delicate immature eggs had fully grown into mature egg cells.

This suggests we can sidestep implantation by supplying immature egg cells with supporting cells from the ovary in culture, explains Hanna. This makes the method much more possible, and easier, to do in humans since it’s less invasive.

In all, the team made over 50 lab-grown ovaries that produced over three thousand egg cells. Only a third made it all the way to full maturation, as many of the others contained mutations that prevented their development.

Over 400 genes were expressed differently between the lab-grown eggs and eggs that develop naturally inside the body, the authors noted. The artificial eggs also had higher rates of chromosome abnormalities — that is, their DNA was not packaged correctly in the right numbers.

Nevertheless, about 3% of fertilized artificial eggs did develop into normal mouse offspring. In the last step, the scientists took their stem cells or skin cells and redid the process all over again, thus recreating the full cycle of life of an egg completely in a dish.

Motherless Babies

Hayashi carefully notes that for now, female mice remain part of the equation.

We need to take supporting cells from their ovaries for the culture system, and in this study, we used fetal tissue, he explained. If the system were to be directly moved into humans, we would have to use tissue from aborted fetuses to get those supporting cells. That’s still an ethical grey area.

The next step is to try to make those supporting cells also from stem cells. “If we could establish such a culture system, that would be very useful for a human system,” says Hayashi.

The team is cautiously optimistic. According to Hayashi, his team plans to repeat the process in non-human primates first, which could potentially take almost a decade. As of now, the system is far too rudimentary for human use.

“We cannot exclude a risk of having a baby with a serious disease,” he says.

However, if the team irons out the system’s current issues and manages to recreate entire ovaries in the lab from skin cells, we will have an extremely powerful tool for all kinds of fertility conundrums.

Women with genetic fertility issues or who are less fertile due to age or disease could bear children of their own using lab-grown egg cells carrying their DNA. Theoretically, the system could also allow gay couples to have genetic children developed from one partner’s skin cells, fertilized with another’s sperm.

That said, making eggs from male skin cells is a lot more difficult. In Hayashi’s experiments, eggs produced from the tail cells of male mice died during the first few rounds of cell division.

It’s likely this is because male cells carry a Y chromosome that needs to be removed, explains Hanna. But this is already possible, he adds.

There’s no doubt that there are still many hurdles to overcome, and we’re far more willing to take risks in mice than when it comes to our own children — making the leap from a 3.5% success rate to a nearly perfect one will take time.

But the study is a game-changer.

“Sometimes when you know something is possible, it takes off the mental barriers you might have. You start being more optimistic,” says Surani. “I think it is possible.”


Image Credit: Shutterstock

Introducing the New Singularity University


To reflect everything we have become since our founding in 2008, and more importantly, to showcase and accelerate our bold vision for our future, Singularity University has undergone an extensive company-wide rebrand.

We’ve updated our website and moved to a new domain: su.org.

We’ve also unveiled a new logo and brand narrative.

What you see here is the result of a comprehensive 18-month process. We spoke with alumni, faculty, investors, community members, partners, prospects, employees, and other stakeholders. And through this process we were reminded that our community and impact are the heart and soul of Singularity University.

We’re proud to now have alumni in over 100 countries and chapters in 50 cities.

2016 Global Solutions Program Opening Ceremony

As we created our new brand, we wanted to emphasize what is unique about Singularity University: the transformative experience our participants report, the people who make up our community, and our shared mission of impact. Our new tagline expresses our invitation to join us on our mission and underscores our belief that everyone has the potential to create exponential impact:

Be Exponential.

Our new logo is bold and vibrant like our community and was designed with our shared mission and community at the core:

The mark represents all of the necessary ingredients for impact coming together in an exponential mindset and going back out into the world to create global impact and, in turn, a more abundant future.

Over the next 30 years, humanity will encounter some of the greatest transitions any generation has ever had to face. Technological disruption is reshaping every part of our lives… every business, every industry, every society, even what it means to be ‘human.’ We in the SU community know that exponential technology can be used to solve humanity’s biggest challenges.

Our refreshed brand signals our intent to better address our community’s needs and ambitions, and to scale our impact and grow.

SU Labs startup X2AI tests their mental health AI “Tess” with Syrian refugees in Lebanon.

Singularity University is where the necessary ingredients for exponential impact come together as one. You are — or could be — one of those critical ingredients. Your journey is our journey — and we are here to help you and your initiatives be exponential.

We are very grateful to those of you who have been on the journey with us.

Without the Singularity University community of doers, zero-gravity 3D printing, drones for disaster relief, a simple blood test that helps cure cancer by catching it early, artificial intelligence providing equal access to mental healthcare, and phones that help the visually impaired see might not exist.

We look forward to going on this journey with you.

This is only the beginning.

Be Exponential.

Taking the Pulse of Medtech With the Exponential Medicine MEDy Awards


While incredible technologies are being developed to treat various diseases, the wisest startups seem to focus on preventative measures, anticipating a world—and marketplace—where diseases are minimized or avoided entirely.

Singularity University’s annual Exponential Medicine conference highlights the future of medical technology, and its annual MEDy Awards—that’s Medical Entrepreneurship and Disruption—help gauge the pulse of medtech startups.

At this year’s MEDy Awards, startups focused on preventative care via wearables, apps, and data sets. The common thread: rather than treating disease only after it has advanced to the point of being discoverable, let’s create systems to prevent disease in the first place.

The Best Pitch MEDy was won by Elemeno Health. Billed as “a mobile solution for your frontline healthcare team,” Elemeno works to align internal hospital staff on consistency, quality, and safety with user-friendly interfaces and gamified checklists. In other words, it’s exactly the type of software you don’t often see in healthcare.

The Convergence MEDy winner was Upright Technologies, whose founder noted that sitting while slouching is the number one cause of back pain, and back pain is the number one cause of disability in the United States. Upright Technologies has already sold over 10,000 small gizmos that cling to your person throughout the day and simply buzz to let you know when you’re slouching. A corresponding app visualizes your tendencies for bad posture and reinforces better sitting behavior.

The Most Disruptive MEDy went to Pison Technology, whose wearable sensors allow those without full control of their limbs to interface with computers. Focusing initially on patients with ALS, Pison allows users to control a regular computer desktop using unobtrusive devices strapped to upper arm muscles. By reading muscle signals, the user can control a cursor on the screen, and a feedback system will eventually give Pison the ability to monitor neuromuscular conditions and corresponding muscle signals in real time.

Lastly, EmojiHealth won the One to Watch MEDy. Seventeen-year-olds Alexandra Reeves and Anna Melnyk created a Facebook Messenger chatbot to regularly and casually check in to see how users are feeling and ask if they’ve been keeping up on regular health habits, such as taking prescribed medication.

The founders of EmojiHealth at the Exponential Medicine 2016 MEDy Awards.

Beyond Startups

Startups, organizations, and governments are all identifying the need for consumer-oriented medical technology. This year, Singularity University launched their second California Impact Challenge in partnership with the California Governor’s Office, focused specifically on precision medicine, which is to say medical technology that uses a precise, person-centered approach for diagnosis and treatment.

The winner of the challenge, Kanteron Systems, built a software platform that focuses on the patient as an individual. Their digital healthcare ecosystem integrates medical imaging with genomic, pharmacogenomic, and biosensor data to improve both diagnoses and treatment plans.

The Future of Healthcare Companies

As Peter Diamandis has written, healthcare is a fundamentally broken industry, and that’s probably why we’re seeing the most intriguing and fastest-growing startups focusing on consumers and niche diseases rather than appealing to big pharma and existing hospital systems.

An entrepreneur isn’t eager to be the next Bayer when they can empower consumers directly as the next 23andMe or Fitbit.

We often cite “the democratization of healthcare” and “the quantified self” as the tropes indicating where medtech is headed. And if today is about software and wearables, tomorrow is about the big data and insights generated from all of those platforms.

See the rest of our Exponential Medicine coverage here.

The Future of Surgery Is Robotic, Data-Driven, and Artificially Intelligent 

As far back as 3,500 years ago, ancient Egyptian doctors were performing invasive surgeries. Even though our tools and knowledge have improved drastically over time, until very recently surgery was still a manual task for human hands.

When it came out about 15 years ago, Intuitive Surgical’s da Vinci surgical robot was a major innovation. The da Vinci robot helps surgeons operate with more precision and dexterity and removes natural hand tremors during surgery.

In the years since da Vinci first came out, many other surgical robots have arrived. And today there’s a new generation coming online, like the Verb robot, the product of a joint venture between Google and Johnson & Johnson. This means surgery is about to get even more interesting. Surgical robotics will be able to do more than just improve dexterity and reduce incision size…

“We’re on the verge of what we might call the second wave in surgical robotics,” said Catherine Mohr, vice president of strategy at Intuitive Surgical, while speaking at Singularity University’s Exponential Medicine conference this week.

Mohr believes this new wave of innovation will be characterized by the convergence of surgical robotics with AI and data gathered from robotic systems.

Surgery is about to get “digitized.” We’ll start collecting and analyzing data passing through these robotic systems, like motion tracking. “Once we can turn something into data, then we can start making exponential changes,” Mohr said.

A problem “calling out for robotics”

China is currently on track to have a million lung cancer deaths a year. Lung cancer is surgically treatable, but only if it is found fast enough. And too often we’re not finding it fast enough. Mohr says the problem of lung cancer detection is just calling out for robotics.

Currently, surgeons use a pre-operative image to search for cancer to remove, but the lungs are a moving target. So, to get to the cancer, surgeons deform the lungs on the way in and again on the way out because they are taking a different path. With surgical robotics, you can track the path in and use that same motion tracking data on the way out.
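In code terms the idea is simple: record the instrument’s path as data on the way in, then retrace it in reverse on the way out. A toy sketch, with hypothetical waypoints invented for illustration:

```python
# Record the instrument's path going in, then retrace it in reverse coming
# out, so the lungs are deformed along the same path both times.
# Waypoints are hypothetical (x, y, z) positions, purely illustrative.
path_in = [(0.0, 0.0, 0.0), (1.2, 0.1, 2.0), (2.4, 0.9, 4.1), (3.0, 1.0, 6.0)]

for waypoint in reversed(path_in):
    print("move instrument to", waypoint)
```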

3 types of AI for surgery

We’re also on the cusp of starting to incorporate various AI systems into surgical simulations and procedures. Mohr listed three types of AI she’s personally interested in incorporating into surgical procedures.

  • IBM Watson: Watson is an expert-system type of AI. Watson can store more medical information than any single human and can answer natural language queries from surgeons. Watson (or AI like it) will become an intelligent surgical assistant.
  • Machine learning algorithms: Unsupervised pattern-matching algorithms would aid doctors in recognizing when a sequence of symptoms results in a particular disease (see the sketch after this list). Mohr says, “After all, what is medicine but really good pattern matching?”
  • AlphaGo: During its training, AlphaGo played itself over and over again until new patterns emerged. Mohr imagines we can bring this type of AI into surgical simulations to observe how people learn and to test new learning strategies to answer the question of “how do we take a novice to an expert?”
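As a toy illustration of the unsupervised pattern matching Mohr describes, the sketch below clusters invented binary symptom profiles with scikit-learn; real clinical systems are vastly richer, so treat this as the shape of the idea only:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy data: each row is a patient, each column a binary symptom flag
# (say fever, cough, fatigue, rash, joint pain). Entirely fabricated.
patients = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0],
])

# Unsupervised pattern matching: group similar symptom profiles, no labels.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(patients)
print(kmeans.labels_)  # patients sharing a cluster share a symptom pattern
```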

In closing, Mohr sketched out this next wave of AI and robotics in surgery as a tight partnership between humans and machines, with one making up for the weaknesses of the other.

“I tend to think of robotics as a platform. It’s a platform which we have really advanced very far in terms of being able to reduce the invasiveness of the interventions,” she said. “These next phases are all going to be about integrating a lot of these new technologies onto this platform and being able to potentiate them.”

Want to keep up with coverage from Exponential Medicine? Get the latest insights here.


Image credit: Shutterstock 

A Simple Blood Test Helps Cure Cancer by Catching It Early

It’s an unfortunate reality, but most people have either lost a loved one to cancer or know someone who has.

Miroculus, a precision medicine startup, wants to create widespread access to affordable early-stage cancer detection. “Having lost loved ones to cancer greatly contributed in the decision to try and tackle this problem,” said CEO Alejandro Tocigl.

The company is building a 3D-printed device called Miriam that will be able to use a small blood sample to diagnose early-stage cancer.

The device uses digital microfluidics, a new technology that creates a “lab on a chip” that can be designed with a step-by-step protocol for transferring and analyzing tiny fluid samples.
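To give a feel for what a step-by-step protocol on such a chip might look like, here is a purely hypothetical sketch; the operations and parameters are invented for illustration and are not Miroculus’s actual instruction set:

```python
# A digital-microfluidics protocol is essentially a program: an ordered
# list of droplet operations executed on the chip's electrode grid.
# Every operation and parameter below is hypothetical.
protocol = [
    ("dispense", {"reagent": "blood_sample", "volume_nl": 50}),
    ("move",     {"from": (0, 0), "to": (3, 0)}),
    ("mix",      {"with": "lysis_buffer", "cycles": 5}),
    ("heat",     {"celsius": 65, "minutes": 10}),
    ("detect",   {"assay": "microRNA_signature"}),
]

for step, (op, params) in enumerate(protocol, 1):
    print(f"step {step}: {op} {params}")
```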

The company’s first disease focus is gastric cancer. In collaboration with the NIH, Miroculus recently ran a multi-center clinical study in three countries with 650 patients to identify a stomach cancer microRNA diagnostic signature.

The idea for Miroculus was born at Singularity University’s Global Solutions Program in 2013, and the company has since grown significantly. Now, they’re also working to accelerate research efforts for using microRNA for disease diagnoses, and they’ve developed an open-source artificial intelligence (AI) tool called Loom to achieve this.

“We hope to see Loom guiding researchers and clinicians around the globe through the microRNA knowledge database,” said Tocigl.

Using microRNA as a biomarker (indicator) for cancer and disease is showing increasing promise for early-stage disease detection, and recent research has demonstrated this in diagnosing ovarian cancer and lung cancer, among others.

We interviewed Tocigl to learn more about the company’s ambitions and how this exciting technology may advance in the next few years.


Mission: We believe everyone should have access to accurate, affordable, and minimally invasive diagnostic tools for the detection of cancer and other conditions from the earliest stages when they are still easy to treat. We are developing a simple blood test to detect disease at the molecular level on a decentralized, automated, and affordable platform.

Moonshot: Democratize access to early diagnosis.


How does the product work and what core technology is used? Is the 3D-printed Miriam device using the same technology as when it was first prototyped in 2014?

Miriam has advanced to a more sophisticated version that minimizes user intervention and automates the complete test from sample loading to test results reporting. The core technology is digital microfluidics and a proprietary microRNA detection method packaged in an affordable instrument with disposable cartridges.


When Miroculus began, the goal was to use a single blood sample to diagnose cancer. What needs to be overcome to get to this point? What timeframe are you now looking at for the kind of broad cancer detection you envision?

The goal of Miroculus remains the same: a simple blood test to detect disease at the molecular level. We are targeting gastric cancer (GC) as our first application.

GC is one of the most prevalent cancers in emerging economies, where affordable and efficient healthcare is in high demand. Based on a single blood test, symptomatic and asymptomatic patients will be triaged into a group referred for further diagnostics, including endoscopy, and a group with no GC indication that is not sent to endoscopy. Currently, less than 1% of all endoscopies detect cancer.

This would result in significant health cost savings and enable much faster and more affordable testing than the current system. We are currently establishing relationships with regulatory bodies and hospitals to be on the market in 2018. After our first test hits the market, we’ll expand into other types of cancers and other conditions. 
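As a sketch of that triage logic, here’s a minimal example; the score scale, threshold, and labels are hypothetical, not Miroculus’s published criteria:

```python
def triage(mirna_signature_score, threshold=0.5):
    """Route a patient based on a blood-test microRNA signature score.
    The 0-1 scale and 0.5 cutoff are hypothetical placeholders; a real
    cutoff would come from clinical validation."""
    if mirna_signature_score >= threshold:
        return "refer for further diagnostics, including endoscopy"
    return "no GC indication: routine monitoring"

print(triage(0.72))  # -> refer for further diagnostics, including endoscopy
print(triage(0.18))  # -> no GC indication: routine monitoring
```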

Miroculus’ 3D-printed cancer detection device Miriam is an ambitious undertaking alone. But you’ve also built Loom.bio—a microRNA research AI, which in some ways seems like an open-sourced IBM Watson for health. How central is Loom to the company’s mission? With AI growing in sophistication, how do you hope Loom will impact diagnosis five years from now?

Loom is an up-to-date snapshot of the microRNA literature landscape we built to expedite our own research. Loom is a service that not only lists but also weighs the relationship between microRNAs, genes, and diseases based on all scientific literature available in PubMed and PMC.

The Loom dataset is one of the inputs to our machine learning models to identify relevant microRNAs in a disease of interest, and we are making it accessible and open because we believe it may prove valuable in accelerating research efforts in the microRNA space.

We hope to see Loom guiding researchers and clinicians around the globe through the microRNA knowledge database.

Tools like Loom and the use of natural language processing and machine learning can catapult the diagnostics field to a new level where comprehensive, well-informed diagnoses can be made using multi-layered information and perhaps more than one type of biomarker to accurately classify a health condition.

What was the motivation behind open sourcing Miroculus’ code?

To provide life science researchers with a tool that can facilitate their studies on microRNAs as well as keep them up to date with publications relevant to their field. Doing so only enhances their ability to explore the potential of microRNAs and further contribute to the rapidly growing knowledge database in the field.

In 2014 your CTO, Jorge Soto, was quoted in a Smithsonian article saying that the Miriam device signals a critical “inflection point in microRNA research” and that using microRNA for cancer detection has a lot of scientific validation but still needs clinical validation. Two years later, where do we stand with clinical validation for using microRNA for early cancer detection?

There are already some microRNA-based diagnostic kits out there. Although the field is still young, we have seen increasing research and clinical validation studies showing very promising results. We expect to see in the near future at least five to ten new diagnostic tests based on microRNAs, not only for cancer detection but also for other conditions.

We have also contributed to the clinical validation by completing a multi-center clinical study with 650 samples in collaboration with the NIH, leading to the discovery of a microRNA signature for stomach cancer.

What is something you hold to be true that many people may disagree with?

There is a way to detect cancer early at the molecular level with a simple blood test.


Explore the future of medicine firsthand in San Diego, October 8-11. Meet world-class faculty and innovators from across the biomedical and technology spectrum, build meaningful connections with like-minded medical peers, and learn how to leverage converging technologies to launch yourself to the forefront of health and medicine. Apply now.

Image source: Shutterstock

The 21st Century Is a Wild Time to Be Alive


Last week in San Francisco, Singularity University hosted its first-ever Global Summit. In three days, we heard over 100 science and technology experts give talks in more categories than one human mind can fully process.

Whether you attended the conference and need help making sense of the information or missed it and want a taste of the action, I’ve collected Singularity Hub articles on some of the major themes to give you takeaways from the event.


Singularity University Global Summit is the culmination of the Exponential Conference Series and the definitive place to witness converging exponential technologies and understand how they’ll impact the world.


BIG PICTURE

Theme: We’re living in amazing times. As technology permeates almost every aspect of life, industries and institutions need to adapt how they think and operate. That’s easier said than done—the bigger the organization, the harder it is to shift.

“We’re Living During the Most Extraordinary Time Ever in History”
“Founder and Executive Chairman of the XPRIZE Foundation Peter Diamandis kicked off Singularity University’s first ever Global Summit. Diamandis says we’re living during the most extraordinary time ever in history. It’s a time where the power and passion of the human mind is truly being unleashed by the unprecedented power of exponential technologies.” –Alison E. Berman

Moonshots in education: Leila Toplic and Esther Wojcicki

Why We Need Moonshot Thinking in High School Education
“With about 20% of teens dropping out of high school and 5.6 million Americans between the ages of 16-24 (that’s 1 in 7) disconnected from both school and work, it isn’t too wild to say that we have an engagement crisis in the US…[Education experts Esther Wojcicki and Leila Toplic] both point to moonshot thinking as a way of addressing these challenges.” –Alison E. Berman


ARTIFICIAL INTELLIGENCE

Theme: Machine learning is at the peak of Gartner’s hype cycle. AI is entering an increasing number of industries beyond tech—law, medicine, finance, and manufacturing. We’re seeing heavy investment from big tech companies and also lots of experimentation in the startup ecosystem. The next round of AI has begun.

7 Key Factors Driving the Artificial Intelligence Revolution
“Under, behind and inside many of the apps we use every day, a revolution is underway. It’s a revolution that started decades ago but today is empowering companies to deliver better, smarter services with greater ease and on broader scales than ever before. It’s the artificial intelligence revolution, and it’s changing everything.” –David J. Hill

Steve Jurvetson and Peter Diamandis

Engineering Will Soon Be ‘More Parenting Than Programming’
“ ‘What’s a crazy idea you believe in that others don’t agree with?’ Peter Diamandis posed this question in an interview with Steve Jurvetson…Jurvetson’s answer was telling, ‘…I think the majority of engineering will not be done in a way where people understand the products of the creation. It’ll be more like an act of parenting than programming. It might take 10 to 15 years before that sentiment is widespread.’…Jurvetson is likely referring to the emerging field of generative design and its possible convergence with deep learning.” –Sveta McShane

  • Read about generative design here.
  • Learn more on AI here.

BIOTECHNOLOGY

Theme: Advances in genomics, genetic engineering, and synthetic biology will change how we feed the planet, have children, and integrate technology into our lives. Biotechnology is becoming a very powerful tool—are we ready for it?

Geoffrey von Maltzahn

Surprisingly, Plant Microbes May Be an Answer to Our Growing Food Needs
“Geoffrey von Maltzahn, a biological engineer and entrepreneur, argued that we are at an important junction in our history: our biological engineering abilities are maturing so fast that we now have the opportunity to create a healthy, thriving planet and fulfill humanity’s growing needs as well…He believes ‘this will be the century where we actually get to make cathedrals in biology.’” –Sveta McShane

Are We at the Edge of a Second Sexual Revolution?
“According to serial entrepreneur Martin Varsavsky, all our existing beliefs about procreation are about to be shattered again…The second sexual revolution will decouple procreation from sex, because sex will no longer be the best way to make babies.” –Vanessa Bates Ramirez

Hannes Sjoblad

Biohacking Will Let You Connect Your Body to Anything You Want
“Hannes Sjoblad informed the audience that we’re already living in the age of cyborgs…Sjoblad said that the cyborgs we see today don’t look like Hollywood prototypes; they’re regular people who have integrated technology into their bodies to improve or monitor some aspect of their health…Smart insulin monitoring systems, pacemakers, bionic eyes, and cochlear implants are all examples of biohacking, according to Sjoblad.” –Vanessa Bates Ramirez


VIRTUAL REALITY

Theme: Virtual reality and augmented reality are still in early stages in terms of design and mainstream adoption. But even now, the power of immersion is clear. VR and AR won’t be just new mediums of expression; they’ll closely reflect real life.

Image credit: Within

VR Pioneer Chris Milk: Virtual Reality Will Mirror Life Like Nothing Else Before (Interview)
“Chris Milk, founder and CEO of virtual reality company Within (formerly Vrse), has a vision for the future of stories, ‘I don’t think the future of VR looks like video games; I don’t think it looks like cinematic VR; I think it looks like stories from our real lives.’…‘Imagine being able to live stories that are as rich and formulated and fantastic as the movies you see. That’s what we’re talking about. We don’t have quite the technology to do it, but you can see how it’s possible.’” –Jason Ganz

  • Read more on virtual reality here.

HEALTH AND REGENERATIVE MEDICINE

Theme: Biotech combined with progress in our understanding of biology may help diagnose and fight disease earlier and keep our bodies healthier longer.

Peter Diamandis

Peter Diamandis: We’ll Radically Extend Our Lives With New Technologies
“Peter Diamandis, cofounder and executive chairman of Singularity University and founder and executive chairman of XPRIZE, believes radically extended life is by no means impossible…Now, modern biology has deepened our understanding of the aging process, and biotechnology is beginning to apply these learnings to spot disease earlier and even regenerate the body. Diamandis highlighted two key areas that are making progress today.” –Jason Dorrier

  • Read more on human longevity here.

TRANSPORTATION

Theme: Self-driving cars are clocking millions of miles on the road, and many major car companies have a stake in the technology. Cars will soon be reimagined to focus on mobility and efficiency, and companies offering carpooling services will continue to grow, as will the power and influence of the sharing economy.

Brad Templeton

How Self-Driving Cars Will Change It All—From Energy to Real Estate
“Brad Templeton informed attendees, ‘Self-driving cars are going to change the world.’…His presentation gave details on the industries and areas of our lives that will be disrupted by the advent of self-driving cars…Templeton concluded with his vision of the car of the future: it will be small, electric, have hundreds of parts rather than thousands, few controls, no dashboard, and limited vehicle-to-vehicle communication.” –Vanessa Bates Ramirez

  • Read more on the future of transportation here.

SPACE

Theme: From the birth of off-Earth manufacturing to probes traveling the solar system and telescopes finding exoplanets—technology is driving space exploration forward. Looking ahead, we wonder when we’ll establish a more permanent presence in space and if other life like us has already done the same thing.

Jill Tarter

Finding Intelligent Alien Life Would Offer Hope For Our Own Future
“Are we alone in the universe? We don’t know. But as Carl Sagan said, if we are, it seems like an awful waste of space…The SETI Institute is one way we might find out. By sifting through the electromagnetic chatter of the cosmos, we may find something a little too structured, something only another technological civilization could have produced. This is the world-shaking signal SETI’s after.” –Jason Dorrier

Are There Other Intelligent Civilizations Out There? Two Views on the Fermi Paradox
“Scientists have now discovered a few thousand planets orbiting other stars and, based on these observations, believe there may be as many as 8.8 billion potentially habitable Earth-sized planets in the Milky Way alone. Include stars smaller than the sun and that number increases to 40 billion potentially habitable Earth-like planets…when I stare up at the sky, I still wonder if we’re alone in the galaxy. Could there be another technologically advanced civilization out there?” –Alison E. Berman

  • Read more on space discovery here.

Biohacking Will Let You Connect Your Body to Anything You Want

How many cyborgs did you see during your morning commute today? I would guess at least five. Did they make you nervous? Probably not; you likely didn’t even realize they were there.

In a presentation titled “Biohacking and the Connected Body” at Singularity University Global Summit, Hannes Sjoblad informed the audience that we’re already living in the age of cyborgs. Sjoblad is co-founder of the Sweden-based biohacker network Bionyfiken, a chartered non-profit that unites DIY-biologists, hackers, makers, body modification artists and health and performance devotees to explore human-machine integration.

Sjoblad said that the cyborgs we see today don’t look like Hollywood prototypes; they’re regular people who have integrated technology into their bodies to improve or monitor some aspect of their health. Sjoblad defined biohacking as applying hacker ethic to biological systems. Some biohackers experiment with their biology with the goal of taking the human body’s experience beyond what nature intended.

Smart insulin monitoring systems, pacemakers, bionic eyes, and cochlear implants are all examples of biohacking, according to Sjoblad. He told the audience, “We live in a time where, thanks to technology, we can make the deaf hear, the blind see, and the lame walk.” He is convinced that while biohacking could conceivably end up having Brave New World-like dystopian consequences, it can also be leveraged to improve and enhance our quality of life in multiple ways.

The field where biohacking can make the most positive impact is health. In addition to pacemakers and insulin monitors, several new technologies are being developed with the goal of improving our health and simplifying access to information about our bodies.

Ingestibles are a type of smart pill that use wireless technology to monitor internal reactions to medications, helping doctors determine optimum dosage levels and tailor treatments to different people. Your body doesn’t absorb or process medication exactly as your neighbor’s does, so shouldn’t you each have a treatment that works best with your unique system? Colonoscopies and endoscopies could one day be replaced by miniature pill-shaped video cameras that would collect and transmit images as they travel through the digestive tract.

Security is another area where biohacking could be beneficial. One example Sjoblad gave was the personalization of weapons: an invader in your house couldn’t fire your gun because it would be matched to your fingerprint or synced with your body so that it responds only to you.

Biohacking can also simplify everyday tasks. In an impressive example of walking the walk rather than just talking the talk, Sjoblad had an NFC chip implanted in his hand. The chip contains data from everything he used to have to carry around in his pockets: credit and bank card information, key cards to enter his office building and gym, business cards, and frequent shopper loyalty cards. When he’s in line for a morning coffee or rushing to get to the office on time, he doesn’t have to root around in his pockets or bag to find the right card or key; he just waves his hand in front of a sensor and he’s good to go.

Evolved from radio frequency identification (RFID)—an old and widely deployed technology—NFC chips are passive: they’re activated by the field of a nearby reader chip, and small amounts of data can be transferred back and forth. No battery or network connection is necessary. Sjoblad sees his NFC implant as a personal key to the Internet of Things, a simple way for him to talk to the smart, connected devices around him.
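
To make the “small amounts of data” concrete, here is a minimal sketch of decoding an NDEF text record, the standard format NFC tags commonly carry. The bytes are hand-built for illustration; a real reader would deliver them over the air.

```python
# Decoding a short NDEF text record (the format NFC tags commonly carry).
# Illustrative sketch: the bytes are hand-built; a reader would supply them.
record = bytes([
    0xD1,  # header flags: message begin + end, short record, well-known type
    0x01,  # length of the type field (1 byte)
    0x12,  # length of the payload (18 bytes)
    0x54,  # the type itself: "T" for a text record
]) + bytes([0x02]) + b"en" + b"Hello, implant!"  # status byte + language + text

type_len, payload_len = record[1], record[2]
payload = record[3 + type_len : 3 + type_len + payload_len]
lang_len = payload[0] & 0x3F                    # low bits of the status byte
print(payload[1 + lang_len :].decode("utf-8"))  # -> Hello, implant!
```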

Sjoblad isn’t the only person who feels a need for connection.

When British science writer Frank Swain realized he was going to go deaf, he decided to hack his hearing to be able to hear Wi-Fi. Swain developed software that tunes into wireless communication fields and uses a built-in Wi-Fi sensor to pick up router names, encryption modes, and distance from the device. This data is translated into an audio stream where distant signals click or pop, and strong signals sound their network ID in a looped melody. Swain hears it all through an upgraded hearing aid.
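
As a rough illustration of the idea (my own toy mapping, not Swain’s actual software), each network’s name and signal strength could be turned into audio parameters like pitch and click rate:

```python
# Toy Wi-Fi sonification: name -> repeatable pitch, signal strength -> click rate.
# (Hypothetical mapping for illustration; not Swain's actual software.)
def network_to_tone(ssid, rssi_dbm):
    pitch_hz = 220 + sum(ssid.encode()) % 440    # same name always sounds the same
    clicks_per_sec = max(1, 20 + rssi_dbm // 4)  # stronger signal clicks faster
    return pitch_hz, clicks_per_sec

for ssid, rssi in [("CoffeeShopWiFi", -40), ("FaintNeighborAP", -85)]:
    print(ssid, network_to_tone(ssid, rssi))
```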

Global datastreams can also become sensory experiences. Spanish artist Moon Ribas developed and implanted a chip in her elbow that is connected to the global monitoring system for seismographic sensors; each time there’s an earthquake, she feels it through vibrations in her arm.

You can feel connected to our planet, too: North Sense makes a “standalone artificial sensory organ” that connects to your body and vibrates whenever you’re facing north. It’s a built-in compass; you’ll never get lost again.

Biohacking applications are likely to proliferate in the coming years, some of them more useful than others. But there are serious ethical questions that can’t be ignored during development and use of this technology. To what extent is it wise to tamper with nature, and who gets to decide?

Most of us are probably OK with waiting in line an extra 10 minutes or occasionally having to pull up a maps app on our phones if it means we don’t need to implant computer chips into our forearms. If it’s frightening to think of criminals stealing our wallets, imagine them cutting out a chunk of our skin to gain instant access to and control over our personal data. The physical invasiveness and the potential for something to go wrong seem to far outweigh the benefits the average person could derive from this technology.

But that may not always be the case. It’s worth noting that the miniaturization of technology continues at a quick rate, and the smaller things get, the less invasive (and hopefully more useful) they’ll be. Even today, there are people already sensibly benefiting from biohacking. If you look closely enough, you’ll spot at least a couple of cyborgs on your commute tomorrow morning.


Image credit: Shutterstock

5 Big Ideas From Singularity University’s 2016 Global Solutions Program

Something big recently happened at Singularity University.

79 participants from 49 different countries graduated from Singularity University’s 10-week flagship Global Solutions Program (GSP).

Over 30 team projects were launched during GSP, each focused on using exponential technology to address a massive global problem, such as water scarcity, malnutrition, and climate change.

Each year at the GSP closing ceremony, five leading teams present their projects.

It’s an exciting moment. The teams have taken on SU’s 10^9 challenge, meaning they’re aiming to launch companies that will positively impact the lives of a billion people in 10 years.

Singularity University (SU) co-founder Ray Kurzweil spoke at the event, highlighting a few key themes of GSP and SU, one of which was the importance of optimism: “You have to be an optimist to be an entrepreneur because if you knew all of the problems you’d run into, you’d never start the business.”

Ray Kurzweil

Kurzweil then discussed the importance of building a community and ecosystem that understands the power of exponential technology and is filled with the passion needed to take an idea from concept to reality.

This is just the beginning for many of these teams, which is a good thing because we have high hopes for their futures.

Here’s a snapshot of the five team projects presented. 

1) Nutrigene: Microbial engineering to solve micro-nutrition deficiency

Three billion people globally suffer from malnutrition, and one billion people who live near the equator have a vitamin B deficiency—a condition that leads to serious health complications.

Nutrigene is focusing on this growing problem of micro-nutrition deficiency by creating a portable bioreactor, which allows people to harvest their own micronutrients on demand in their homes.

Their bioreactor will be 3D printed and will use microbial engineering to homebrew the nutritional supplements. The team believes their product will be significantly more effective and affordable than current options.

The Nutrigene team: Benito Juarez, Constanza Gomez Mont, Min FitzGerald, Van Duesterberg


2) Afriji: Affordable access to refrigeration for all

More than a billion people around the world live without proper refrigeration. For many of these individuals, there isn’t an effective and low-cost refrigeration alternative. One dire impact of this is that over a million children die each year due to spoiled vaccines.

Afriji is designing a low-cost refrigeration alternative by creating a core refrigeration module using thermoelectric technology.

The team is working to enhance the efficiency of thermoelectric technology—a technology that is affordable, low-maintenance (no moving parts), and compatible with many off-grid energy sources.

According to Afriji, their module “can be incorporated into any kind of box [refrigeration structure], whether a household fridge or vaccine storage box.”

The Afriji team: Micah Melnyk, Theodor Lundberg, Marvin Ngcongo, Sven Lidstroem, Danny Wagemans, Andrew Skotzko


3) Basepaws: Pet genetics for improving human drug targeting

There’s a huge need in the medical field for more open data on genetics to encourage medical breakthroughs. As it turns out, household pets carry up to 90% of the same genes as their owners. Now, this connection between human and animal genetics is being used to improve human medicine.

The Basepaws team is working to improve drug targeting by analyzing related animal gene modules. The team has already begun extracting DNA from pet hair follicles to do so.

For their first product, Basepaws wants to build a genetic testing kit for household cats. The kit will be paired with a mobile app and a wearable for the pet that sends critical data to an associated vet.

The team says, “This cross-species database of genotypic and phenotypic data will be the first of its kind to positively affect the health of members of the family.”

The Basepaws team: Shan Zhao, Anya Skaya, Olof Huldt, Audrey Chaing, Plinio Guzman


4) ReBeam: Energy distribution by beaming solar energy through space

Our planet soaks up enough energy from the sun each day to power the world. But not all that energy makes it to the surface due to cloud cover, and of course, we all experience hours of darkness every night. While some believe the answer is better energy storage and more ground-based infrastructure to move energy around, ReBeam is suggesting a space-based solution.

The team is aiming to place microwave reflectors in orbit. The reflectors would take in energy beamed up from solar farms soaking in sunlight and send it back down to solar farms in the dark.

During the program, the team conducted a case study of energy transmissions from the Sahara Desert to London to demonstrate the cost decrease that their method would provide. They hope to have the space-based section of the system deployed by a SpaceX Falcon Heavy rocket launch.

The ReBeam team: Gadhadar Reddy, Alexandre Paris, Jordi Bas Espargaro


5) udexter: Artificial intelligence to solve technological unemployment

The team behind udexter is looking at a future where technological unemployment may become a serious challenge. They’re building an AI—Dexter—to help individuals from multiple career backgrounds find meaningful work and stay engaged in the ever-shifting job landscape.

Their software is currently in its very early stages. It is aggregating user data while helping customers learn about their career purpose and their current occupation’s resiliency to technological unemployment. The software also provides psychometric assessments.

In the long term, the Dexter AI will help people identify new career paths that are more integrated with oncoming technological changes and also identify resources to help people make strategic career transitions.

The udexter team: Jenny Appel, Muriel Clauson, Laurent Boinot, Pablo Orduña


If you’re curious to learn about the teams from the GSP 2015 class, check out these five startups to watch from the 2015 Global Solutions Program.

Image credit: Nick Otto Photography

Banner Image Credit: Connect world / Shutterstock.com

Singularity University Comes Home: Global Summit Kicks off Today in San Francisco

Singularity University’s inaugural Global Summit is kicking off today in tech capital San Francisco and running through August 30th.

The Singularity Hub team will be on the ground, covering some of the best speakers, and bringing you live Facebook interviews to give you a taste of the magic too.

SU’s three exponential summits all have a unique industry focus—finance, medicine, and manufacturing.

But the focus of Global Summit is to go broad, showcase trends in emerging technologies, and explore how they’re converging within several industries.

Experts in deep learning such as Jeremy Howard and AI thought-leader Neil Jacobstein will have a fireside chat on the future of machine learning and AI. Authorities in education Esther Wojcicki and Leila Toplic will explore new moonshots in the education sector.

Alex Filippenko, a well-known professor of astronomy and physical sciences at University of California, Berkeley, will enlighten us on the frontiers of space exploration and share what we need to know about exoplanets.

Additional speakers at Global Summit will dive into subjects including:

  • New business opportunities created by advances in robotics and AI.
  • Ways AR/VR can enhance creativity and innovation.
  • How nanotechnology will improve cancer treatment therapies.
  • How to use minimal resources to start a movement.
  • Why the maker movement matters.

Recent updates on core emerging technologies:

Below is a series of Singularity Hub articles covering some recent breakthroughs in core technologies—AI, augmented reality, nanotech, biotech, transportation, and energy—which will be central to the conversation at Global Summit.

Be sure to join the conversation in real-time on Twitter with @SingularityHub and @SU_GlobalSummit or using the hashtag #GSummit.

Singularity University Global Summit is the culmination of the Exponential Conference Series and the definitive place to witness converging exponential technologies and understand how they’ll impact the world.


ARTIFICIAL INTELLIGENCE
IBM’s New Artificial Neurons a Big Step Toward Powerful Brain-Like Computers
“Thanks to a sleek new computer chip developed by IBM, we are one step closer to making computers work like the brain. The neuromorphic chip is made from a phase-change material commonly found in rewritable optical discs (confused? more on this later). Because of this secret sauce, the chip’s components behave strikingly like biological neurons: they can scale down to nanometer size and perform complicated computations rapidly with little energy.”
–Shelly Fan

AUGMENTED REALITY
Pokemon Go Is a Glimpse of Our Augmented Reality Future
“Pokémon Go is already a huge phenomenon and is well on its way to overtaking Twitter in terms of daily active users. By now, you’ve probably seen almost as many articles about Pokémon Go as I’ve seen Pidgeys in the game (quite a lot)…In a time when it sometimes feels like our technology is pulling us further and further apart, I had incredible, authentic, and most importantly, human interactions thanks to this game.”
–Jason Ganz

NANOTECHNOLOGY
How Nanotech Will Lead to a Better Future for Us All
“In the last decade, nanotechnology has advanced and is finding practical applications. Some teams are developing nanoscale patterns on medical implants that can stimulate bone cell growth and positive gene expression. Others are working to make guided nanoparticles that detect (and even destroy) cancer cells.”
–Alison E. Berman

BIOTECH
Chisels to Genes: How We’ll Soon Grow What We Used to Build
“All around the natural world, we witness life forms which, driven by the programming of their DNA, produce massive, complex things from tiny beginnings. As George Church suggested, ‘A minuscule fertilized whale egg produces an object as big as a house. So maybe one day we can program an organism, or a batch of them, to produce not the whale but the actual house.’ Neri Oxman of MIT, also imagines a world where instead of building, we’ll be able to grow more.”
–Sveta Mcshane

TRANSPORTATION
Carpool Apps Are on the Rise—Here’s How to Make Them Go Big
“The cell phone ride hail apps like Uber and Lyft are now reporting great success with actual ride-sharing, under the names UberPool, LyftLines and Lyft Carpool. In addition, a whole new raft of apps to enable semi-planned and planned carpooling are out making changes.”
–Brad Templeton

ENERGY
Meet the Reactors Accelerating Us Toward Fusion Energy
“Traditional nuclear reactors split atoms to create energy. These fission reactors run on processed uranium and leave behind radioactive waste. Fusion, on the other hand, is the same process that keeps the sun shining. Fusion reactors would run on abundant hydrogen isotopes and, in theory, create significantly more energy than fission with comparatively little waste.”
–Marc Prosser

Image credit: Shutterstock

The Future of Healthcare Is Arriving—8 Exciting Areas to Watch

As faculty chair for Medicine and Neuroscience at Singularity University and curator of our annual Exponential Medicine conference (apply to join us this Oct 8–11th), I cross paths with many technologies which have potential healthcare applications. Some are still nascent and not yet close to clinical use (nanobots in our blood, 3D printed organs from your own stem cells), but many others are gaining traction and appearing in our homes, our pockets, and entering clinical settings faster than many might imagine.

There remain significant regulatory, reimbursement, data privacy and adoption challenges (to name a few), but below are eight examples of fast moving, often convergent technologies which are already beginning to be applied effectively to health, prevention, diagnosis, therapy, clinical trials and beyond.


1) The Connected, Healthy, Interactive Home

Pharma, device and consumer health companies are racing to build apps on Amazon Echo and soon-to-arrive platforms like Google Home. I’ve had an Amazon Echo for three months, and it has quickly become second nature to speak to it and call up a favorite song, keep up with the news, check the weather, order a product, add to my calendar or even summon an Uber.

Soon, Echo-like devices will become major healthcare interfaces — talking to your medical Internet of Things devices (e.g., wearables, scale, blood pressure cuff, and glucometer) and perhaps, based on your genomics, diet, activity and blood sugar, suggesting the appropriate meal to have delivered or prepared. “Alexa, call 911” may become a routine way of calling for help.

Here is an early example of an Amazon Echo programmed to run a daily wellness check using the Sense.ly platform. The Kids MD app, built by Boston Children’s Hospital, offers simple advice to parents about fever and medication. And looking beyond Alexa, even more interactive social robots like Mabu, the personal healthcare companion from Catalia Health, and consumer-focused social robots like Jibo — both developed by MIT Media Lab alums and faculty — are coming to market.

As more elements in our homes have sensors measuring the health of our bodies and environment — from WiFi tracking of vital signs to incorporating data ranging from weather and pollen counts to neighborhood influenza outbreaks — the connected home and health-related interaction will become commonplace.

Download the summary and key takeaways from Exponential Medicine 2015.

2) From Medical Tricorders to Connected Home Medical Kits

The blending of home-based diagnostic platforms with medical care at home is arriving. The Tricorder XPRIZE competition is well underway, with several teams set to compete in the final stages. Leading contenders include CloudDx and Scanadu; the latter, a company started at our first Exponential Medicine program, has successfully leveraged crowdfunding to enable its clinical trials.

Gale by 19Labs is a next-generation “first aid kit meets home health center” (see the video below for a demo), exemplifying how home diagnostics paired with menu-driven (and potentially AI-driven) assistance and optional telemedicine connectivity can broaden access to home-based diagnosis, triage, and management of everything from minor bumps and scrapes to more complex medical conditions.

3) The Healthcare Chatbot

Interactive and engaging, from coaching on diet and nutrition to reminding you to take your medications or offering psychological support and follow up — the chatbots are on their way.

Lark is a terrific example of a fun and witty AI sharing relevant personalized data tracking and insights about diet and exercise that may help manage chronic disease.

Three winners of our 2015 Exponential Medicine MEDy Awards (Medical Entrepreneurship and Disruption) exemplify what is possible today. Sensely has developed an AI-enhanced virtual assistant “who” acts as a medical companion and monitor. It has demonstrated an ability to reduce hospital readmissions for heart failure patients. For mental health, X2AI developed “Tess,” a conversational psychological AI. Software-based “emotional analytics” from Beyond Verbal can parse your voice and add mental health context.

Watch these startups present at Exponential Medicine 2015 here, and apply to compete and demo your startup at Exponential Medicine this October.

As next-generation assistants like Viv (created by Siri’s founders) continue to advance, these intelligent interfaces will further enable highly personalized and complex services which cross into health and medicine.

4) VR in the OR to AR on the Streets

VR and AR are going mainstream.

On the AR front, Google Glass (despite reports) is not dead. Glass is being leveraged by companies like Augmedix for physician “scribing,” enabling remote note-taking that saves physicians time. Pediatric neuropsychiatry platforms like Brain Power are using Glass to help autistic children learn and gamify emotional cues.

Microsoft’s HoloLens team co-developed interactive anatomy and physiology content with Case Western Reserve’s medical school for educating medical students. Simple versions of AR have been put on interactive T-shirts (check this out). Next-generation AR headsets like those from Meta will have a slew of applications for clinicians and patients. Of course, gaming and AR have come to the outdoors with Pokémon Go, a game showing both physical and mental health benefits.

With the launch of Oculus Rift and HTC Vive to consumers this year, a plethora of medical education programs, like VR Anatomy, are now in use. Others are leveraging VR experiences to reduce patients’ pain (and the need for opiates).

I was in London this April and in the operating room with surgeon Shafi Ahmed and his Medical Realities team as they livestreamed the world’s first VR surgery (try it out on their website). Over 4,000 viewers from around the world watched the surgical case in real-time VR. See Dr. Shafi Ahmed and Dr. Rafael Grossman’s talk at Exponential Medicine 2015 for more on VR/AR in healthcare.

5) From Quantified Self to Quantified Health

As wearables and connected health devices proliferate we can easily be overwhelmed with data and develop a “so-what” attitude unless that data leads to manageable and actionable insights. No individual, let alone clinician, wants to log in to multiple apps or interpret raw data streams. Integrating digital data with clinicians’ workflow will be critical for adoption and the realization of the promise of digital health.

Google Fit is now adding health-data exchanges, and already, Apple HealthKit is connected to over 30 healthcare systems. Data can flow from my iPhone to my electronic medical record at Stanford. I did a single authorization on my phone and a week later received a note from my primary care physician noting that my shared data looked good (nice to know he is watching).

Increasingly, software will parse the data from a variety of data sources.

Startups like Sentrian are making sense of remote patient data and preventing unnecessary admissions. Health systems such as the United Kingdom’s NHS are beginning to prescribe connected health technologies in trials of digital health coaching for chronic conditions such as diabetes. As demonstrated by Ochsner Health System, use of smartwatches for notifications and smartphones connected to blood pressure cuffs significantly improved outcomes in the treatment of high blood pressure. This work is summarized in the Exponential Medicine 2015 talk by cardiologist Dr. Robert Bober.

6) Uber for Health Is Here

Blended with telemedicine, combined services can respond as needed for a true house call that leverages a combination of home diagnostics with hands-on care. Startups like Pager and Heal have raised millions and are providing on-demand physician house calls, some of which are covered by major payers. ZipDrug is doing the same for pharmacy delivery.

Concerns over this Uberization model persist, but as consumer behavior and expectations for on-demand services solidify and payers expand coverage, these services will likely grow. In the San Francisco area, first-of-its-kind startup Honor is rewiring in-home care for seniors by matching caregivers with those who need them. And indeed, Uber is partnering with hospitals to get patients to checkups.

7) Cancer Moonshots

Of course, technology is only one element in addressing unmet needs across cancer prevention, screening, and therapy. Further progress (and development of potential cures) requires alignment of incentives, policy, and regulations. This is exemplified by the White House Cancer Moonshot initiative led by Vice President Biden, which I attended in late June (see some observations and takeaways from the summit). New policies speeding up FDA approvals, patent protection, and data sharing have been implemented. There is even a new Cancer XPRIZE under development (which I’m involved in developing) that will help align incentives and speed up novel collaborations and approaches to decrease preventable cancer deaths.

8) The ‘Omes Come Home and Are Being Crowdsourced

From genome to microbiome and metabolome, it is becoming exponentially more common to send in a sample from home and obtain personal ‘omic information. 23andMe began the movement toward consumer-empowered genomics and is now leveraging data from its 23andWe data donors to enable faster, novel discoveries. The company published a 450,000-customer study this month uncovering a major trove of genetic clues to the causes of depression. And newer players like Veritas Genetics now offer $999 whole-genome sequencing and targeted genetic cancer risk testing.

We are also rapidly uncovering the importance of the microbiome in health and disease, and uBiome and Second Genome now offer home kits that enable personal microbiome sequencing and the ability to anonymously become a “data donor,” sharing the data to improve its utility.

As “Systems Medicine” evolves — integrating exponentially increasing data from genomics, microbiome, imaging, digital health, environmental information, and more — we are now seeing, in the last year, the launch of services by Arivale and Human Longevity’s Health Nucleus, where early adopters can have a variety of data collected, tracked, and analyzed, with the goal of generating actionable information for personalized health, prevention, and therapy.


The eight areas mentioned above are just a taste of the technologies and platforms rapidly entering healthcare. Some still await proof of value, aligned incentives, and further connecting of the dots between various gadgets, data, apps, and medical systems. Given the many challenges in healthcare around the planet, new thinking, creative technology applications, and talented people (often from outside the traditional healthcare sphere) are needed to bring these solutions to full realization.


Looking to address challenges, understand the cutting edge, and contribute to the future of health and medicine? Join Singularity University this October 8–11th for Exponential Medicine 2016. Over 60 world-class faculty and 50 selected startups in our Innovation Lab will join 500 selected participants from across the healthcare and technology spectrum for four days of convergence, talks, workshops, an unconference, beachside bonding, and more.

Related Links

*Takeaways from Exponential Medicine 2015 (40 pages)

*Exponential Medicine Talks (Watch mainstage presentations from prior programs)

*Singularity Hub Coverage from Exponential Medicine 2015

IBM’s New Artificial Neurons a Big Step Toward Powerful Brain-Like Computers

Thanks to a sleek new computer chip developed by IBM, we are one step closer to making computers work like the brain.

The neuromorphic chip is made from a phase-change material commonly found in rewritable optical discs (confused? more on this later). Because of this secret sauce, the chip’s components behave strikingly like biological neurons: they can scale down to nanometer size and perform complicated computations rapidly with little energy.

Image Credit: IBM Research

What makes them especially amazing is how they “fire.” They integrate previous input history to determine whether or not to activate. They also show a characteristic trait of biological neurons called stochasticity — that is, when given a similar input, the chip always produces a slightly different, unpredictable result. Stochasticity is the basis of population coding, a type of highly efficient computation that relies on groups of neurons working together. This neuronal quirk was previously tough to mimic using artificial materials.

The chip builds on previous brain-like computing components such as memristors, Dr. C. David Wright of the University of Exeter told Singularity Hub. It’s a huge leap forward for “building dense, large-scale, interconnected synapses to provide fast neuromorphic processors,” he says.

Brain-like computation

Scientists have long dreamed of making computers that mimic the massive parallel computational ability of the brain’s neuronal networks. That’s a hefty goal.

“Brains fuse together processing and memory tasks…using surprisingly little energy and occupy a remarkably small volume,” explains Wright. The human brain consumes about 10 to 20 watts of power and occupies less than 2 liters of space, he says. Traditional silicon transistor-based circuits, with tough-to-shrink capacitors, are simply too clunky to cram into brain-like circuits. They also process information serially in strings of binary digits, a far cry from biological neural computation.

So how do neurons work?

In a nutshell: a neuron receives input through long cables called dendrites. This input changes the electrical potential across its cell membrane. The neuron keeps track of various input signals that occur over a small time window and integrates them. When the aggregated signal reaches a certain threshold, the neuron bursts into activity and generates a spike. The spike is then passed down the output cable — the axon — and transmitted to downstream neurons through small mushroom-shaped blobs called synapses.

This “integrate-and-fire” principle heavily relies on the biophysics of the neuronal membrane. Previous neuromorphic chips mostly focused on mimicking information processing at the synapse, paying little attention to how neurons actually fire. And that’s where IBM’s new chip differs: it eschews the synapse, opting instead to simulate the generation of spikes in a neuron.

“In a complete system, of course, we need both neurons and synapses,” says Wright, so being able to mimic both in hardware is huge.

The phase-change chip

To build the chip, the team enlisted a phase-change material to play the part of a neuronal membrane. The material, a chalcogenide alloy, exists in two physical phases — a glassy, almost liquid-like amorphous state and a solid, crystalline state — that rapidly switch when the material is zapped with electricity.

Each phase has its own electrical properties, making it easy to determine what state the material is in — an ideal situation for storing binary data. Here, the amorphous phase insulates, whereas the crystalline state conducts.

The artificial neuron begins in the amorphous, insulating state. When given multiple pulses of electricity (“inputs”), it progressively crystallizes until it reaches a certain threshold. At that point, the material becomes solid enough to conduct electricity, which causes it to fire an output spike. If this sounds familiar, you’re right: that’s exactly how integrate-and-fire works in biological neurons. After a brief period of rest, the chip shifts back to the amorphous state, ready for another cycle.
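
A minimal simulation of that cycle (my own sketch with invented parameters, not IBM’s model) shows both the integrate-and-fire behavior and the stochasticity described earlier:

```python
import random

class PhaseChangeNeuron:
    """Toy stochastic integrate-and-fire neuron (invented parameters)."""
    def __init__(self, threshold=1.0, noise=0.05):
        self.threshold = threshold  # crystallization level that triggers a spike
        self.noise = noise          # cycle-to-cycle variability of the material
        self.state = 0.0            # 0.0 = amorphous, ~1.0 = crystalline

    def pulse(self, strength=0.1):
        """Apply one input pulse; return True if the neuron fires."""
        # Each pulse crystallizes the material a little; the exact amount
        # varies randomly, which is the source of the chip's stochasticity.
        self.state += strength + random.gauss(0, self.noise)
        if self.state >= self.threshold:
            self.state = 0.0        # "melt" back to amorphous: the reset
            return True
        return False

neuron = PhaseChangeNeuron()
spikes = [t for t in range(50) if neuron.pulse()]
print("fired at pulses:", spikes)  # intervals jitter from run to run
```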

A chip with large arrays of phase-change devices that store the state of artificial neuronal populations in their atomic configuration. In the photograph, individual devices are accessed by means of an array of probes to allow for precise characterization, modeling and interrogation. Image Credit: IBM Research

What’s more, due to the manufacturing process and variable internal atomic states, the chip is inherently stochastic. That’s a big deal.

“Stochasticity is an essential ingredient for constructing ‘neuronal populations’ and our brain naturally uses these to represent signals and cognitive states,” says lead author Dr. Tomas Tuma.

So what can the new chip do?

To test the power of their phase-change neurons, the team engineered a mushroom-shaped gadget consisting of a 100-nanometer-thick layer of chalcogenide alloy sandwiched between two electrodes. That counts as a single neuron. In one demonstration, the team generated 1,000 streams of binary data, 100 of which were statistically correlated — that is, some streams showed a weakly similar pattern to others (note this is a “toy” dataset without any real-life meaning).

Fishing out correlations like these is generally tough to do since it requires a computer to simultaneously look at multiple streams and compare the information in real-time. However, a single artificial neuron managed to pick out every correlation using very little power.
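
A loose, scaled-down analogue of that demo (my own toy, with invented parameters) is a single integrate-and-fire neuron with a simple Hebbian weight update; after training, the weights on the correlated streams end up larger than the rest:

```python
import random

N, CORRELATED, STEPS = 50, 5, 5000  # 50 streams, the first 5 correlated
weights = [0.5] * N

for _ in range(STEPS):
    common = random.random() < 0.5  # hidden driver behind the correlated streams
    inputs = [common if i < CORRELATED else random.random() < 0.5 for i in range(N)]
    drive = sum(w for w, x in zip(weights, inputs) if x)
    if drive > 0.55 * sum(weights):      # crude firing threshold
        for i, x in enumerate(inputs):   # Hebbian: reward co-active inputs
            weights[i] = min(1.0, weights[i] + 0.01) if x else max(0.0, weights[i] - 0.01)

print("mean weight, correlated:  ", sum(weights[:CORRELATED]) / CORRELATED)
print("mean weight, uncorrelated:", sum(weights[CORRELATED:]) / (N - CORRELATED))
```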

That’s a computational task of surprising complexity, notes Wright.

“When applied to social media and search engine data, this leads to some remarkable possibilities, such as predicting the spread of infectious disease, trends in consumer spending and even the future state of the stock market,” he writes in a comment piece published alongside the study in Nature Nanotechnology.

To check out the scalability of their neurons, the IBM team interconnected 100 phase-change devices in a 10-by-10 array and strung five arrays together to form a population of 500 artificial neurons. The team then fed this artificial network a stream of broadband signals, which contained rates higher than the firing rates of individual neurons.

Here’s the cool part. Because each neuron is stochastic, their combined activity — the so-called population code — was sufficient to adequately represent the signals without additional costly operations. In other words, the network functioned far above the computational limits of its single components. And it did so using just a spark of power: on average, the network only required about 120 microwatts.

“This is important for building dense, scalable neuromorphic systems for memory applications and computing,” explains Tuma. For example, they could power machines with co-located memory and processing units, thus shattering the bottleneck of traditional Von Neumann computers, in which memory and processing are physically separated.

Wright agrees that the chip has significant potential, but also warns of its issues. The limited number of times that these devices can be switched before failure could significantly limit processor lifetimes, he writes. Shifting the device back to the amorphous state after an activation cycle is also energy consuming, which could become a concern once these artificial neuron arrays get larger.

That said, Wright is incredibly impressed with the chip.

“[Since] phase-change and memristor devices can work up to a million times faster than the processing speeds of the human brain, we can imagine some very powerful computing systems,” he says.

Now comes the hard part: writing software that takes maximal advantage of the chip’s computational prowess.


Banner Image Credit: IBM Research

We Might Live in a Virtual Universe — But It Doesn’t Really Matter

You might have heard the news: Our world could be a clever computer simulation that creates the impression of living in a real world. Elon Musk brought up this topic a few weeks ago. Truth be told — he is probably right. However, there is a very important point missing in this whole “real vs. fake” discussion: It actually makes no difference. But first…why might our world be a simulation?

Musk is nowhere near the first one to suggest our world might be fake. The idea reaches back to the ancient Greeks, though what we call a computer simulation, the ancient Greeks called a dream.

The first thing to realize is this: Our perception of reality is already separate from reality itself.

To paraphrase Morpheus from the movie The Matrix, reality is simply an electrical impulse being interpreted by your brain. We experience the world indirectly and imperfectly. If we could see the world as it is, there would be no optical illusions, no color blindness and no mind tricks.

Further, we only experience a simplified version of all this mediated sensory information. The reason? Seeing the world as it is requires too much processing power — so our brain breaks it into heuristics (or simplified but still useful representations). Our mind is constantly looking for patterns in our world and will match them with our perception.

From this we can conclude the following:

Our perception of reality is already different from reality itself. What we call reality is our brains’ attempt to process the incoming flood of sensory data.

If our perception of reality is dependent on a simplified flow of information, it doesn’t matter what the source of this information is — whether it’s the physical world or a computer simulation feeding us the same information. But is it really possible to create such a powerful simulation?

Let’s see by taking a look at the universe from a physical point of view.

The Basic Laws of the Universe in a Nutshell

From a physical point of view, four basic forces underlie everything: the strong force, electromagnetic force, weak force and gravitational force. These forces govern every interaction of every particle in the known universe. Their combination and equilibrium make up all there is.

Calculating these forces and simulating simple interactions is fairly easy, and we are already doing it — at least to some extent. It gets complicated once you add more and more particles interacting with each other — this, however, is just a question of computational power and not feasibility.

Right now, we lack the computational power to simulate the whole universe. Physicists would even argue that simulating the universe in a computer is impossible — not because of the complexity, but because a computer that simulates the universe would be bigger than the universe itself. Why? You would need to simulate every particle and would thus need multiple bits and bytes to store the position, spin and type of each particle and then do the calculations with those.
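
A quick back-of-envelope run makes the point; the numbers below are rough, commonly cited estimates, not anything from the article:

```python
# Even a stingy encoding of every particle needs more bits than there are
# particles available to build the computer out of.
particles = 10**80        # rough estimate for the observable universe
bits_per_particle = 100   # position, spin, type... a deliberate low-ball
print(f"{particles * bits_per_particle:.0e} bits for one snapshot")  # 1e+82
```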

You don’t need a physics PhD to recognize the impossibility of this endeavor. However, there is a flaw in this type of thinking that results from the mathematical mindset most physicists employ.

There is a big difference between simulating the whole universe and creating the virtual feeling of living in a whole universe.

Welcome the heuristics — again. Many computational scenarios would be impossible to solve if our human mind could not easily be tricked: from real-time computing to moving pictures and video streams (which include quite heavy audio/video delays) to ping delays and many other things. These tricks make us feel as if everything is continuous and normal when there is quite a lot of trickery involved.

The basic pattern is always the same: Reduce the details to a level with the best compromise between quality and complexity where our mind won’t notice the difference.

There are many tricks we can use to reduce the computational power needed to simulate a universe to a degree we can handle. The most obvious: don’t render anything no one is looking at. If you feel a slight tingling sensation in your body, this might be because you are familiar with Heisenberg’s uncertainty principle and the observer effect. Modern physics tells us the state of the smallest particles is dependent on whether they are being observed.

Next trick you could use: make the universe seem vast and limitless even though it isn’t. This one is actually used quite a lot in video games. By reducing the detail on faraway objects, you can save huge amounts of computational power and generate objects only when they are discovered. If this sounds hard to grasp, take a look at the game No Man’s Sky — a video game in which a whole virtual universe is procedurally generated as you discover it.

Last but not least: Add basic physical principles that make it amazingly hard or impossible to reach any other planet and keep the simulated beings stranded in their own world (speed of light and exponentially expanding universe — cough, cough).
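
Here is a minimal sketch of the first two tricks combined (my own illustration, not how No Man’s Sky actually works): derive each star system on demand from a deterministic hash of a seed and its coordinates, so nothing is computed or stored until someone looks at it.

```python
import hashlib

SEED = b"universe-42"

def star_system(x, y, z):
    # Properties come from hashing the seed + coordinates: nothing is stored.
    digest = hashlib.sha256(SEED + f"{x},{y},{z}".encode()).digest()
    return {
        "num_planets": digest[0] % 9,
        "star_temp_K": 2500 + int.from_bytes(digest[1:3], "big") % 30000,
    }

# Same coordinates, same system, every time: persistence without storage.
print(star_system(10, -4, 7))
print(star_system(10, -4, 7))  # identical: the world is recomputed, not saved
```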

If you combine these “cheats” with some mathematical trickery like reusable patterns and basic fractal geometry, you end up with a fairly good heuristic-based simulation of our universe — a universe that seems almost endless and infinite but is little more than a reality hack. This, however, still does not explain why Musk (and others) say there’s a high probability we are part of a virtual universe.

Let’s have a look.

The Simulation Argument and Mathematics

The simulation argument is a logical deduction proposed by Oxford University philosopher Nick Bostrom. It is based on some prerequisites that, depending on your view of each, can lead to the conclusion that our universe is most likely simulated. This is straightforward:

1. It is possible to simulate a universe (we covered this point above).

2. Every civilization either goes extinct (the pessimistic view) before it is technologically able to simulate a universe; loses interest in the development of simulation technology; or continues to advance and eventually reaches the technological level that is capable of simulating a universe and will do it. It’s just a matter of time. (Would we do it? Of course we would…)

3. Once achieved, this society will create many different simulations resulting in uncountable numbers of simulations. (Everyone wants to have a universe of their own.)

4. Once a simulation reaches a certain level, it too will create simulations of its own (and so forth).

If you do the math, you will soon get to the point where you have to recognize the probability of living in a real world is very slim because it is simply dwarfed by the number of existing simulations.
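
Here is a toy version of that math, with arbitrary assumed numbers: one base reality, where every mature world runs ten simulations, nested five levels deep.

```python
sims, depth = 10, 5
worlds = sum(sims**level for level in range(depth + 1))  # 1 + 10 + 100 + ...
print(worlds)       # 111111 worlds in total, only one of them "base"
print(1 / worlds)   # ~0.0009% chance ours is the original
```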

From this point of view, it is more likely that our world is 20 levels deep in a vicious simulation cycle than it being the original world.

The first time I heard this argument I got scared because the thought of living in a virtual universe is kinda…scary. However, here is the good thing: It doesn’t matter, and I’ll tell you why.

“Real” Is Just a Word and Information Is the Currency

We already covered how our perception of reality is very different from reality itself. Let’s assume for a minute, that our universe is a computer simulation. This assumption calls for another logical deduction chain:

1. If the universe is simulated, it is basically a combination of bits and bytes (or qubits or whatever…) — essentially information.

2. If the universe is information or data, then so are you. Every one of us is.

3. If we are all information, then our bodies are simply a representation of this information — like an avatar. The best thing about information is that it is not bound to a certain object. You can copy, transform, and change it any way you like (all you need are the proper coding techniques).

4. Any society that is capable of simulating a virtual world is also capable of giving your “personal” information a new avatar (because this requires less knowledge than simulating a universe).

Altogether, this means you are basically information, and the information that defines you is not bound to a certain object like your body. Philosophy and theology have long debated the concept of duality between our body and soul (mind, uniqueness — whatever you name it). So, the concept should sound familiar to you — this is just a more rational explanation for the phenomenon.

Let’s conclude.

Reality is information, and so are we. A simulation is part of the reality that simulates it — and everything we further simulate is reality from the perspective of those being simulated.

Reality is, therefore, what we experience: From a physical point of view, there is no objectivity in the quantum space — only a very subjective perspective on things. There are even some widely accepted theories claiming that every object we see could be the projection of information at the other side of the universe — or any other universe.

So, in essence: Everything is “real” if you experience it. And a simulated universe is as real as the universe that simulates it because reality is defined by the information it represents — no matter where it’s physically stored.


Image credit: PeteLinforth/Pixabay; NASA, ESA, J. Dalcanton, B.F. Williams, and L.C. Johnson (University of Washington), the PHAT team, and R. Gendler; Dinkum/Wikimedia Commons

Be on the Winning Side of Disruption: SU Global Summit San Francisco

There’s never been a better opportunity to see the future first-hand. Join the most innovative minds in business and technology, along with Singularity University faculty and alumni at the first-ever SU Global Summit, August 28-30, in San Francisco.

The future is incredibly hard to predict, but not for the reasons we normally think. The truth is, not only are new technologies advancing quickly, but how they’re converging and influencing one another kicks the pace up another gear. The result? The future is approaching faster than we can imagine. This concept, and the opportunity to leverage accelerating technologies to solve real human challenges, will be the central themes explored over three days this August.

The Singularity University Global Summit, happening August 28-30 in San Francisco, is bringing the brightest minds together for a three-day conference to begin tackling the world’s biggest challenges and give participants a look at the future of technology and business. This will be the definitive place to meet innovators and understand what business, technology and government will look like in the next 10 years.


With world-renowned speakers delivering their unique perspective on topics like artificial intelligence, robotics, energy, nanotechnology and 3D printing, data and machine learning, finance and economics, networks and computing, medicine and neuroscience, bioengineering, bioinformatics, space, and security, as well as the Global Grand Challenge Awards highlighting the most impactful startups—SU Global Summit will cover the most critically important knowledge as we dive headfirst toward tomorrow.

Ray Kurzweil, Singularity University co-founder and a director of engineering at Google, predicts that “by 2029, computers will have emotional intelligence and be convincing as people.” This may seem hard to believe, but it’s just one surprising outcome representing a staggering amount of opportunity in the coming years. The SU Global Summit will take a deep dive into seemingly far-off opportunities, reveal why they’re not as far away as they seem, and connect those who understand the importance of these converging technologies.

Will you be successful in the world of tomorrow? Will you make an impact? Apply to attend SU Global Summit and join us in San Francisco this August to make sure that you do.


Image credit: Tim Benedict Pou/FlickrCC

The Tools of Change Are Here: What Will You Do With Them?

Digital connectivity is a defining characteristic of the 21st century. And though it’s an often criticized aspect of modern society, it’s also making us more aware of our fellow human beings. News has never spread so rapidly across the globe or so widely illuminated global problems that desperately need more attention.

“It’s painful when we see an incident halfway across the world that’s grotesque,” said Ray Kurzweil this week at the Global Solutions Program (GSP) opening ceremony. “But it’s fundamentally a good thing because it harnesses our empathy to solve these problems…This is one world.”

Ray Kurzweil

Now in its 8th year, GSP is hosting 79 participants from 40 different countries, including winners of 16 global impact competitions, and women make up 49% of the class.

Their wild 10-week journey to form companies that use technology to address a global grand challenge began this week. The group includes an entrepreneur working with a bioprinted liver on a chip for drug discovery and a startup founder whose company turns waste from water purification into fertilizer.

At the opening ceremony, the overarching call to action for the new GSP class was very clear. We’re living in the most connected and democratic time in history. We have more abundant access to information and technology than ever before. Now is the time to do something meaningful with it—we’re all in this world together.

It’s a sentiment inspired in no small part by space exploration, an endeavor in which courageous pioneers using cutting-edge technologies look down on the whole Earth.

Keynote speaker Dr. Dava Newman, deputy administrator of NASA.

Keynote speaker Dr. Dava Newman, deputy administrator of NASA and former aeronautics and astronautics professor at MIT, spoke about progress in space exploration and NASA’s goals for the future. Newman showed off detailed (and now iconic) snapshots from the New Horizons flyby of Pluto, noted that the Juno mission is scheduled to arrive at Jupiter in July, and looked ahead to the James Webb Space Telescope, which will look further out into the universe than even Hubble.

Of course, interplanetary travel has long been the domain of satellites and rovers, but Newman also outlined plans for human exploration beyond the Earth and Moon. She said NASA plans to land a new rover on Mars by 2020 and put boots on Mars by 2030.

NASA image: Journey to Mars

Space exploration is about many things, but perspective is maybe one of its greatest gifts. We now know our sun is one of billions of stars in the galaxy, and our galaxy is one of billions in the universe. The Earth is but a blue mote of dust captured in the famous 1990 Voyager 1 image that inspired Carl Sagan to write:

“There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another, and to preserve and cherish the pale blue dot, the only home we’ve ever known.”

Throughout Newman’s talk, she repeated the phrase, “We’re all in this together.” And Kurzweil agreed. More and more of us can access the tools of change, and with them society is ready to step beyond the world’s barriers and boundaries.

This year’s GSP class will learn all about today’s most powerful tools—like artificial intelligence, robotics, sensors, and biotechnology—and dream up technology-inspired entrepreneurial projects they hope can positively impact a billion people.

It’s a daunting task to be sure, but the future belongs to the bold.

As Singularity University CEO Rob Nail said, “We all have the ability to affect humanity. Why would we do anything less?”

Rob Nail, CEO of Singularity University

Image credit: Singularity University, NASA

What You Need to Know About the Future of Money

If you don’t hate banks, they probably bore your socks off. That’s okay. The day-to-day machinations of the financial machine are either mundane or massively complicated. But what happens in finance should matter to everyone. Like a lot. If you were on the planet during the 2008 financial panic, you know this.

It’s pretty simple. How we make money, spend money, save money, send money, and grow money is central to how we do pretty much everything else. The future of all these things is being hashed out right now, and technology is a huge driving force. This fast-growing, money-focused corner of the tech world is called “fintech,” which means…where finance meets technology. Easy enough.

But it’s a little more than that.

Finance has been computerized for decades. An ungodly number of daily trades are executed by algorithm. The speed of the market is superhuman—on the order of microseconds—and finance’s population of wonks is probably second only to tech.

Most of that, however, is about what’s happening inside finance’s hallowed halls.

Fintech is more about how the ethos of startups, apps, the internet, and all things digital has begun to infiltrate Wall Street, taking aim at long-standing business models. It’s the promise that small teams coding software can be corporate killers.

This is a world where trust is redefined as a decentralized network of computers so massive no one can tamper with it; where your local bank branch isn’t down the street, it’s in your pocket; where pennies from the crowd launch a thousand startups; where money moves frictionlessly across the globe; where your financial advisor is a piece of software, and the market goes, “Look ma, no hands.”

Is it hype? Sure. But hype is often built on a foundation of amazing things.

At Singularity University and CNBC’s Exponential Finance conference last week, we got our annual state of the union from that fascinating point where finance and tech cross paths.

The End of the Middle Man

If there’s been a theme to the fintech hype machine in recent years, it’s been the end of the financial middleman. Bitcoin true believers, blockchain converts, crowdfunders, crowdlenders, crowdborrowers—peer-to-peer reigns supreme.

And it should. If we’ve learned anything from the digitization of other industries, it’s that centralized businesses tend to suffer when the internet intervenes. So, is the financial system next in line? There are certainly those who think so.

Bitcoin was just the beginning. Bitcoin’s tech foundation—known as blockchain—has been almost as hot as the notorious digital currency in the last few years.

Many have proclaimed that it’s not about Bitcoin—blockchain is where the real potential lies. But the two aren’t as separate as we might like to think.

According to Catheryne Nicholson, CEO and cofounder of BlockCypher, no distributed ledger like blockchain would work without the incentive to validate transactions—and digital currency (in the form, for example, of Bitcoin mining) is that incentive.

“I think what really makes blockchain disruptive is you have an entire network that’s validating. They don’t do it for free,” Nicholson said. “They’re being incented to do it, and their incentive is a bitcoin or an ether. If you remove that reward, what is the incentive for that entire network to secure it?”
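
A stripped-down proof-of-work sketch (illustrative only; real Bitcoin mining differs in scale and detail) shows the mechanic Nicholson is describing: validators burn compute searching for a valid hash, and the block reward is what pays for that security.

```python
import hashlib

DIFFICULTY = "0000"   # more leading zeros = more work per block
BLOCK_REWARD = 12.5   # hypothetical payout to whoever finds the nonce

def mine(block_data):
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(DIFFICULTY):  # proof that work was done
            return nonce, digest
        nonce += 1

nonce, digest = mine("alice->bob:5;bob->carol:2")
print(f"nonce={nonce} hash={digest[:16]}... reward={BLOCK_REWARD}")
```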

At the conference, Nicholson announced that her company, BlockCypher, a sort of Amazon Web Services for blockchain, recently began hosting Ethereum. Ethereum provides a digital currency, but it’s also focused on smart contracts.

“[Blockchain] allows you [to] do all sorts of things that typically required a middleman, required multiple parties of trust in order to execute. What Ethereum does is enable you to do that in a ‘smart contract’ way, but do it on blockchain so you don’t necessarily have to have a person or an entity be that trusted party.”
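
To see what “rules instead of a trusted party” means, here is a conceptual escrow written as plain Python rather than real Solidity or Ethereum code; on a blockchain, logic like this would run and hold funds without any human middleman:

```python
class Escrow:
    """Conceptual smart-contract sketch (not real Ethereum/Solidity code)."""
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False

    def deposit(self, who, amount):
        # The contract itself locks the funds; no third party holds them.
        if who == self.buyer and amount == self.amount:
            self.funded = True

    def confirm_delivery(self, who):
        # Only the buyer's confirmation releases payment to the seller.
        if who == self.buyer and self.funded:
            return ("pay", self.seller, self.amount)
        return ("hold", None, 0)

deal = Escrow("alice", "bob", 100)
deal.deposit("alice", 100)
print(deal.confirm_delivery("alice"))  # ('pay', 'bob', 100)
```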

Blockchain and digital currencies may mean no more fees for ferrying cash between pockets, a vastly simplified financial back office—which today is consumed by clearing and validating transactions—and even companies that fund and then run themselves.

Middlemen. Who needs ‘em?

Further Conference Reading:

Sleeping With the Enemy?

There’s no doubt the next wave of tech in finance has disruptive potential. But if you were expecting the end of big banks—you’ll have to wait a while longer.

David and Goliath have decided on détente for the time being. A recurring theme throughout the conference was partnership.

Jessie McWaters, a World Economic Forum (WEF) project lead, described how in 2014 the WEF brought together a group of 50 financial services leaders and asked what they thought about fintech. The response was largely dismissive—we have the scale, trust, and experience navigating regulatory complexity.

In 2015, the tone was different. Banks were worried, jittery. Then this year, McWaters said, the discussion had evolved again.

“There was almost now a little bit of swagger. People felt like they had figured out the solution,” McWaters said. “It was about working together, and there was a view that this wasn’t just a threat to financial services, though it was that, but there was more to it. There was the opportunity…to fix financial services.”

McWaters said fintech isn’t “random acts of violence.” Many firms are focused on friction points where customers get frustrated, while elsewhere they’re deciding to “piggyback on legacy infrastructure.” McWaters said Apple Pay didn’t disrupt credit cards, but instead focused on customer experience, its area of expertise.

Big companies and scrappy startups are either collaborating, becoming each other’s customers, or acquiring and being acquired.

In a panel, the founders of two startups talked about their experience being gobbled up by larger companies. Meanwhile, Catherine Bessant, COO and CTO of Bank of America, said big firms like hers should partner, purchase, and when appropriate, compete with fintech—but above all stay engaged.

“What I do know about the next 20 years is that we are monomaniacally, freakishly focused on making sure that we leverage technology so that whatever you want from us in 20 years is what we’re capable [of]…and effective at doing.”

Eyes in the Back of Your Head

Much of this is about technology that’s mainly focused on financial operations—from customer service to the back office. But larger forces are at play in the wider economy that will ultimately affect how business in finance is done.

Brad Templeton, director of the EFF and founder of ClariNet, noted how the adoption of a seemingly unrelated technology, like self-driving cars, could have a major impact on insurers.

“Even the financial industry, the auto loan industry will change when people go to fleets instead of retail auto loans,” Templeton said. “Insurers are going to see casualties drop to one third of what they are [now] and [be] mostly self-insured by the companies building the self-driving cars.”

Along these lines: How might earlier detection of disease affect insurers? How will longer lifespans (and healthspans) change financial advice and retirement planning? How might solar energy affect futures and commodities markets?

Templeton also discussed quantum computing. Much of our financial security online rests on the idea that it’s very hard to factor large numbers—this is how today’s widely used public-key encryption works. But a massively powerful quantum computer (which isn’t here just yet) could make short work of such security measures.
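The asymmetry Templeton is pointing at fits in a few lines of code: multiplying two primes is one fast operation, while recovering them from the product by brute force takes time that explodes with the size of the numbers. The toy primes below stand in for the hundreds-of-digits numbers real public-key encryption uses, and it’s exactly this hard direction that Shor’s algorithm on a large quantum computer would make easy:

```python
def smallest_factor(n):
    """Brute-force trial division; utterly hopeless at cryptographic sizes."""
    if n % 2 == 0:
        return 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n  # n itself is prime

p, q = 1_000_003, 1_000_033   # small primes, for illustration only
n = p * q                     # encrypting direction: one multiplication
print(smallest_factor(n))     # attacking direction: ~500,000 divisions
```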

“You look out on Wall Street and if you see the bankers and the programmers running with their hands up in the air screaming—someone has done this. That’s your clue,” Templeton said.

Further Conference Reading:

What’s Your Plan for 2025?

Though fintech and bigfin are besties for now, that’s no excuse to get complacent. Technology moves too fast for that, and only the future-focused will survive.

Templeton said, “If you tell me, ‘Here is my plan for 2015’, I’ll say, ‘Very interesting plan, but I only can tell you one thing about it—it’s wrong.’ You have to make your plan for 2025.”

Ray Kurzweil began looking at the evolution of technology so he could better time his inventions. When he began noticing exponential trends in computing, he realized he had to plan for future events happening sooner than expected.

And that’s perhaps the best summation of the conference’s real central theme: When making your plan for 2025, don’t forget to take into account the exponential nature of these technologies—go beyond the linear, and prepare to be surprised.

In an interview with CNBC’s Bob Pisani, conducted via a Beam telepresence robot, Kurzweil said the key trends outlined in his book The Singularity Is Near are right on track.

“We just updated [our computational charts] to 2014,” Kurzweil said. “I had this chart for the first time in 1981 (through 1980). I projected it through 2050. So, it’s now 35 years later, and [it’s] exactly where it should be.”

Further Conference Reading:

Image credit: Ink Drop/Shutterstock.com

Ray Kurzweil’s Four Big Insights for Predicting the Future

Self-driving cars, virtual reality games, bioprinting human organs, human gene editing, AI personalities, 3D printing in space, three billion people connected to the Internet….

These incredible technological feats are all part of our world today. And while they are not evenly distributed, they are rapidly spreading and evolving — and in the process radically changing nearly every aspect of modern life. How we eat, work, play, communicate, and travel are deeply affected by the development of new technology.

But what is the underlying engine that drives technological progress? Does technological change progress at a steady rate? Can we predict what’s coming in 5 or 10 years?

To answer some of these questions, our team decided to dig into Ray Kurzweil’s 2005 book The Singularity Is Near, in which Kurzweil describes the exponential growth of technologies like artificial intelligence, genetics, computers, nanotechnology and robotics.

Kurzweil, while not right all the time, has one of the best track records for predicting technological breakthroughs. So, he must be onto something, right?

Here are the big insights from the book we think you should know about…

Technology Feels Like It’s Accelerating — Because It Actually Is

In this piece, we explore Kurzweil’s big idea that every generation of technology stands on the shoulders of the last generation — that is, we use our best tools to build even better ones — and the rate of progress continues to speed up from version to version.

Kurzweil calls this process of technological evolution the “law of accelerating returns.”

Kurzweil outlines many examples of the law of accelerating returns in action. Here is a recent and particularly powerful one: In the last decade or so, as genome sequencing technology has gotten better and faster, the cost of sequencing a human genome has fallen from hundreds of millions of dollars to roughly $1,000.

Previously, genome sequencing technology was only accessible to governments and corporations — but it is now accessible to the average consumer.

Credit: Alison Berman

We’ve seen this type of trend over and over—where technology gets faster, cheaper and more accessible—but nowhere is it more apparent than in Moore’s Law, which describes the decades-long exponential rise of computing. The steadiness of Moore’s Law is what made the smartphone in your pocket possible. But is there an end in sight for Moore’s Law, and if so, will it mark the end of accelerating progress in computing (and other related tech)?

Will the End of Moore’s Law Halt Computing’s Exponential Rise?

“In brief, Moore’s Law predicts that computing chips will shrink by half in size and cost every 18 to 24 months. For the past 50 years it has been astoundingly correct.”

–Kevin Kelly, What Technology Wants

According to Kurzweil, Moore’s Law (describing the exponential growth of integrated circuits) is just one example of the law of accelerating returns, but it is perhaps the most powerful. An increasing number of related technologies across industries are driven by computing speed and power — and therefore also move at the pace of Moore’s Law.

However, after decades of going strong, it looks like Moore’s Law might be running out of steam. But does that mean it’s the end of exponential progress in computers?

Kurzweil is confident the answer is no.

There have been five distinct computing technologies already: electromechanical machines, relays, vacuum tubes, transistors, and finally the integrated circuits described by Moore’s Law. Combined, their progress shows a smooth exponential curve—and we’re already working on the technology to take the next step.

Read on here to dig deeper into Moore’s Law and beyond.

How to Think Exponentially and Better Predict the Future

So, now we know that the law of accelerating returns is driving technological progress and that this acceleration is likely to continue  — what can we do about it? Look ahead, of course.

Predicting the future is a messy business and most people get it wrong. But there seems to be an innate desire to foresee what’s coming and be better prepared for it.

How do Kurzweil’s exponential trends translate into how we choose to live our daily lives and think about the world around us?

Credit: Alison Berman

Much of the time, we tend to think that tomorrow will be like yesterday (the linear view) and fail to account for the exponential growth factor. New technology growing exponentially tends to progress deceptively slowly at first, but then its progress shoots upward and very quickly becomes disruptive.
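The gap between the two views is easy to quantify. A quick sketch, with purely illustrative units, comparing a fixed-increment projection against a doubling trend:

```python
for period in range(11):
    linear = 1 + period          # "tomorrow like yesterday": add a fixed step
    exponential = 2 ** period    # double each period
    print(f"period {period:2d}: linear {linear:3d}, exponential {exponential:5d}")

# In period 2 the two look almost identical (3 vs 4): the deceptive phase.
# By period 10 the exponential is nearly 100x ahead: disruption.
```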

Read on here to learn what it means to ‘think exponentially’ and better account for the deceptive and disruptive changes we see in technological growth.

Ray Kurzweil Predicts Three Technologies Will Define Our Future

It’s clear that our brains tend to anticipate the future linearly instead of exponentially, and now we also know that the law of accelerating returns will bring more powerful technologies sooner than we imagine. So, what should we expect to see in the next couple of decades?

In this piece we explore three technological areas Kurzweil believes are poised to change our world the most this century: genetics, nanotechnology and robotics/AI.

The genetics revolution will allow us to reprogram our own biology. The nanotechnology revolution will allow us to manipulate matter at the molecular and atomic scale. The robotics revolution will allow us to create greater-than-human non-biological intelligence.

There you have it: you can now confidently discuss The Singularity Is Near at your next dinner party, and you’re well on your way to predicting the future just like Ray Kurzweil.


Image credit: Rob Bulmahn/FlickrCC

Machine Learning’s Next Trick Will Transform How Research Is Done

Though research is a slow-moving and rigid process, one study shows that the rate of scientific output has exploded in the last 50 years. According to the paper, humanity’s scientific output now doubles every nine years. Considering the rigors of science, that’s pretty fast. And it’s just the average rate. In specific areas like healthcare, the doubling is even faster: as often as every 3 years currently, with an expected acceleration to every 73 days by the early 2020s.
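Doubling times are easier to feel as annual multipliers, a one-line conversion (figures taken from the claims above):

```python
def annual_multiplier(doubling_days):
    """Growth factor per year implied by a given doubling time."""
    return 2 ** (365 / doubling_days)

print(round(annual_multiplier(9 * 365), 2))  # 9-year doubling: ~1.08x per year
print(round(annual_multiplier(3 * 365), 2))  # 3-year doubling: ~1.26x per year
print(round(annual_multiplier(73)))          # 73-day doubling: 32x per year
```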

For overwhelmed researchers navigating the growing stack of scientific literature, the value isn’t in having so much new information; it’s in finding relevant insights when they need them.

According to Jacobo Elosua, a co-founder of Iris AI — a Singularity University portfolio company — the research process is very often tedious and unfruitful.

“Researchers are genuinely struggling to find the scientific papers, the clinical data, and other information required to do their job. And when they do find it — it’s most often after a painful and time consuming process,” he told Singularity Hub.

Elosua and the team at Iris hope recent advances in machine learning AI might be one way through the noise. Machine learning is powerful because it allows programmers to assign a task to an algorithm — in this case, combing through scientific literature — and then let the code teach itself to improve its model as it is fed more data over time.
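Iris’s internals aren’t public, but the basic mechanic described above (fit a statistical model to labeled text, then let its predictions sharpen as more examples arrive) can be sketched in a few lines of scikit-learn. The paper snippets and labels below are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented paper snippets with hand-assigned topic labels.
texts = [
    "microRNA expression profiles in early tumor detection",
    "gene regulation pathways and protein synthesis",
    "deep convolutional networks for image recognition",
    "reinforcement learning agents in simulated environments",
]
labels = ["biology", "biology", "machine learning", "machine learning"]

# TF-IDF turns text into feature vectors; logistic regression learns
# which terms signal which topic. Retraining on more data improves it.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Route a new abstract toward the right shelf of the literature.
print(model.predict(["gene expression and protein pathways in cancer"]))
# -> ['biology']
```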

Iris works by reading scientific papers and learning to determine what’s being discussed in the text. The goal is to augment the discovery process by leading researchers to relevant papers and new discoveries as they are published. By identifying emerging trends and concepts within the areas of science that may impact a researcher’s domain of interest, AIs can shoulder some of the burden of constantly scanning new literature.

According to Elosua, “Iris users will be able to drop in any scientific text with over 500 words as an input to the tool — say like an abstract of an interesting paper. Iris will then display a visual map enabling an intuitive navigation of the most relevant papers.” Elosua added that, “In terms of time saved we believe it will be more than ten times faster to use Iris.”

The promise of an accelerated research process is exciting, but hurdles remain. Though global trends in academia have shown a shift toward open access, many research papers are still locked away in closed databases. And Iris’s proof of concept scans only the scientific literature behind TED talks—a fairly broad set of areas. The team is currently developing more specialized ways to use the service.

Another Singularity University startup, Miroculus, is hoping its more targeted machine learning tool can help with its own research needs.

The team at Miroculus — in partnership with Microsoft — has built Loom, a tool that uses machine learning to search papers for relationships between specific microRNAs and various diseases and genes. Though Miroculus’ core business is developing a low-cost cancer diagnostic tool, the Loom project may prove valuable to research efforts in their space.

MicroRNA is a type of RNA found in the bloodstream that helps control which proteins the body builds and when it builds them. In a TED Talk, Miroculus CTO Jorge Soto explains that microRNAs help regulate gene expression. And since changes in gene expression are a major component of cancer, understanding how microRNAs vary depending on conditions in the body — and measuring these changes — may help us diagnose cancer far earlier than today’s standards allow.

In the talk, Soto describes how catching cancer early is the closest thing we have to a silver bullet cure against it. But there’s a problem. Soto says, “There is no compelling way to access much of the microRNA research today, other than to manually retrieve relevant papers and read them thoroughly.” This can take days or even weeks in some cases.

He hopes that by having a way to quickly track microRNA literature, his team will be able to understand the latest findings in the space. In an interview with Singularity Hub, Soto said, “With Loom, our objective is to provide a compelling overview of how microRNAs relate to specific diseases and genes.”

Loom is able to locate relevant papers that mention specific microRNAs, extract the relevant parts of the paper, and then score the relationship between the microRNA and the specific gene or disease being studied. According to Soto, Loom was trained on a manually created dataset curating over 10,000 mentions of microRNAs, and the tool becomes more accurate every day as more literature is published.

As AIs take on more responsibility in managing the discovery process, the science community may free up significant portions of the time it currently devotes to scanning for trends. One Canadian AI company, Meta, can already scan for emerging technology trends and predict those technologies’ future significance.

In parallel, as AIs learn to better navigate the subtleties of language, they may be better equipped to draw meaning from science literature. Earlier this month, for example, Google announced exciting progress in natural language understanding and open-sourced machine learning code in the area, which may further empower AI-assisted research tools.

Though science is moving fast — maybe too fast for our brains to handle — projects like Iris and Loom are out to show how AI can help today’s researchers keep up with today’s accelerating pace.


Interested in learning more about our economic future? Join leading financial experts at Singularity University and CNBC’s Exponential Finance conference June 7-8, 2016 in New York.

Image credit: Shutterstock.com

See the Future of Finance Unfold at Exponential Finance 2016

From payment processing to corporate banking, the financial industry is being turned upside down by exponential technologies such as artificial intelligence, digital currencies, robotics, nanotechnology, crowdfunding, and new computing systems. Startups are devising ways to solve consumer needs while titans struggle to turn their ships on a dime and react to young new players.

We may not see the downfall of the world’s largest institutions right away, but it’s safe to say that these technologies are rewriting the future of finance. Recognizing this reality and planning for it early will allow you to survive and thrive in an increasingly tumultuous global industry.

Singularity University and CNBC’s Exponential Finance Summit was created to bring the financial and tech industries together in a deliberate and meaningful way. Think Wall Street meets Silicon Valley. Now in 2016, Exponential Finance is the definitive place to learn, connect, collaborate and reinvent the financial industry on an annual basis.

Exponential Finance 2016 will be held June 7 and 8 in New York City, featuring world-renowned leaders sharing their insights into how exponential technologies are affecting the financial industry and what you can do to be a part of it. CNBC’s Bob Pisani and SU’s Salim Ismail will emcee, while speakers will include heavy hitters like Bank of America COO/CTO Catherine P. Bessant, GE Chief Economist Marco Annunziata, Abra Founder and CEO Bill Barhydt, GoldBean Founder and CEO Jane Barratt, and many others.

As Ismail wrote in his book Exponential Organizations, “Today, if you’re not disrupting yourself, someone else is; your fate is to be either the disrupter or the disrupted. There is no middle ground.”

Exponential Finance aims to give participants an interactive and collaborative experience and send them home with an understanding of what the future will look like and how to act on it now. Participants will have the opportunity to see demos from more than 40 groundbreaking technology companies and connect with business leaders from leading firms across the industry.

In short, missing Exponential Finance would mean falling behind.


Apply here to join Singularity University, CNBC, and a few hundred of the world’s most forward-thinking financial leaders at Exponential Finance this June.

Image Credit: Shutterstock.com

Stay Ahead of the Next Industrial Revolution With Exponential Manufacturing

Self-driving cars, delivery drones, 3D printing, robots, and artificial intelligence. All staples of news headlines, and all technologies that will change the way people buy, sell, make, interact, and live. New technologies are arriving at an exponentially increasing pace, and the global market is trying to keep up.

At the center of this change lie the companies that create the products of tomorrow.

Whether it’s a personalized 3D-printed car or large-scale fabrication in space, the opportunities for financial success and human progress are greater than ever. Looking to the future, manufacturing may begin to include never-before-seen approaches to making things using nanotechnology and even biology.

That’s where Singularity University’s Exponential Manufacturing summit comes in.

Hosted in Boston, Massachusetts May 10 and 11, Exponential Manufacturing is a meetup of 400 of the world’s most forward-thinking manufacturing leaders, investors and entrepreneurs. Speakers will dive into the topics of artificial intelligence, robotics and UAVs, synthetic biology, digital fabrication, nanotechnology, big data, and smart sensors and networks.

Alongside emcees Peter Diamandis and Salim Ismail, Deloitte’s John Hagel will discuss how to handle major shifts in industry, Neil Jacobstein will focus on R&D powered by AI and machine learning, and Jay Rogers and Danielle Applestone will share their learnings from the world of rapid prototyping. These prolific innovators will be joined by David Roberts (HaloDrop, 1QBit, and more), Marcus Shingles (XPRIZE), Deborah Wince-Smith (Council on Competitiveness), and many others.

Now, more than ever, there is a critical need for companies to take new risks and invest in education, simply to stay ahead of emerging technologies.

In his book Exponential Organizations Ismail writes, “In the future, the defining metric for organizations won’t be ROI (Return on Investment), but ROL (Return on Learning).” And Peter Diamandis says, “If the risk is fully aligned with your purpose and mission, then it’s worth considering.”

There’s little doubt we’re entering a new era of global business, and the manufacturing industry will help lead the charge. Learn how by exploring Exponential Manufacturing online, or apply now to join us in Boston this May. As a special thanks for being a Singularity Hub reader, use the code SUHUB2016 during registration to take $500 off the current pricing.

This is the first year Singularity University has hosted Exponential Manufacturing. Click here to learn more and register today.

Image credit: Shutterstock.com

Will the End of Moore’s Law Halt Computing’s Exponential Rise?

This is the first in a four-part series looking at the big ideas in Ray Kurzweil’s book The Singularity Is Near. ​Be sure to read the other articles:


“A common challenge to the ideas presented in this book is that these exponential trends must reach a limit, as exponential trends commonly do.” –Ray Kurzweil, The Singularity Is Near

Much of the future we envision today depends on the exponential progress of information technology, most popularly illustrated by Moore’s Law. Thanks to shrinking processors, computers have gone from plodding, room-sized monoliths to the quick devices in our pockets or on our wrists. Looking back, this accelerating progress is hard to miss—it’s been amazingly consistent for over five decades.

But how long will it continue?

This post will explore Moore’s Law, the five paradigms of computing (as described by Ray Kurzweil), and the reason many are convinced that exponential trends in computing will not end anytime soon.

What Is Moore’s Law?

“In brief, Moore’s Law predicts that computing chips will shrink by half in size and cost every 18 to 24 months. For the past 50 years it has been astoundingly correct.” –Kevin Kelly, What Technology Wants

Gordon Moore’s chart plotting the early progress of integrated circuits. (Image credit: Intel)

In 1965, Fairchild Semiconductor’s Gordon Moore (later cofounder of Intel) had been closely watching early integrated circuits. He realized that as components were getting smaller, the number that could be crammed on a chip was regularly rising and processing power along with it.

Based on just five data points dating back to 1959, Moore estimated the time it took to double the number of computing elements per chip was 12 months (a number he later revised to 24 months), and that this steady exponential trend would result in far more power for less cost.

Soon it became clear Moore was right, but amazingly, this doubling didn’t taper off in the mid-70s—chip manufacturing has largely kept the pace ever since. Today, affordable computer chips pack a billion or more transistors spaced nanometers apart. 
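That pace is easy to replay. Starting from Intel’s first microprocessor, the 4004 of 1971 with roughly 2,300 transistors, and assuming a fixed 24-month doubling yields projections remarkably close to real chips:

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project transistor counts under a fixed doubling period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1985, 2000, 2016):
    print(year, f"{transistors(year):,.0f}")
# 1985 projects to ~295,000 (the Intel 386 shipped with ~275,000);
# 2016 projects to ~14 billion, the scale of the largest chips that year.
```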


Moore’s Law has been solid as a rock for decades, but the core technology’s ascent won’t last forever. Many believe the trend is losing steam, and it’s unclear what comes next.

Experts, including Gordon Moore, have noted Moore’s Law is less a law and more a self-fulfilling prophecy, driven by businesses spending billions to match the expected exponential pace. Since 1991, the semiconductor industry has regularly produced a technology roadmap to coordinate their efforts and spot problems early.

In recent years, the chipmaking process has become increasingly complex and costly. After processor speeds leveled off in 2004 because chips were overheating, multiple-core processors took the baton. But now, as feature sizes approach near-atomic scales, quantum effects are expected to render chips too unreliable.

This year, for the first time, the semiconductor industry roadmap will no longer use Moore’s Law as a benchmark, focusing instead on other attributes, like efficiency and connectivity, demanded by smartphones, wearables, and beyond.

As the industry shifts focus, and Moore’s Law appears to be approaching a limit, is this the end of exponential progress in computing—or might it continue awhile longer?

Moore’s Law Is the Latest Example of a Larger Trend

“Moore’s Law is actually not the first paradigm in computational systems. You can see this if you plot the price-performance—measured by instructions per second per thousand constant dollars—of forty-nine famous computational systems and computers spanning the twentieth century.” –Ray Kurzweil, The Singularity Is Near

While exponential growth in recent decades has been in integrated circuits, a larger trend is at play, one identified by Ray Kurzweil in his book, The Singularity Is Near. Because the chief outcome of Moore’s Law is more powerful computers at lower cost, Kurzweil tracked computational speed per $1,000 over time.

This measure accounts for all the “levels of ‘cleverness’” baked into every chip—such as different industrial processes, materials, and designs—and allows us to compare other computing technologies from history. The result is surprising.

The exponential trend in computing began well before Moore noticed it in integrated circuits or the industry began collaborating on a roadmap. According to Kurzweil,  Moore’s Law is the fifth computing paradigm. The first four include computers using electromechanical, relay, vacuum tube, and discrete transistor computing elements.


There May Be ‘Moore’ to Come

“When Moore’s Law reaches the end of its S-curve, now expected before 2020, the exponential growth will continue with three-dimensional molecular computing, which will constitute the sixth paradigm.” –Ray Kurzweil, The Singularity Is Near

While the death of Moore’s Law has often been predicted, today’s integrated circuits do appear to be nearing physical limits that will be challenging to overcome, and many believe silicon chips will level off in the next decade. So, will exponential progress in computing end too? Not necessarily, according to Kurzweil.

The integrated circuits described by Moore’s Law, he says, are just the latest technology in a larger, longer exponential trend in computing—one he thinks will continue. Kurzweil suggests integrated circuits will be followed by a new 3D molecular computing paradigm (the sixth) whose technologies are now being developed. (We’ll explore candidates for potential successor technologies to Moore’s Law in future posts.)

Further, it should be noted that Kurzweil isn’t predicting exponential growth in computing will continue forever—it will inevitably hit a ceiling. Perhaps his most audacious idea is that the ceiling is much further away than we realize.

How Does This Affect Our Lives?

Computing is already a driving force in modern life, and its influence will only increase. Artificial intelligence, automation, robotics, virtual reality, unraveling the human genome—these are a few world-shaking advances computing enables.

If we’re better able to anticipate this powerful trend, we can plan for its promise and peril, and instead of being taken by surprise, we can make the most of the future.

Kevin Kelly puts it best in his book What Technology Wants:

“Imagine it is 1965. You’ve seen the curves Gordon Moore discovered. What if you believed the story they were trying to tell us…You would have needed no other prophecies, no other predictions, no other details to optimize the coming benefits. As a society, if we just believed that single trajectory of Moore’s, and none other, we would have educated differently, invested differently, prepared more wisely to grasp the amazing powers it would sprout.”


To learn more about the exponential pace of technology and Ray Kurzweil’s predictions, read his 2001 essay “The Law of Accelerating Returns” and his book, The Singularity Is Near.

Image Credit: Shutterstock, Intel (Gordon Moore’s 1965 integrated circuit chart), Ray Kurzweil and Kurzweil Technologies, Inc/Wikimedia Commons/CC BY


Atlas Robot Is More Capable (and Human) Than Ever in Latest Video


If you aren’t convinced the pace of robotics is accelerating, you need only check out the new video from robotics pioneer Boston Dynamics. The group’s latest humanoid robot tramps through the snow, stacks boxes, and even gets up after being pushed over.

We’ve watched the steady, sometimes surprising evolution of Boston Dynamics’ robots for years. The group’s four-legged robots Big Dog and Alpha Dog were early viral hits, and later on we were stunned (and maybe a little frightened) by a video of its humanoid robot walking a treadmill in fatigues and gas mask.

A later version of that two-legged robot, Atlas, was piloted by a number of teams at the DARPA Robotics Challenge (DRC) last year. The event had successes and failures, but may be most popularly remembered for a viral video of robots falling over.

The new version of Atlas already seems like a big improvement. Here are a few notes on what makes Boston Dynamics’ top humanoid robot tick:

More human than ever? The robot’s abilities and design make it look pretty human. It’s hard not to feel bad watching it get pushed around.

It’s lost a few pounds (and inches). The older version of Atlas was built like an NFL lineman—6’2” and 345 pounds. The new version is 5’9” and a slimmer 180 pounds.

It’s tetherless. Before the DRC, Atlas required a cord (or tether) to an external power source. Now, it roams free with a battery pack on its back.

It’s got great balance. Atlas is shown roaming rough and slippery terrain without falling—an amazing feat for a two-legged robot. Atlas is packed with sensors for balance and uses LIDAR and stereo sensors in its head to map and navigate its surroundings.

It’s quiet. Early tetherless Boston Dynamics bots powered by internal combustion engines were notoriously noisy. Atlas is electrically powered and therefore much quieter.

It’s speedy. Many robotics videos, including those from the DRC, are played at 10x or 20x their true speed. This latest video shows Atlas moving at normal speed.

When it falls, it gets back up. In the video, Atlas takes a hard fall and gets back up on its own—an important ability that was largely (but not totally) lacking at the DRC.

It’s an Alphabet-bot. Boston Dynamics is owned by Alphabet (formerly Google), one of eight robotics companies the tech giant acquired in 2013.

Image credit: Boston Dynamics/YouTube

An Inside Look at the SU Labs Startup Accelerator: The Entrepreneur’s Journey

Do you consider yourself an entrepreneur aspiring to launch a startup? Increasingly, you’re not alone. Developing technologies are opening entrepreneurship up to more people, while successful new startups are popping up around innovation hubs worldwide.

Even with more tools available than ever, the entrepreneur’s path remains full of challenges.

A startup accelerator is a relatively new phenomenon in the entrepreneurial community. By offering critical infrastructure, support, funding, and real-time connections, accelerators give young companies backing they wouldn’t otherwise have from the start.

This hands-on educational and professional training covers a lot of ground. The lessons packed into its curated courses might otherwise take years of real-world business operation to learn.

Last year, Singularity University Labs kicked off its first accelerator. The 10-week program welcomed early-stage startups using the latest technologies to make a positive impact on the world. Want a peek under the hood? The videos selected below offer an inside look alongside critical insights gained from the program.


Have an early-stage startup leveraging exponential technology? Applications for the Spring 2017 SU Labs Startup Accelerator are open until October 31st. Click here to apply.


The Entrepreneur’s Journey (Weeks 1 and 2)

“Some of the most productive growth in a successful company often comes not from progress—but the trials and tribulations that shape the organization from its earliest stages. Getting creative about the problems that you’re facing in each stage, to me, is the essence of being a great entrepreneur.” –Randy Haykin (Partner, Haykin Capital & Gratitude Network)


Financial Structuring (Weeks 3–5)

“I think we’re going to see an end of the category of the large corporation over the next decade or two, and so I think all of the advantages now [are] with start-ups, because you have access to accelerating technologies, the cost is low, and you can think boldly. You can take on risks that any big company can’t take on.” –Salim Ismail (Global Ambassador, Singularity University)


Inside the Mind of a VC Firm (Weeks 6–9)

“If you’re building something that the market genuinely needs, truly wants, and is desperate to have, a lot of the other problems fall by the wayside.” –Garrett Dunham (SU Labs, Accelerator EIR)


SU Labs Startup Accelerator Week 10 and Highlights

“The thing that excites me the most, and what’s driving the whole ethos here at Singularity University is the notion that today, the world’s biggest problems are the world’s biggest business opportunities. And what we teach is: Want to become a billionaire? Help a billion people. Looking to create a great company? Find an amazing problem and solve it.” –Peter Diamandis (Co-Founder, Singularity University)

The selected videos featured above offer a guided tour through the 10-week experience—but that isn’t all. Watch the rest of the series here.


The Founder’s Mission of SU Labs Startup Accelerator

The founders of the inaugural SU Labs Startup Accelerator share their company mission statements and personal motivations in their quest to solve the world’s greatest challenges.


Image Credit: Shutterstock.com

How to Never Forget a Name? In the Future, We’ll Just Google Our Brain


Have you ever walked into a room and forgotten why you were there? Or forgotten a person’s name in the middle of a conversation? Or stumbled while briefing your boss on a project because a crucial factoid escaped your mind?

Yeah, me too.

“Tip of the tongue” syndrome haunts us all — that feeling where you’re close to remembering something, but just can’t seem to get there. But what if, at that exact moment, an AI-powered “cognitive assistant” pitches in and delivers that missing piece of information straight into your ear?

That future may soon be here. In a patent published late last year, IBM described a sort of “automatic Google for the mind”: one that monitors your conversations and actions, understands your intentions and offers help only when you need it most.

The brainchild of computational neuroscientist Dr. James Kozloski, a master inventor at IBM Research, the cognitive digital assistant has lofty goals: by acting as an external memory search module, it hopes to help people with memory impairments regain the cognitive ability to navigate through life with minimal help.

For the rest of us? A searchable memory could give us the opportunity to make innovative connections, support brainstorming sessions and help us tackle more problems and think more deeply.

In a recent interview with the Atlantic, Kozloski laid out his plans for a human-AI mind-meld future.

Context Is Key

To understand how an AI cognitive assistant works, we first need to look at why human memory fails.

One reason is context. We excel at memorizing stories — the whats, whos, whens and wheres. When we remember an event, we fit its different components together like a puzzle; because of its linked nature, any component can act as a trigger, fishing out the entire memory from the depths of our minds.

Yet often we have trouble finding the trigger: the memory is there, but we can’t access it. Some current apps — to-do lists, scheduling apps, contact lists — already help us remember by acting as a trigger. But they can’t help someone who needs a reminder to update and use those apps in the first place.

IBM’s cognitive assistant hopes to bridge this gap.

Acting as a model of the user’s memory and behavior, it surveys our conversations, monitors our actions and — using Bayesian inference, a probabilistic algorithm often used in machine learning — predicts what we want, detects when we need help and offers support.

If you’re thinking “whoa, that’s creepy,” you’re not alone.

But according to Kozloski, we are already constantly monitored by our electronic devices. A Fitbit tracks your heart rate and movement, a sweat analyzer checks for dehydration and fatigue, augmented reality devices listen in on your conversations to offer real-time translations and suggest potential replies.

And the future of trackers is only getting more sophisticated and personal.

These data, combined with data from your environment, are then fed into the cognitive assistant. With enough data, the AI can compute a model of what a person is thinking or doing.

By analyzing word sequences and speech patterns, for example, it may detect whether you’re talking in a business setting or with a family member. It could similarly also monitor the words of your conversation partner and, using Bayesian inference, make an educated guess about who he or she is.
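As a toy version of that guessing step, Bayes’ rule turns “how likely is this word in each setting” into “how likely is each setting given the word I just heard.” Every probability below is invented for illustration:

```python
# Prior belief about the conversation's setting.
prior = {"work": 0.5, "family": 0.5}

# Invented likelihoods: P(word | setting).
likelihood = {
    "deadline": {"work": 0.30, "family": 0.02},
    "birthday": {"work": 0.03, "family": 0.25},
}

def update(belief, word):
    """One Bayesian update: posterior is proportional to likelihood x prior."""
    unnorm = {s: likelihood[word][s] * p for s, p in belief.items()}
    total = sum(unnorm.values())
    return {s: v / total for s, v in unnorm.items()}

belief = update(prior, "deadline")
print(belief)  # work ~ 0.94: the assistant now favors business-setting prompts
```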

If you suddenly experience a word block, the AI would make a note of where the conversation lapsed. Then, using data from your previous speech recordings and the Internet, it could offer up words that you most likely had in mind for that particular context.

The system would work even better if your partner also wears a cognitive assistant device, Kozloski suggests. In that case, the two devices could share data to build a better model of what information you’re trying to access at that very moment.

If all this sounds abstract, here’s an example.

Imagine you’re calling a friend you haven’t talked to recently. From the dial tone or your wrist movement, the cognitive assistant tracks the number you dialed. From there, it figures out who you’re calling and crosschecks its database for previous conversations, calendar entries, and photos related to that person.

It then gently reminds you — through an earpiece, speakers or email — that last time you talked, your friend had just begun a new job. By scanning your texts, it notes that several weeks ago she had booked a tattoo appointment — her first! — that was now coming up.

All of this information sits primed and ready — all before your friend picks up — just in case you want a friendly reminder.

How — and if — you want the data delivered is up to you, stresses Kozloski. That’s the thing: the cognitive assistant would only pitch in when you want it to.

“It would be very annoying if it were continually interrupting you,” he said.

The assistant could come with a preset threshold for jumping in. For example, it could detect pauses in your speech or actions, and through machine learning, understand the “tells” of when you’re confused. This data helps the assistant automatically adjust its threshold.

Direct human feedback would also contribute to the assistant’s accuracy, allowing a truly personalized experience.

By catering to the individual’s cadence or idiosyncrasies it could build a better model of what’s normal for the user, and what’s not, Kozloski said.

Personal Care

An obvious application for the assistant would be for people suffering from memory loss.

“In early stages of Alzheimer’s, a person can often perform everyday functions involving memory,” wrote Kozloski in the patent.

As memory loss becomes more severe, the person would begin to experience the devastating results of cognitive breakdown, he explains. They won’t be able to take their medication on time. They might miss important appointments. They may even lose the ability to interact with other people, to dress themselves or cook meals.

In these cases, a cognitive assistant would not only help the user by giving them friendly reminders, it could also monitor the person’s cognitive decline over time.

For example, are they forgetting something more frequently? Is it a memory or a motor task? Is the user straying from his or her usual routine?

The assistant could “perhaps prevent side effects of what are otherwise sort of innocuous episodes of forgetting,” said Kozloski.

Kozloski is careful to address privacy and security issues that could arise from uploading your digital self to the assistant.

“…The invention includes security mechanisms to preserve the privacy of the user or patient. For example, the system can be configured to only share data with certain individuals, or to only access an electronic living will of the patient in order to determine who should have access if the user is no longer capable of communicating this information,” he writes.

The system may adopt other security measures, but for now Kozloski is focusing on the device itself.

Even if Kozloski’s idea fails, it’s easy to imagine something similar taking its place. IBM’s cognitive assistant, combined with augmented reality, virtual reality, and brain-machine interfaces, suggests we are on the fast track toward a new way of life. It’s a human+machines future.

Image Credit: Shutterstock.com

How to Build a Starship — and Why We Should Start Thinking About It Now

With a growing number of Earth-like exoplanets discovered in recent years, it is becoming increasingly frustrating that we can’t visit them. After all, our knowledge of the planets in our own solar system would be pretty limited if it weren’t for the space probes we’d sent to explore them.

The problem is that even the nearest stars are a very long way away, and enormous engineering efforts will be required to reach them on timescales that are relevant to us. But with research in areas such as nuclear fusion and nanotechnology advancing rapidly, we may not be as far away from constructing small, fast interstellar space probes as we think.

Scientific and societal case

There’s a lot at stake. If we ever found evidence suggesting that life might exist on a planet orbiting a nearby star, we would most likely need to go there to get definitive proof and learn more about its underlying biochemistry and evolutionary history. This would require transporting sophisticated scientific instruments across interstellar space.

But there are other reasons, too, such as the cultural rewards we would get from the unprecedented expansion of human experience. And should it turn out that life is rare in our galaxy, it would offer opportunities for us humans to colonize other worlds. This would allow us to spread and diversify through the cosmos, greatly increasing the long-term survival chances of Homo sapiens and our evolutionary descendants.

Five spacecraft — Pioneers 10 and 11, Voyagers 1 and 2, and New Horizons — are currently leaving the solar system for interstellar space. However, they will cease to function many millennia before they approach another star, should they ever get to one at all.

Clearly, if starships are ever to become a practical reality, they will need to be based on far more energetic propulsion technologies than the chemical rockets and gravitational slingshots past giant planets that we use today.

To reach a nearby star on a timescale of decades rather than millennia, a spacecraft would have to travel at a significant fraction — ideally about 10% — of the speed of light (the Voyager probes are traveling at about 0.005%). Such speeds are certainly possible in principle — and we wouldn’t have to invent new physics such as “warp drives,” a hypothetical propulsion technology to travel faster than light, or “wormholes” in space, as portrayed in the movie Interstellar.
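The timescales quoted here are just distance divided by speed. A quick sketch using the standard figure of 4.24 light-years to Proxima Centauri, the nearest star, and ignoring acceleration and relativistic effects:

```python
DISTANCE_LY = 4.24  # light-years to Proxima Centauri, the nearest star

def cruise_years(fraction_of_c):
    """Travel time in years at a constant fraction of light speed."""
    return DISTANCE_LY / fraction_of_c

print(round(cruise_years(0.10)))      # ~42 years at 10% of light speed
print(round(cruise_years(0.00005)))   # ~85,000 years at a Voyager-like 0.005% of c
```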

Top rocket-design contenders

An artist’s conception of the proposed Project Orion spacecraft powered by nuclear propulsion. Image Credit: NASA

Over the years, scientists have worked out a number of propulsion designs that might be able to accelerate space vehicles to these velocities (I outline several in this journal article). While many of these designs would be difficult to construct today, as nanotechnology progresses and scientific payloads can be made ever smaller and lighter, the energies required to accelerate them to the required velocities will decrease.

The most thoroughly considered interstellar propulsion concept is the nuclear rocket, which would use the energy released by fusing together or splitting apart atomic nuclei for propulsion.

Spacecraft using “light-sails” pushed by lasers based in the solar system are also a possibility. However, for scientifically useful payloads this would probably require lasers concentrating more power than the current electrical generating capacity of the entire world. We would probably need to construct vast solar arrays in space to gather the necessary energy from the sun to power these lasers.

Another proposed design is an antimatter rocket. Every sub-atomic particle has an antimatter companion that is virtually identical to itself, but with the opposite charge. When a particle and its antiparticle meet, they annihilate each other while releasing a huge amount of energy that could be used for propulsion. However, we currently cannot produce and store enough antimatter for this to work.

Artist’s view of a ramjet. The enormous electromagnetic field is invisible. Image Credit: NASA

Interstellar ramjets, fusion rockets that use enormous electromagnetic fields as a ram scoop to collect and compress interstellar hydrogen for a fusion drive, are another possibility, but these would probably be harder still to construct.

The most well developed proposal for rapid interstellar travel is the nuclear-fusion rocket concept described in the Project Daedalus study, conducted by the British Interplanetary Society in the late 1970s. This rocket would be capable of accelerating a 450 tonne payload to about 12% of the speed of light (which would get to the nearest star in about 36 years). The concept is currently being revisited and updated by the ongoing Project Icarus study. Unlike Daedalus, Icarus will be designed to slow down at its destination, permitting scientific instruments to make detailed measurements of the target star and planets.

All current starship concepts are designed to be built in space. They would be too large and potentially dangerous to launch from Earth. What’s more, to get enough energy to propel them we would need to learn to collect and manage large amounts of sunlight or mine rare nuclear isotopes for nuclear fusion from other planets. This means that interstellar space travel is only likely to become practical once humanity has become a spacefaring species.

The road to the stars therefore begins here — by gradually building up our capabilities. We need to progressively move on from the International Space Station to building outposts and colonies on the Moon and Mars (as already envisaged in the Global Exploration Roadmap). We then need to begin mining asteroids for raw materials. Then, perhaps sometime in the middle of the 22nd century, we may be prepared for the great leap across interstellar space and reap the scientific and cultural rewards that will result.


Ian Crawford, Professor of Planetary Science and Astrobiology, Birkbeck, University of London

Disclosure Statement: Ian Crawford is a scientific consultant for Project Icarus.

This article was originally published on The Conversation. Read the original article.

Banner Image Credit: Shutterstock.com

‘Wait But Why’: Elon Musk’s Favorite Blog Makes Good Ideas Available to Everyone, With Cartoons


“The more 1000% authentic you are, the more you have monopoly over what you’re doing. Just be your weird self, and just be all of that. Either the world loves it, and there is something great there, or it doesn’t.”—Andrew Finn, cofounder of Wait But Why

If you remove the empty space inside the atoms of everyone in the world, humanity would fit inside a single M&M. Good to know! More importantly, if you bought a pizza using the collective net worth of every person in the world, the pizza would be big enough to cover the African nation of Niger. These are important facts, and just a few of the many shared with readers of Wait But Why, a popular cartoon-based science blog that’s proving thoughtful long-form content can stand out in the noisiness of today’s Internet.


To date, the site has earned over 150 million page views, with articles earning hundreds of thousands of shares—a ridiculous feat when you consider their most popular posts stretch well over 3,000 words. More incredibly, the site is resonating with a young audience (66% of its readers are under 35) and is debunking the myth that digital media must conform to a shrinking millennial attention span.

With a playfully casual writing style at the heart of their success, Wait But Why’s articles explain complex topics such as artificial intelligence, cosmology, and aerospace in a refreshingly irreverent way. Notable readers include Facebook cofounder Dustin Moskovitz, former Twitter CEO Evan Williams, actor Rainn Wilson, and most famously Elon Musk (of Elon Musk fame).

The popular article “The AI Revolution: The Road to Superintelligence” is cited as one of the most widely read pieces of writing on accelerating technological change since Ray Kurzweil’s bestselling book The Singularity Is Near. The article hasn’t gone unnoticed by Kurzweil himself: in an email conversation with Singularity Hub, his staff shared that he “enjoyed the article and thought the ideas were very clearly and entertainingly expressed.”

That article reached 2.5 million readers and eventually caught the attention of Elon Musk, who reached out to the writer—and what happened next reads like something right out of Charlie and the Chocolate Factory.

Almost three years ago, Wait But Why was cofounded by the two-man “best friend” team of Tim Urban and Andrew Finn. The pair had cofounded a successful test prep company a decade earlier, which now finances the website. During that company’s development, Urban, the cartoonist-author behind the blog, honed his minimalist drawing skills with a side project called Underneath the Turban. Finn shares, “Tim kept this blog that he worked on when he was procrastinating, and one day he started drawing these stick figures. They were hilarious. I joked that he was an 8 out of 10 writer and a 10 out of 10 at drawing sh***y stick figure people.”

Those stick figure people were eventually reintroduced on Wait But Why in July 2013, when their first post, “7 Ways to Be Insufferable on Facebook,” earned 500,000 clicks in the first month. When their next post, “Why Generation Y Yuppies Are Unhappy,” earned 6 million readers just three months later, Wait But Why had the audience momentum to explore more complex topics.

Finn describes how Wait But Why first transitioned into more science-minded topics. “Tim’s first attempt at writing about nerdy science stuff was the Fermi Paradox article. The audience response was way bigger than we expected, and after a while we started getting emails from readers saying, ‘Hey, you should look into this AI stuff.’ These were topics that Tim and I were naturally interested in, so eventually he researched and wrote the AI post.”

When Elon Musk came across the article, he tweeted that it was a “good primer on the exponential advancement of technology, particularly AI.” Musk—normally an interview hermit—then spent 10 hours over four months with Urban, resulting in an epic (by Internet writing standards) 90,000-word article. That article, written in four parts, is one of the most comprehensive summaries of Elon Musk’s ambitions and has been read by 3 million readers to date.


Industrial super-magnates and quirky cartoon blogs don’t usually hang out much. Describing on his podcast why Musk may have invested so much time, Urban explains:

“He never said, ‘Will you write this for me?’ Musk is obsessed with accuracy, and I think that one thing that is frustrating for him is messaging. He doesn’t do that many interviews, and when he does, he doesn’t have three hours to really explain what he wants. He didn’t want another 600-word article with two quotes and a tweet of his taken out of context. People take his words out of context all the time, so I think it clicked when he read my AI post. I think he said, ‘A post like that—unusually long, but really thorough and something in a voice that people will actually read—would be really helpful.’ I think that’s what he wanted, so that’s what got him to reach out.”

Musk was given no creative oversight on the project and was trusting of Urban’s commitment to accuracy. To go from procrastinating Internet cartoonist to trusted confidant of this generation’s closest thing to a superhero speaks to the creative talents displayed in Urban’s work.

The well-known science writer Steven Kotler describes his own writing process this way: “I’ve written nine books. All share one thing in common: at some point during their writing, I ended up on the ground, sobbing, shouting, and punching the floor.” Readers of Wait But Why get the sense that Urban is a floor puncher as well. It’s not uncommon for Urban to share visions of his creative process in a humorously anguished way:

“Let’s back up a couple hours to midnight, when I was staring at my computer at a half-completed post that I hated. The thing about bad Wait But Why posts is they don’t like to reveal themselves right away—they like to disguise themselves as good posts until I work on them for a long time, and then they’re suddenly like, ‘Oh, btw, I’m a very bad post.’”

“The reason Wait But Why is good is Tim will get the post to the point where an article is an A, and then he will put 80 more hours on it and pull three all-nighters in a row. He’s a great natural writer, but ultimately he also works really hard,” says Finn.

Next month Urban will be on the main stage at TED, which could elevate the brand to new heights with an older audience. Finn mentions that upcoming topics for the site could include Bitcoin and CRISPR/Cas9, a genetic engineering tool that’s turning the life sciences upside down.

The ultimate success of Wait But Why is that it’s solving what writer and philosopher Alain de Botton calls the “grand challenge of our age: that we need to take the good ideas and repackage them for everyone.” For casual readers and the philosophically inclined, Wait But Why is Disneyland for knowledge-hungry minds. It’s quirky and weird. Your mom may find it offensive, but beneath the humor is a thoughtfully researched collection of thoughts on science, technology, philosophy, personal development, and what a quadrillion gummy candies stacked together looks like. Happy reading.

Images courtesy of Wait But Why

The Internet Allowed Us to Learn Anything—VR Will Let Us Experience Everything


I have something to admit—to this day, I’m in awe of Wikipedia. Humanity has created a massive repository of our knowledge available for free to anyone with an Internet connection. All of our presidents and kings, theories and discoveries, just waiting to be read about and discovered. About once a month I’ll lose an afternoon to some obscure topic.

http://imgs.xkcd.com/comics/the_problem_with_wikipedia.png

It’s not just Wikipedia, though. The Internet has liberated information from the constraints of the physical world and essentially made the sharing of information free and unlimited for everyone. From communicating with friends on free Skype calls to taking university-level classes on Coursera and Udacity, our current access and connectivity dwarfs anything we’ve seen before.

Sometimes it’s hard to remember what an astounding leap we’ve made in our ability to share information. Reading books used to be the domain of only the privileged elite, while long-distance communication was either impossible or prohibitively expensive. Now both are cheap, convenient, and nearly instantaneous.

By democratizing the availability of information, the Internet has massively leveled the playing field, allowing anyone in the world to contribute to and learn from the global community.

The problem with the Internet is that while it is a fantastic tool for spreading information, sometimes information without experience can lose its impact. Massive open online courses have fantastic content, yet a very low percentage of students end up finishing them. It’s great to see my friends’ posts on Instagram and Snapchat, but nothing beats being together in person. And no matter how many times I’ve read about the Apollo 11 mission, I’ve never taken a step on the moon.

But that’s all going to change. Just as the Internet and smartphones have enabled the rapid and cheap sharing of information, virtual reality will be able to provide the same for experiences. That means that just as we can read, listen to, and watch videos of anything we want today, soon we’ll be able to experience stunning lifelike simulations in virtual reality.

And just as the democratization of information reshaped society, this is going to have a massive impact on the way we work, live, and play.

The Teleportation Device

By now, you’ve probably heard about the virtual reality resurgence led by Oculus. Virtual reality is an extremely hot field, with hundreds of millions of dollars of investment and basically every big name technology or media company getting in on the VR gold rush.

And if you’ve met VR true believers, you know the near fanatical interest they have in VR.

But why? What is it about these goofy ski goggles that has so thoroughly captured the hearts and minds of technologists across the globe?

It all boils down to one word: presence. Presence is the phenomenon that occurs when your brain is convinced, on a fundamental and subconscious level, that the VR simulation you are experiencing is real.

This doesn’t mean that you forget you’re in a simulation. But it does mean that when you ride a VR roller coaster, you feel it.

The Internet made the world smaller. VR is about to make it exhilarating.

Want to watch the Super Bowl from the fifty-yard line? Be on stage at your favorite concert? Or just visit and explore a faraway country? Well, that’s exactly what Mark Zuckerberg wants you to be able to do on the Oculus Rift.

Welcome to virtual reality in 2016. You can do all of this today, and it’s only going to get better. Lifelike, immersive, and available to anyone with a VR headset. Using 360-degree video and light field technology, we can now capture real-life events and distribute them to anyone, anywhere.

Soon you’ll be able to explore every city, watch every game, and tour the universe in VR. Content plus presence is an extremely potent combination.

But everything is more fun with a friend. Luckily, you’ll never have to be alone in VR.

The Magic Mirror

Part of the great sadness of the modern world is being able to text, call, and video chat with friends and family from all over the planet but never truly feel like you’re with them. Sometimes this ghost of a connection can paradoxically be worse than nothing, being just realistic enough to make you miss your loved ones without feeling the true warmth of their presence.

We now know that the very magic of virtual reality comes from presence. Multi-user virtual reality can enable a specific kind of phenomenon—social presence.

Just as presence in virtual reality occurs when your brain believes on a fundamental level that the scene you are experiencing is real, social presence can convince your brain to believe that the other people in the VR experience are really there with you.

That means that all of those experiences we’re excited about in VR, we’ll be able to experience with anyone we choose as if we’re all really there. An average Tuesday night in the VR future could include dropping into a professional conference with a coworker of yours, watching a football game with your father on the other side of the country, then hopping into a VR concert with your best friend from high school—all without leaving the house.

Now, nothing is going to replace spending quality time with the people around you, but technology at its best expands the opportunities for human creativity and communication to flourish—and VR is a massive step forward for this.

The Next Revolution

The rise of the Internet was one of the most profound developments of the past century. It famously led the futurist Ray Kurzweil to observe that “a kid in Africa has access to more information than the president of the United States did 15 years ago.” Well, pretty soon, that kid is going to have more opportunity for experiences too.

Pretty soon, we’ll be learning in virtual-reality classrooms, shopping at virtual-reality stores, and even working in virtual-reality offices.

We can only begin to speculate on the long-term consequences of this. How are cities affected when the VR office becomes the standard? How will the entertainment industry respond to live-streamed VR sports and concerts? Can we finally create a digital university that surpasses the quality of our oldest and grandest learning institutions?

Sometimes this all seems hard to fathom. Could we really see these massive changes coming in just a few short years?

When I consider the nearness of these changes, I keep returning to Wikipedia—one of the greatest creations of the Internet and of the democratization of information—and how implausible it would have sounded before it existed.

After that, none of this seems so unlikely after all.

Image Credit: Sony Project Morpheus/Marco Verch; xkcd

Want a Life of Purpose? Find a Career Dedicated to a Cause Beyond Yourself


Isaac Castro: Biomedical Technology, Entrepreneur
Graduate Studies Program 2015 Graduate
Bogotá, Colombia


True passion is tough to fake. Call it a calling or a career—when your life’s work is dedicated to a cause beyond yourself, your motivation is of a different breed.

It’s tangible in how these people talk. Even the basic question, “What do you do?” lights them up. The godfather of flow, Mihaly Csikszentmihalyi, knows this catalyzing power of intrinsic motivation well, writing in his famous book Flow:

“The autotelic experience, or flow, lifts the course of life to a different level. When experience is intrinsically rewarding life is justified in the present, instead of held hostage to a hypothetical future gain.”

When I asked Isaac Castro about his work while he was in the heat of the 2015 Graduate Studies Program (GSP) this summer, there was no question in my mind: Isaac’s work triggered an intense flow state in him. Not only that, but his work was becoming something much greater than the sum of its parts.

[Photo: Isaac Castro]

Isaac, now a full-time social entrepreneur, first began making waves at Siemens Healthcare, where he was steadfastly committed to spreading innovation throughout the organization.

As regional change manager for South America, Isaac led the execution of a new plan to change the work philosophy for several hundred employees in the region and was named Siemens’ Young Innovator of 2011. Shortly after, he was invited to work at the company’s global headquarters in Germany as an innovation project manager, contributing to a portfolio of innovation projects within the clinical product pipeline.

Just a year into this new role, MIT Technology Review caught word of Isaac’s work at Siemens on a new concept called Adaptable—a patient table designed to increase precision and effectiveness for radiation therapy—and listed Isaac as one of 2013’s top 10 young innovators under 35.

By 2014, Isaac joined the World Economic Forum’s Global Shapers Community, a global network of future leaders under 30. At Isaac’s home chapter in Colombia, the community is focusing on projects to foster peace within the country—and their efforts are bold.


Most recently, a group of 10 Global Shapers, 10 former members of FARC—Colombia’s largest guerrilla group—and 20 children who were victims of the conflict traveled into the mountains for a week to live together under one roof and lay a foundation for peace.

Though Isaac’s career at Siemens was on the rise, he felt a deep-seated desire to build and launch something of his own—so he left his stable job and flew to Silicon Valley to begin the 10-week GSP.

Now, post-GSP, Isaac and his team are working full speed on their startup, Emerge, which they launched during the program. The team has prototyped their first product—a device that augments digital communication by simulating and transmitting human touch from one person to another.

In the next five years, the team’s goal is to incorporate the sense of touch into daily communication. Their moonshot? Develop wireless brain-to-brain communication that connects individuals through all of their senses.

“Imagine how emotionally powerful it would be to send a virtual hug to a loved one, just by thinking about them,” Isaac said with a huge, ambitious smile.

Just this month, Isaac received an invitation to attend the World Economic Forum’s 2016 annual meeting in Davos. With the theme this year focused on exploring how exponential technologies will change the way we understand our world and how to prepare future societies for these changes, it’s no surprise he was asked to attend.

“My gratitude to Singularity University is already beyond words. GSP has boosted my journey to prepare the world for the accelerating change currently happening, and still to come.”


I would go so far as to say that Isaac’s own efforts are also on a fast track, as is the growing impact of his dedicated work.


Connect with me on Twitter @DigitAlison or @SingularityHub, and tell me what inspires your work.
You can follow the full series here or learn more about Singularity University’s (newly renamed) Global Solutions Program here.

Photography by: Alison Berman

Subscribe to Exponential Thinkers weekly newsletter to receive each new story and additional curated content. 

Singularity University Holiday Letter: 2015 Was Good, 2016 Will Be Great


Happy Holidays!

As we reflect on 2015 and look forward to the new year, I want to share updates on our progress here at Singularity University, our outlook for 2016, and our gratitude for your role in our community.

2015 was an important year for SU. We began building foundational blocks for scaling programs and impact, delivering stronger programs, and expanding and empowering our ecosystem of partners, alumni, and other inspired solvers. There were great successes in many key areas and, as you might expect with a mission as big as ours, plenty of learning and growth. We have an optimistic worldview, and within it we will always strive to make things better. Together, we can positively impact billions.

2015 Highlights

Introducing SingularityU Global

We developed infrastructure and a new team to support our global community members. The coming year will bring many new alumni chapters, in-country Summits, and local salons and gatherings organized and delivered by local alumni around the world. This team will support alumni in sharing their passions, engaging with local communities, and catalyzing impact.

GSP

Another successful GSP wrapped, and we added a new cohort of accomplished participants to the SU community. This year, GSP was 100% free for participants thanks to a generous grant from Google and numerous Global Impact Competition sponsors. This was important to our efforts to increase access for qualified candidates from diverse backgrounds. We’re proud that 2015’s GSP drew participants from 45 countries and was 53% women. We also announced changes for GSP in 2016, when it will be known as the Global Solutions Program, and added two additional Global Grand Challenge focus areas: Disaster Resilience and Prosperity.

The First SU Labs Startup Accelerator

For the first time in SU history, we hosted the SU Labs Startup Accelerator. After 10 weeks on campus, seven great startups presented at Demo Day and blew us away with their technologies, progress, passion, and hustle. Teams received $100,000 to further their startups. Our accelerator welcomes both for-profit and nonprofit startups with a mission.

Global Recognition for SU Companies

SU companies earned global recognition and awards this year. SU Labs Accelerator startup Be My Eyes was highlighted by Popular Science in its Best of What’s New in 2015. SU Startup Network nonprofit member Calorie Cloud was endorsed by UNICEF ambassador P!nk. Disaster Mesh and iHelmet were finalists for the Verizon Powerful Answers Awards. Matternet, 1QBit, and Blue Oak Resources made the World Economic Forum’s Tech Pioneers of 2015 list. The Made In Space team was among Forbes’ “30 Under 30” to watch and has some really big news coming out soon—stay tuned.

Impact Challenge

In partnership with Lt. Governor Gavin Newsom, our 2015 Impact Challenge focused on solving the California drought. SuntoWater won an Entrepreneurs-in-Residence post at SU Labs and a grant to further develop its solar-powered appliance that produces potable water from the air. Also notable was corporate partner Ingersoll Rand, whose team—inspired by a previous SU program—pledged to use corporate resources for positive impact and reached the finals of the challenge.

Expanding Executive Programs

SU Executive Programs remain consistently sold out, and we are expanding to meet demand and make programs more inclusive of geography, gender, and industry. Attendees came from 52 countries this year, bringing the all-time total to 78 countries so far.

New Development Partners

We’re thrilled to welcome new Development Partners Yunus Social Business, Amnesty International, and the World Resources Institute to our partnership ecosystem.

Partnering for Impact

Lowe’s Innovation Lab announced a partnership with Made In Space to launch the first-ever commercial-grade 3D printer to the International Space Station. This SU Labs collaboration highlights the creative ways our community members come together to propel humanity above and beyond, literally.

Congratulating Our Faculty

We are proud of our faculty and happy to see their work honored. Policy, Law, and Ethics Chair Marc Goodman’s Future Crimes was named Best Business Book for 2015 by Amazon and made the Washington Post’s 10 Best Books for 2015. SU Global Ambassador Salim Ismail’s Exponential Organizations was highlighted as a Top Business Book of 2015 by Fortune. Medicine & Neuroscience Chair Daniel Kraft became an Aspen Institute Health Innovator Fellow and was named Biggest Digital Health Evangelist by Rock Health.

Looking Toward 2016 and Beyond

Singularity University exists to inspire positive impact in the world. We want to share with you some of the steps we are taking to empower our community.

As a learning organization, we are constantly growing and working to incorporate our own teachings. We aspire to be—and are working hard at becoming—an exponential organization, and we are investing in technology and scalable processes to do so. We instituted dashboards and company-wide OKRs (Objectives and Key Results) for the first time this year. We are investing in key management positions with experience in these and other areas vital to helping us achieve our mission.

We believe strongly in our programs and the transformative experience they provide to participants. For that reason, we want to ensure they reach as many people as possible. In order to better scale in 2016, we’re rolling out Global Community and Digital Education programming. We have a lot of work to do in these areas to meet our bold, audacious goals, but we are making good progress.

We have several Global Summits being planned for 2016, including opportunities in India, Germany, Chile, the Netherlands, and New Zealand. We are also working closely with partners in Eindhoven, Netherlands to build out our first global location, and developing guidelines and frameworks for alumni to run several other programs at the local level. As our community develops local operations, we are putting in place the infrastructure to keep local SU communities connected as one global community.

On the digital side, we’ve run two successful digital education pilots and will roll the program out in 2016. In the future, alumni will have the opportunity to bring digital education programs to their communities.

We also continue to expand free content through several initiatives, including SU Videos, which features short-form content from across our live programs. We launched our first original digital series, “Ask an Expert,” and streamed various events in their entirety, including Exponential Medicine, the Future of Series, the Impact Challenge, and others. We want people all around the world to have access to Singularity University ideas, experts, and content, so we have posted many of our lectures on our SU Videos page, our YouTube channel, and Singularity Hub, and we expect our Digital Education initiative to produce digitally friendly content that can be shared more broadly.

Singularity Hub is on track for its most-trafficked year ever—a testament to its strong content, broadening interest in our areas of expertise, thought leadership, and community connection.

Founder Updates

Ray Kurzweil just gave the opening keynote at the Nobel Week Dialogue in Gothenburg, Sweden. He gave 54 lectures in 8 countries this year and won the 2015 Technical GRAMMY Award for Outstanding Achievements in the Field of Music Technology. He is currently working on three books: a novel titled Danielle: Chronicles of a Super-Heroine; its companion nonfiction volume, A Chronicle of Ideas, a book-length glossary to the novel; and The Singularity Is Nearer, a sequel to his influential 2005 book, The Singularity Is Near.

Peter Diamandis released his new book Bold, another best-seller of the year, and is increasingly known around the world as a thought leader on technology and disruption. He has also helped us build extraordinary relationships with some of the biggest companies and notable thought leaders across every industry. We anticipate Peter will bring many more incredible ideas and connections in 2016.

Our Impact

At the end of the day, the impact we make in the world is key to achieving our mission at Singularity University—which is why, in 2012, we became a Benefit Corporation. Accurately measuring SU’s successes and improving direct and indirect impact metrics are priorities, and our 2015 Impact & Benefit Report highlights some of our accomplishments. SU’s strength stems from creating transformational change: fostering an abundant mindset that catalyzes people and ideas into action. Today, that transformation is best measured by SU’s indirect impact through the SU community’s initiatives. The community is creating new organizations, innovating on existing ones, producing new research, development, and policies, mobilizing resources, and advocating for awareness and education. Though it is often hard to gather and measure these success stories from the greater community, we are proud of the progress to date. Collectively, our community members are making strides toward positively impacting a billion people.

Our community is made up of entrepreneurs, corporate partners, governments, development organizations, investors, and universities. When we bring these groups together with a mindset of solving Global Grand Challenges with exponential technologies, we have a unique ability to cross-pollinate and accelerate our positive impact. We are also developing a digital platform to support a global innovation ecosystem, leveraging multiple stakeholders from the SU community to enable collaboration and find solutions locally and globally. We do not pretend to know the exact playbook, but we are excited for the SU community to co-create it with us. This year, we began to see the fruits of partnerships between different members of that ecosystem, from the Impact Challenge by corporate member Lowe’s to our partnership with the Lt. Governor of California Gavin Newsom to address the drought problem in California, to collaborations between our development partners and our startup companies.

This year we have begun to integrate social impact into our external and internal activities, including creating “business for good” sessions in our Executive Programs and hosting over 20 Global Impact Competitions. In addition to our Benefit Corporation status, we also started the process toward B Lab certification, which will help us maintain external checks and balances through third-party audits of our social and environmental performance.

As a growing organization, we are always improving and maturing. We are in this mission together, and we welcome questions, feedback, and ideas from our community. One question we like to ask is, “Are we leaving the world better off by our actions?” Our answer is yes. The journey is only beginning, and we are glad that you’re all part of our community. If you want to get in touch with us, please email info@singularityu.org.

And Now for Fun…

Ray Kurzweil on the best gift he ever received:

At the age of eight, I was introduced to the Tom Swift, Jr. series of books. They taught me that the right idea has the power to overcome seemingly overwhelming challenges.  

What Peter Diamandis is hoping for in his stocking:

Simple… 4 small requests: 1) $100 billion in investable cash with which to commercialize the inner solar system. 2) Nanobots with which to connect my brain to the cloud. 3) A pharmaceutical solution to aging. 4) Instructions for cloning myself 10 times.

Would CEO Rob Nail let Aldebaran robot “Pepper” babysit his child?

Absolutely! However, Beckett has already proven himself capable of loving the NAO robot to death, showing it to be a very expensive toy for a toddler within about 15 minutes. In the case of Pepper, I think we would have to get a dog as well. There is an old automation joke that the factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment. In this case, the robot will feed the dog, and the dog will keep Beckett away from Pepper. Until the platform is a little more robust, a toddler will run circles around it and surely destroy it on the first day!

Chief Impact Officer Emeline Paat-Dahlstrom’s #1 wish for impact in 2016:

Somebody within the SU community discovers a breakthrough for cancer, or longevity, or world peace! But I secretly also wish somebody would find a disruptive form of space travel that would allow everyone to see the beauty of space and cherish, preserve, and protect our home planet we call Earth.

What Policy, Law, and Ethics Chair Marc Goodman recommends for holiday shopping to prevent cybercrime:

Don’t go online, don’t use the Internet, don’t use devices that contain batteries or plug into electrical outlets. Also, don’t use credit card, debit card, bitcoin, or company cash to buy anything. 😉 [Editor’s note: Bah, humbug.]

Medicine and Neuroscience Chair Daniel Kraft’s top tip for staying healthy this holiday season:

Don’t touch the fruitcake.

Energy and Environmental Systems Chair Gregg Maryniak’s New Year’s resolution:

Complete the business plan for the first B&B in space, Maureen’s Lunar Cafe, named for his wife. Tagline: Great Food—No Atmosphere!

Networks and Computing Chair Brad Templeton’s prediction on what year you could find an autonomous car under the tree:

The full-sized car won’t be something to buy for a long time. But Santa might send one to pick you up and take you places like an Uber in 2019. A toy version is a more interesting question—Santa might be able to buy one around the same time.

Adjunct Faculty Darlene Damm’s gift she’s most excited about giving:

Her 94-year-old grandmother’s favorite perfume (Red Door). Every day, she still gets completely dressed, does her hair and make-up and puts on her perfume. I think many people her age would have given up on such things, but I see it as her way of still showing up, still being part of society and the world. I love to hear that she is running out of perfume, as I know that even though she is getting very old, she hasn’t given up yet.

Artificial Intelligence and Robotics Co-Chair Neil Jacobson’s New Year’s resolution:

Fifty-minute level 3 yoga sessions at least three times per week. (He’s currently at two, so a 50% increase.)

EVP of Education and Innovation Carin Watson’s best tip for teaching impact to kids:

Build empathy through exposure to the problem and stories, then take action. It could be a clothing drive, toys for tots, or donating through your religious organization. The most important lesson to teach is that it’s not enough to feel sad or even to care deeply. You must do something about it, no matter how small. And ideally, not just during the holidays but year-round.

What Biotechnology and Bioinformatics Chair Raymond McCauley is most looking forward to this holiday season:

Sleep.

What VP of Faculty and Curriculum Nicole Wilson is most excited for in 2016:

Growing collaboration among and between faculty and staff, and setting a new bar for not just what we teach, but how we teach it.

SU Labs Managing Director Pascal Finette’s best gift ever received:

His wife. She was born on Christmas Day.

Happy Holidays!


Image Credit: Shutterstock

Why We Need Government to Evolve as Fast as Technology

From deep learning to gene editing, the world of technology is moving fast. But at Singularity University, we believe amazing tech is only half the equation. Equally important is how we use technology. The most pressing question: How can technology address and perhaps solve the biggest challenges facing humanity?

We call these the Global Grand Challenges, and they include energy, environment, food, water, disaster resilience, space, security, health, learning, and prosperity.

And we recently launched a new Global Grand Challenge: governance.

We believe it’s not only possible to solve governance, but that doing so is essential to solving all other GGCs. Like other Grand Challenges, with the right social and political will we could already solve for governance with existing tools and capacities—even if technology stood still. However, in a world of exponentially changing technology, we are presented with both new opportunities to overcome human limitations and entirely new and unpredictable challenges.

But first, what does the end-state objective for governance look like to us?

To create a world with equitable participation of all people in formal and societal governance in accordance with principles of justice and individual rights; free from discrimination and identity-based prejudices; and able to meet the needs of an exponentially changing world.

Whether it’s lack of trust, corruption, or not being fit for purpose, evidence of poor governance can be found around the world. A recent Pew survey shows that only 19 percent of Americans say they can trust government “always” or “most of the time”—which is close to the lowest level in the past 60 years.

The World Economic Forum estimates the cost of corruption is $2.6 trillion—more than 5 percent of global GDP—with over $1 trillion paid in bribes each year.

Beyond the numbers, the real “face” of corruption is the girl who is denied schooling because funds for building schools, paying teachers, and other needs are diverted elsewhere. Or the mother who cannot access basic health care for her children because governmental funds are diverted.

An illustrative example of the fit-for-purpose challenge is the 2015 United Nations Climate Change Conference, currently underway in Paris.

The stakes are high—as Newsweek magazine put it, “leaders and high-level officials from 196 parties have 12 days to reach an accord that could save the planet.” And yet, although 97 percent of climate scientists agree that climate change is real and caused by human activity, significant percentages of people around the world are still in denial, and government policies do not reflect the severity and magnitude of the global consequences.

Are national and global governance bodies fit for the task of creating the right forward-thinking policies to prevent a global catastrophe?

As we increasingly shift into a globally connected world—environmentally, economically, socially, technologically—legacy governance structures based on nation states may no longer be able to meet emerging challenges. Both formal and nonformal governance structures will struggle to keep up with the exponential and accelerating pace of change.

Examples abound of new technologies that are already straining governance structures: drones for civilian use, self-driving cars, genetic engineering, crowdfunding, artificial intelligence, cybercrime, and others.

We don’t need policies that lag behind but policies that rapidly adapt and enable innovation, equity, and safe regulation. This applies equally to all organizational governance structures—from large corporations to small startups.

While technology is posing new challenges to governance, it is also rapidly evolving new approaches to governance.

Blockchain (the technology underpinning digital currencies such as Bitcoin) can be applied to almost any contract, increasing transparency, accountability, and efficiency. Virtual reality can be used to increase empathy and let people “feel the future” likely to result from policy options. New forms of direct democracy and consensus decision-making are also emerging, such as liquid democracy, adhocracy, Loomio, and holacracy.
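To make the transparency claim concrete, here is a minimal sketch in Python of the idea underneath these tools—a toy hash-chained ledger, not Bitcoin or any real platform, with invented records. Each block commits to the hash of the block before it, so quietly rewriting history breaks every later link, and anyone holding a copy can detect the tampering.

import hashlib
import json

def block_hash(block):
    # Hash a block's contents deterministically.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    # Link each new record to the hash of the previous block.
    prev = block_hash(chain[-1]) if chain else "genesis"
    chain.append({"index": len(chain), "record": record, "prev_hash": prev})

def verify(chain):
    # Re-derive every link; an edit to any block breaks all links after it.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, "school grant #41 disbursed: $10,000")  # hypothetical entries
append_block(chain, "contractor paid: $7,500")
print(verify(chain))   # True

chain[0]["record"] = "school grant #41 disbursed: $2,000"   # quiet tampering
print(verify(chain))   # False—the edit is detectable

Real systems add distributed consensus and digital signatures on top, but the tamper-evidence that makes a shared ledger attractive for contracts and public spending is already visible here.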

Current governance structures were developed over thousands of years, and while they may have been suitable for a slow-changing and parochial world, they are ripe for disruption. While technology changes at exponential rates, governance tends to change at linear rates. This discrepancy must be rectified to ensure that humanity not only avoids a range of catastrophic consequences, but also enables innovation and creates an equitable world where all Global Grand Challenges are solved.

Adaptability: The Entrepreneur’s Way

Einstein Ntim: Entrepreneur 
Graduate Studies Program 2015 Graduate
Ghana, United Kingdom


“Life can only be understood backwards; but it must be lived forwards.”
―Søren Kierkegaard

Great leaders and entrepreneurs are not born overnight—our lives are ever-unfolding, a welding together of many pieces, twists, and turns.

In these journeys, the quality of adaptability stands out more in some than in others—regardless of circumstance, luck, or fate, it somehow all ties together in the end. And though we all have distinct chapters in our journeys, the way we choose to tell them varies greatly from one person to the next.

For Einstein Ntim, many of the foundational threads of who he is—once a professional rugby player in the UK, a published poet, a London School of Economics graduate now turned entrepreneur—would make a great story.

But those are merely things Einstein has collected along the way. They don’t paint the full picture; they don’t make Einstein, Einstein.

[Photo: Einstein Ntim]

Einstein sees and experiences the deeper notes of life. He is insightful, peaceful yet pointed, and gives the sense that he’s lived many lives—and he has.

After sitting down with Einstein to learn about his own journey, I walked away with one lingering thought: in a world where one of our earliest qualities—adaptability—is in hot demand, what better characteristic could we hope for in our leaders of tomorrow?

But the journeys of entrepreneurs are unknown to many.

So what can adaptability look like?


Einstein was born and raised in Ghana, an only child. His upbringing was comfortable, even plush in many ways. But when Einstein was eight, his father became involved in Ghana’s political scene, opposing the ruling party of the previous twenty years, which had been a military dictatorship. “It wasn’t as severe as some of the other nations you hear about. Ghanaians are quite peaceful people,” Einstein told me.

As the next election neared and tensions heightened, Einstein’s parents made the difficult choice to have him live in a remote location for two years, where his life became vastly different than it had been before—lacking electricity, running water, and most significantly, communication with his family and friends.

Einstein recalls, “This is where, I suppose, one of my journeys started.”

But Einstein has had many journeys.


He returned home two years later and moved to the UK with his family for a new start. Though Einstein began his time in the UK enrolled in one of the worst grade schools in the country, he made the most of things—earning a scholarship to a top private school for his final two years before university—and later graduated from the London School of Economics (LSE) with a degree in economics and social policy. During university, Einstein focused on entering the financial industry and interned at Deutsche Bank, UBS, and State Street.

Upon graduating from LSE, he traveled to India for work and saw a level of disparity that jolted him.

“I grew up partially in Ghana, and I’ve seen difficulties, but I hadn’t seen such disparate difficulty and inequality.”

While in India, Einstein also had his first glimpse of the inner workings of a social enterprise, working with a woman who had founded an ambulance service and grown it from scratch—the beginning of an internal pivot toward making positive social impact a career directive.

It was later, while working in industrial automation in China during the British Council Ambassadorship Program, that Einstein discovered Singularity University’s Graduate Studies Program through a mentor who was an alumnus. After looking into GSP further, and inspired by Peter Diamandis, Einstein decided to apply.


During GSP, Einstein and his team launched Bloomer Technology to address cardiovascular disease in women, which often goes undetected in its early stages and currently causes more deaths in women than cancer, HIV, and malaria combined.

The product integrates powerful sensors into clothing to give women greater insight into the day-to-day workings of their bodies and to surface early warning signals. The team ultimately wants to expand Bloomer to other diseases, like cancer, and take the technology to the people who most need it—no surprise after hearing Einstein talk about the global perspective he holds: the world needs to progress in totality, not just regionally.

“If we’re only ever as strong as our weakest link, the fact that one of the weaker links is not in as strong a place means we’re lacking as a species, as the human race altogether.”

Having completed GSP, Einstein found himself at another crossroads in his journey—deciding whether to begin the PhD program at Harvard University to which he had been accepted, or to defer and focus entirely on Bloomer Technology.


At the end of our conversation, Einstein reached into his bag and pulled out a copy of A New Way, his published book of poetry, and read a poem titled “Can the Heart Think,” which was inspired by a philosophy course he took in China.

“The professor was telling us the heart is almost like a thinking organ, and that it’s got a bigger force than even our brains.”

This beautiful take on the heart prompted Einstein to reflect on the times in his life when he too put his heart fully into whatever he was doing—rugby, work, academics, the army—and how doing so transformed his experience of that very thing.

So, what makes an adaptable leader? Is it experiencing many walks of life? Or is it this quality of full-hearted dedication to any pursuit at hand? I suggest—both.



Connect with me on Twitter @DigitAlison or @SingularityHub, and tell me what inspires your work.
You can follow the full series here or learn more about Singularity University’s Graduate Studies Program.

Photography by: Alison Berman

Subscribe to Exponential Thinkers weekly newsletter to receive each new story and additional curated content. 

Argentina’s Plan to Grow a Culture of Innovation From the Classroom Up

Last week, the people of Argentina elected a new president.

Mauricio Macri, the mayor of Buenos Aires, billed himself as the candidate of change during his presidential run, even naming the alliance of parties supporting him, “Let’s Change.”

Recently, I had the opportunity to sit down with a member of Macri’s new cabinet, the Minister of Education, Esteban Bullrich, to discuss the future of learning.

Bullrich’s administration spent the past five years overhauling the educational system in Buenos Aires, working closely with Macri to implement widespread change to schools and open a direct, constructive dialogue between teachers and government officials.

Now, with Macri as president-elect, these educational reforms may be implemented nationally throughout Argentina.  

The State of Education in Argentina

With a stagnant economy, slow job creation, and high inflation, Argentina is currently facing some big challenges. But Minister Bullrich, looking to the future, believes innovation can be an engine of positive economic change—and that change begins in the classroom.

Today, Argentina’s educational system (like many, including the US’s) is largely built on subjects deemed important decades ago—and Argentine students are finding their education so irrelevant that they are dropping out of school in droves.

In 2013, the dropout rate for both private and public universities in Argentina was a staggering 73%. One of the biggest reasons students drop out of secondary school, according to a recent survey, is “lack of interest.”

When Minister Bullrich and his team began implementing changes five years ago, he admits it was a bumpy ride. During his first two years in office, Bullrich experienced twenty teacher strikes.

The strikes stopped when he began handing out his personal cell phone number to parents and teachers instead of communicating through the unions. Since then, his administration has completely reworked the required curriculum for high schools and enrolled 90% of kids between the ages of three and five in the city in early education programs.

(Watch Esteban Bullrich (and others) discuss the Future of Learning below.)

Overcoming Fear

I asked Minister Bullrich to tell me about the biggest challenges he was facing in reforming the city’s educational system.

Fear, he said. Fear of change, in particular, has caused Argentina to fall behind in terms of innovation. He used a playful example to illustrate his point:

“We are discussing upgrades and updates to our car instead of building a spaceship. We need to build a spaceship, but we don’t want to leave the car behind…We might take out this old cassette deck from the car and put in our MP3, and it looks like a big change. But the truth is, it’s still a car with four wheels, and it goes on the road. It doesn’t fly. That’s why education policy is not flying.”

Today, change seems to be the only constant. Fearing change is like being strapped to a speeding train and digging your heels into the ground to try to stop it. The pace of technology isn’t slowing down, and those who refuse to keep up will, unfortunately, be left behind.

And that is precisely what Minister Bullrich wants to avoid.

Instead, he plans to help ease people’s fear by showing them what they can do by harnessing change. He says, “We need to use the strength of innovation within the classroom, within the school, to make kids—especially kids—lose that fear of change—because when they own the change, they’re part of the change.”

This fear isn’t unique to Argentina. People all around the world are wondering how to stay relevant and ahead of the curve. So, how is Buenos Aires building an educational system for the future?

Schools That Foster Innovation and Generate Creators

One of the major reforms enacted by Mayor Macri and Minister Bullrich allows secondary schools to define their own curriculum. While schools are still held to federal curriculum guidelines, they are no longer restricted by a city-wide curriculum and are free to shape classes to the current needs of students rather than teaching an archaic syllabus.

Further, as part of the reforms, every student in primary school is given access to an internet-connected computer, and a new free city-wide network allows students to access the internet from their homes as well.

Students are now being taught how to code in primary school, and once they reach secondary school they are required to study coding and entrepreneurship. Why entrepreneurship? So students lose their fear of uncertainty and of innovation, says Bullrich.

“We [must] build a system where there is no fear of change because there’s no fear of failure, because failure is not condemned. Failure is part of [the] innovation process and is incentivized. Failing is okay as long as you keep on trying.”

Bullrich sees a problem in how children are raised to fear failure because they equate failure with punishment. Even as a parent, he has to remind himself to practice what he preaches—teaching his own children that failing and trying again is part of everyday life.

Incentivizing Teachers to Be the Innovation Engine of Society

“If we want education to become the innovation engine of our society, we need teachers…to be the most innovative people [of all]—because they need to train innovators.”

Unfortunately, many teachers are scared of the changing landscape and resistant to change in their classrooms. Bullrich argues that the way to incentivize teachers to try new things is to reduce the cost of failure. He says they are flying Argentine teachers to Finland and Sweden to visit innovative schools and learn from them, and they come back with their minds blown.

Behind all of these reforms is a core mentality Bullrich believes is crucial for future Argentine leaders to grasp: “Nothing has to be accepted as given.”

Bullrich wants students and teachers to develop the courage to change things that aren’t working, to continue trying until they succeed, and to build the future they envision.

To get updates on Future of Learning posts, sign up here.

Image Credit: Shutterstock.com

Building the Maker Movement in Baghdad


Ali M. Ismail, Entrepreneur
Graduate Studies Program 2015 Graduate
Baghdad, Iraq


Ali Ismail had been patiently waiting in Baghdad for the arrival of a visa the US embassy was mailing him. However, as the first week of Singularity University’s Graduate Studies Program began, Ali was still in Baghdad. And he was done waiting.

He found the number of the DHL manager in Iraq, went to their warehouse, and got to the source of the delay. A few days, three flights, and over 36 hours of travel later, Ali arrived in San Francisco and made his way to Singularity University only one week late.

“The post-war era in Iraq creates a lot of challenges, and a lot of opportunity.” –Ali

[Photo: Ali M. Ismail]

A self-taught developer, Ali studied materials engineering in college. Though he began his career in media, he quickly realized he was on the wrong path. His ultimate passion was entrepreneurship—something he had already begun to explore in college.

In 2012, Ali cofounded the first maker space in Iraq, Fikraspace. Now the largest maker community in the country, Fikraspace has hosted three events for entrepreneurs (called Startup Weekends) in Baghdad and two in Basra, in the south.

Ali’s central motivation—to give young, aspiring Iraqi entrepreneurs opportunities for training, mentorship, and investment—has also been largely fueled by his own efforts pursuing those very things for himself.


Alison: What is the entrepreneurial and startup ecosystem like in Baghdad? 

Ali: I think Iraqis are very entrepreneurial. In the ‘90s, Iraq was under many UN sanctions. If you worked for the government, you couldn’t make much money, so most Iraqis started their own businesses. The culture is there, but it’s not the entrepreneurial ecosystem that’s here in Silicon Valley. It’s not as industrialized as it is here, and most of the businesses are different. Most are not in tech.

The tech ecosystem in Iraq really started about three years ago, as we were starting the maker space. Through the maker space, we organized large events such as Startup Weekends. We’ve organized three in Baghdad and two in Basra in the south. We got a lot of traction with young people because they like the idea of being their own bosses. We are one of the youngest nations in the world. Most of the population is 16 to 25 years old.

There are a lot of things that entrepreneurs can build, and there are a lot of untapped opportunities, even in the infrastructure. Almost nothing in Iraq is automated.


Alison: What have been some of the challenges of bringing Startup Weekend to Baghdad?

Ali: There’s a gap before Startup Weekend and a gap after—before Startup Weekend it’s the skills, and after Startup Weekend it’s the investments.

So, we are giving free workshops before Startup Weekend in mobile programming, web programming, and design for young people. And after Startup Weekend, we are trying to set up mentorships and investment. We are planning to expand the maker space into a coworking space and eventually an accelerator in Iraq.


Alison: Beyond cofounding the first maker space in Iraq and continuing to nurture the maker community, what is your source of inspiration?

Ali: When I was a kid, I wanted a place where I could learn from other people and also share what I’ve learned. This was the most difficult thing in Baghdad, in part because I didn’t have much access to the internet. Having a maker space would have been so great for me—so I started it at first to meet other people who shared my interests.

There are a lot of boot camps here in the US that provide skilled training for developers and designers, or that just provide human capital for startups. I really want to do that with our maker space. And we are doing it, but not at that large of a scale.


Alison: How would you like entrepreneurship in Baghdad to evolve and improve?

Ali: I hope to have more people from outside the country come to Iraq—like investors, thought leaders, mentors—and also to send more people from Iraq to Silicon Valley.

I also hope the mindsets of some investors in Iraq will change. Many of them are mostly investing in established business models like restaurants, malls, or entertainment. They are not taking the risk to invest in innovation. I want this to change, to have more money invested in young people.


This interview has been edited and condensed.


Connect with me on Twitter @DigitAlison or @SingularityHub, and tell me what inspires your work.
You can follow the full series here or learn more about Singularity University’s Graduate Studies Program.

Subscribe to Exponential Thinkers weekly newsletter to receive each new story and additional curated content. 

Photography by: Alison Berman

Exponential Medicine: The Most Detailed Snapshot of Human Health in History


Our bodies are extremely complex, interrelated, and ever-evolving patterns of information—from DNA to physiology to vital signs. But until modern times, most of that information was hidden from view. We didn’t know there was a glitch in the Matrix until something obvious tipped us off, and by then it was probably too late.

The theme of information has been front and center at Singularity University’s Exponential Medicine conference in recent years. Whether it was a talk about the declining cost and increasing quality of DNA sequencing or about improving wearable sensors, the central quest was gathering and recording the information that describes our health from top to bottom.

And these efforts are ongoing.

[Photo: Brad Perkins MD MBA, Chief Medical Officer, Human Longevity Inc]

Examples include comprehensive handheld health sensors (Scanadu), wearables to measure brain activity, and virtual assistants to pull it all together and suggest healthy behaviors (Lark). Human Longevity Inc’s Health Nucleus combines your genome, metabolome, microbiome, clinical imaging, and health history into a single comprehensive snapshot.

While giving a nod to continued data-gathering efforts, however, this year’s Exponential Medicine also emphasized next steps. How can we make all this information useful? Health Nucleus, for example, isn’t making diagnoses yet—which is why Human Longevity has a machine learning office in Silicon Valley and a stable of software engineers in Singapore.

Last year, I spoke to Raymond McCauley, SU’s biotech track chair, about Illumina’s latest genomic sequencer. The firm claimed that the sequencer, when cranking at full capacity, could sequence a high-quality human genome for $1,000—a long-awaited mark. McCauley said the cost would go lower still, but the cutting edge would now shift to figuring out how to make sense of all that information. Data science, as they say, would be sexy.

[Photo: Jeremy Howard, Founder and CEO of Enlitic]

And he was right. At one point during the conference, Daniel Kraft, founding executive director and chair of Exponential Medicine, asked the audience how things were going—anything that popped into their heads.

Someone stood up and said, “Jeremy Howard is a rockstar.”

Howard, a data scientist and previously president and chief science officer at Kaggle, spent the last year with his newly founded company, Enlitic, training deep learning software to diagnose cancer from medical images of the lungs. Howard says it’s now better than a panel of top radiologists at the task.
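For a sense of what “training deep learning software” looks like in practice, here is a minimal sketch in Python with PyTorch. Enlitic’s actual models and data are not public, so the architecture, patch size, and labels below are all assumptions for illustration; a production system differs mainly in scale, data curation, and clinical validation.

import torch
import torch.nn as nn

# A tiny convolutional network that scores image patches as benign/malignant.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # grayscale scan patches in
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                             # two classes out
)

# Random tensors stand in for a labeled training set of 64x64 patches.
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(5):                                # tiny training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

The striking thing is how little of the pipeline is hand-crafted medical knowledge: the network learns its own features from labeled examples, which is exactly why the approach improves as the pile of imaging data grows.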

Deep learning may play a key role in analysis beyond medical images because it thrives on data. The more you feed it, the better it gets at finding patterns. We’ll need all the help we can get if we’re to begin making more practical connections between our genes and disease—and then doing something about it—because manually analyzing and comparing millions of three-billion-base-pair genomes isn’t remotely realistic.
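That scale argument is easy to check with round numbers—the figures below are assumptions for a back-of-envelope estimate, not measurements:

# Back-of-envelope: a million genomes is merely hard to store,
# but hopeless to compare pair by pair manually.
BASES = 3_000_000_000        # ~3 billion base pairs per genome
GENOMES = 1_000_000

compact_bytes = GENOMES * BASES * 2 // 8          # 2 bits per base (A/C/G/T)
print(f"compact storage: ~{compact_bytes / 1e15:.2f} PB")   # ~0.75 PB

pairs = GENOMES * (GENOMES - 1) // 2              # all pairwise comparisons
print(f"pairwise comparisons: ~{pairs:.1e}")                # ~5.0e+11

Even before counting the far larger piles of raw sequencing reads behind each finished genome, half a trillion pairwise comparisons is work only software can do.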

[Photo: Atul Butte MD PhD, Director of the Institute for Computational Health Sciences, UCSF]

But useful health data isn’t necessarily all new. There’s lots of free information few are mining. Atul Butte, for example, has long said there’s a wealth of untapped data in public health databases. And for relatively little money you can go online and order experiments—you can even order more than one and compare them—to check your hypotheses.

He’s already used that data to create businesses (some of which he’s sold) and says there’s more than enough to go around.

“The data’s sitting there, just waiting for you,” Butte said.

As we digitize intimate information, it isn’t just about analysis; it’s also about how we access, store, and share it. If we can’t see our own information, or if we feel uncomfortable giving it to doctors or to that powerful deep learning algorithm in the cloud, what’s the point?

Opposed yet intertwined, information freedom and security are both critical challenges.

MIT Media Lab’s Steven Keating gave a particularly stirring talk about his experience with brain cancer. He’d never have known about his brain tumor if he hadn’t volunteered to participate in a medical imaging study. A few years on, when he discovered the tumor had grown to the size of a tennis ball, he underwent brain surgery.

[Photo: Steven Keating, Doctoral Candidate and Researcher, MIT Media Lab]

A medical selfie, he said, saved his life.

But he discovered something else too—getting healthcare providers to give you your own medical information is really hard. And that needs to change.

MIT’s Chelsea Barabas suggested that, although it’s early days, blockchain may offer a potential solution for securely sharing intimate data. In the future we might all have a digital health record (perhaps like Health Nucleus) and use blockchain to securely share our health data with only those trusted doctors we designate. Or we might use the tech to make it anonymous—without sacrificing quality—for health studies of unprecedented scale.
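Barabas didn’t spell out a design, but the anonymity half of that idea can be sketched with a keyed hash—everything below, from the key to the record fields, is invented for illustration, not anyone’s actual system:

import hmac
import hashlib

# Hypothetical secret, held by the patient or a trusted escrow—never by researchers.
SECRET_KEY = b"held-by-the-patient-or-a-trusted-escrow"

def pseudonym(patient_id):
    # A stable identifier: the same patient always maps to the same pseudonym,
    # but without the key the mapping can't be reversed or recreated.
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {
    "patient": pseudonym("jane.doe@example.com"),
    "mri_tumor_volume_cm3": 4.2,
}
print(record)

A blockchain would then supply the sharing half: an auditable log of who was granted access to which pseudonymous records, and when.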

Other conference highlights included Jamie Metzl’s talk on designer babies, and how it’s time we get realistic about the pros and cons and make policy. Eric Rasmussen, meanwhile, grounded the conference in the urgent and profoundly human matter of disaster response, showing how emerging tech can have an impact now. Katie Weimer and Scott Summit gave updates on how 3D printing is bringing more affordable, personalized care—from stylish and customizable casts and prosthetics to more accurate surgical guides and implants.


Alice Phoebe Lou, a talented singer-songwriter, performed for the second year running. And Alexandra Drane—founder, chief visionary officer, and chair of the board of Eliza Corp—paused her talk on the too-often-ignored social determinants of health (“life sucks disease,” she calls it) to open the shades on a stunning San Diego sunset.

Exponential Medicine is wide ranging and difficult to sum up in a single article. The elevator pitch? We’re piecing together the most detailed snapshot of human health in history, tools to understand what we’re looking at are advancing, and in the future, healthcare will be more proactive, personalized, and hopefully, more effective as a result.

We look forward to next year’s conference to find out how all that’s coming along.

Image Credit: Shutterstock.com

Technology Is the Great Amplifier of Our Humanity

Emilia Lahti: M.Sc., MAPP (Master of Applied Positive Psychology), Entrepreneur
Graduate Studies Program 2015 Graduate
Finland and the US

When faced with extreme adversity, why do some people persevere and grow from the challenge, while others call it quits and never make it to the other end of the tunnel?

“That’s the sisu thing. It has the potential to elevate you to your best self. And so it’s not that despite adversity we become something, but often because of it.” –Emilia Lahti

[Photo: Emilia Lahti]

Over the past few years, Emilia Lahti has been dedicated to researching the Finnish word sisu—a word embedded in the country’s culture, yet lacking a clear translation. In English, it is often rendered as “perseverance” or “maintaining determination through challenging times,” though these interpretations fail to capture the profundity of sisu.

Emilia holds a master’s degree in applied positive psychology from the University of Pennsylvania and is currently pursuing a PhD at the Aalto University School of Science and Technology in Helsinki, where her dissertation focuses on sisu as a psychological capacity.

Beyond academics, however, uncovering the full meaning of sisu has also been a personal journey for Emilia. “Research is often, in fact, me-search,” she says.

When investigating how humans triumph against all odds, it’s only appropriate that the investigator has accomplished this very thing. For Emilia, that is undeniably the case.

“Educating the mind without educating the heart is no education at all.” –Aristotle


Alison: You start your TEDx talk by saying, “Research is often, in fact, me-search. We tend to get interested in the things which have some personal significance for us.” How has this idea played a role in your work and life?

Emilia: A lot of my work revolves around understanding how humans endure extreme adversity, what enables us to push through it, and what enables growth. Sometimes the things that happen to us, the good and the bad, can be very transformative. There’s a lot of research on post-traumatic growth.

That happened to me five years ago. I was in an abusive relationship for a while, and when it finally ended, I felt completely lost. But as a result of that time, my eyes were opened to this horrible problem of domestic violence, which is just everywhere.

And so the reason I wake up each morning is to find ways to help those who are still imprisoned, who are living in situations where they can’t reach their full potential, or who are suffering somehow. I’m excited about helping people heal, because I’ve been so lucky to have people who have helped me find my way back to this human whom I was meant to be.

Sisu simply translated is this indomitable determination when we’re facing extremely hard, difficult situations in our life.



Alison: In your work you examine deeply human themes like perseverance, resilience, and grit, but you say sisu is slightly different. There isn’t a direct translation of sisu, but in your own words, can you try to explain it?

Emilia: Sisu simply translated is this indomitable determination when we’re facing extremely hard, difficult situations in our life.

We’ve had this word for thousands of years, and when I started looking into it, I wasn’t able to find an answer to whether sisu was a character trait or a tendency. Is it something only Finns have? Or is it just a myth? And I thought, “We keep talking about it, but we don’t even know what we’re talking about.” That was one of the reasons I started researching sisu.

Every single human has a story to tell and there’s a moment when we have to reach beyond what we thought was physically or mentally possible. It’s like a second wind of our physical or mental endurance.

I love this idea of pushing beyond, because all the advancements of modern society are based on our ability to reach further, explore, and go into the unknown.



Alison: How do you think this relates to the struggles and triumphs of entrepreneurship?

Emilia: I think sisu is extremely relevant to entrepreneurs because it’s all about seeing something that is not yet there—seeing into the future, or seeing around the next corner, and trusting that there is something out there. In a more philosophical sense, at its deepest level sisu is about awakening potential.

It is about seeing into what we might be, into the dream, beyond what Aristotle describes as our “actual reality,” as opposed to “potential reality,” and its limitations. Instead of stopping where we feel our abilities end, we push through the barrier. This barrier might be fear, uncertainty, physical pain, or lack of trust in oneself, among many other such limits. As we step past this boundary, we redefine it as a frontier—what could have been the end has become the beginning.

I love this idea of pushing beyond, because all the advancements of modern society are based on our ability to reach further, explore, and go into the unknown.



Alison: Having a master’s in applied positive psychology from the University of Pennsylvania, how does positive psychology influence your thinking here in the world of emerging technology?

Emilia: Positive psychology and exponential technology: they’re totally aligned. I see Ray Kurzweil as the tech world equivalent of Marty Seligman, the founder of the field of positive psychology. They’re both mavericks, thinking about our future, flourishing, and the upward spiral.

Positive psychology approaches life by looking at what’s good, and what we can take to amplify ourselves, instead of focusing on what’s broken in systems, in relationships, or in humans.

It makes a huge difference in how you see the world. Do I see it through this negative lens that people are out to get me? Or do I see that people are well-meaning? Do I assume the best?

Technology is not a miracle drug. It’s not going to just go and fix everything if we are corrupted in our minds and in our hearts.



Alison: You talk a lot about the idea of the power of collective conversation. At Singularity University, where is it most important for that collective conversation to be had, especially in terms of the powerful new technology entering our world?

Emilia: Kentaro Toyama used to speak about the idea that technology amplifies what’s already out there. I’m very interested in finding ways to cultivate virtues such as compassion and kindness—all the things that actually help us relate to each other. Then exponential technology can meet this fertile ground, and the results will be more beneficial for humankind. Technology is not a miracle drug. It’s not going to just go and fix everything if we are corrupted in our minds and in our hearts.

One of my favorite quotes is from Aristotle, “Educating the mind without educating the heart is no education at all.”

Alison: I love this idea of how technology amplifies what’s already there.

Emilia: It really stuck with me—that one sentence. Technology can help to bring more people into this global conversation, but we need to also talk about the quality of that connection. How are we using the technology to facilitate connection? Is it something that can help empower people’s voices? That is something we have to decide. We have to create the infrastructure—it’s not going to happen just by itself.

And this is part of the global consciousness discussion. It’s definitely a global challenge—how to make people more aware.

We have this massive power to open doors for each other, but also, to close them. So let’s make sure that our behavior, when we encounter people, is the kind that opens doors and elevates people.



Alison: If you look back at your life up to this point and headline three different stories within the story of your life, what headlines come to mind?

Emilia: I would say one might be…“Who Knew?”

I think it is so important to keep that in mind—when you are going through tough times, you never know what’s around the next corner or will be there when you wake up the next day. We just have to keep going. Another is “The People”: surrounding ourselves with people who create that safe space for us to be who we are.

Alison: What an amazing idea—”Who knew?”

Emilia: The rad pop singer Pink said in one interview, “When I’m completely happy, I’m totally useless.” I’m the kind of person who needs the struggle. I need to have some crazy goal. I guess it’s the so-called entrepreneurship gene.


Alison: Final question, I heard you once say, “We pass on knowledge through stories.” What do you want to pass on through yours?

Emilia: We have this massive power to open doors for each other, but also, to close them. So let’s make sure that our behavior, when we encounter people, is the kind that opens doors and elevates people. I think that’s the only way to really change the world for the better.

This interview has been edited and condensed.


Do you believe in the power of altruism? Connect with me on Twitter @DigitAlison or @SingularityHub, and tell me your thoughts.
You can follow the full series here or learn more about Singularity University’s Graduate Studies Program.

Photography shot by: Alison Berman

Subscribe to the Exponential Thinkers weekly newsletter to receive each new story and additional curated content.

 

Exponential Medicine: Deep Learning AI Better Than Your Doctor at Finding Cancer


Jeremy Howard opened his machine learning talk at Exponential Medicine by noting he would be presenting from his laptop. Something epic had just happened, and he had to include it. “My previously created talk was slightly out of date by the time I got on the plane,” Howard said. “So, we’re going to do it a little bit on the fly.”

What had him so excited?

Enlitic CEO Jeremy Howard.

On Monday, Google released the code for its deep learning software TensorFlow into the wild. Deep learning is responsible for some of Google’s most advanced services, including recent additions like auto-reply emails and image search. But by making the code free to anyone, the company hopes to accelerate progress in deep learning software and the machine learning field more generally.
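
For a sense of what releasing the code means in practice: anyone with Python can now define and run a computation graph in a few lines. Here’s a minimal sketch using the 2015-era graph-and-session API (later TensorFlow versions changed this interface):

import tensorflow as tf

# Define a tiny computation graph: two constants and their sum.
a = tf.constant(2.0)
b = tf.constant(3.0)
total = a + b

# In the original API, nothing runs until a Session executes the graph.
with tf.Session() as sess:
    print(sess.run(total))  # 5.0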

“Google is five to seven years ahead of the rest of the world,” Chris Nicholson, who runs a deep learning startup called Skymind, told Wired. “If they open source their tools, this can make everybody else better at machine learning.”

That’s a big deal because the field is already moving incredibly fast.

Howard noted that long-imagined capabilities like real-time translation and computer-generated art didn’t exist just a few years ago. Even Google’s auto-reply emails (recently announced) were an April Fool’s joke back in 2011.

Computers are now capable of all of these things and more.

“So, something amazing has happened that’s caused an April Fool’s joke from just four years ago to be something that’s actually a real technology today,” Howard said.

That something in a broad sense is machine learning—where algorithms learn by example instead of being programmed by hand. From Google search to Amazon’s recommendation engine, machine learning is everywhere. In medicine, machine learning has been used to analyze CT scans of lungs and help identify hundreds of new features that doctors can use to better diagnose and estimate a prognosis for cancers.
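
To make the distinction concrete, here is a toy “learning by example” sketch using scikit-learn and fabricated data: rather than hand-coding a diagnostic rule, we fit a classifier to labeled examples and let it infer one. (Illustrative only, not any system described in this article.)

from sklearn.linear_model import LogisticRegression

# Feature: [nodule diameter in mm]; label: 1 = confirmed malignant (made up)
X = [[2.0], [3.5], [4.0], [9.0], [11.0], [14.0]]
y = [0, 0, 0, 1, 1, 1]

# The "rule" is learned from the examples, not programmed by hand.
model = LogisticRegression().fit(X, y)
print(model.predict([[5.0], [12.0]]))  # e.g., [0 1]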

But these are all based on an older, more hands-on version of machine learning. The latest advance is called deep learning, and it’s even more powerful and independent.

Image recognition is probably one of the most lauded examples of how quickly deep learning is moving. In 2010, the error rate in the world’s top image recognition competition was 28.2%, nearly six times higher than it is today. Earlier this year, Google and Microsoft announced their deep learning algorithms had become better than human at the task, boasting error rates of just 4.8% and 4.94%, respectively.

But Howard’s own story is perhaps even more revealing of the fast pace. When he presented at Exponential Medicine last year, he said, practical uses for deep learning were just beginning to make their way into the world, and into medicine specifically.

“Since I was here last year, all those exponentials happened,” Howard said. “We now have Google auto-reply. We now have Skype automatic translate. We now have automatic art generators. And furthermore, I said last year I was going to see if we could use it in medicine to improve the accuracy and efficiency of medical diagnostics.”

“And we’ve done that too. We’ve built a company called Enlitic.”


Last year, Howard’s Enlitic was embryonic. Their algorithm had figured out how to recognize dogs and different types of galaxies. Cool, but not the end goal.

Ultimately, Howard thinks using deep learning in medicine can be hugely impactful—not just in the US, but in the rest of the world too—by providing modern medical diagnostics to the four billion people in the world who don’t have easy access to doctors.

It’s a big gap. According to the World Economic Forum, Howard said, at the present pace (varying a bit between countries), it would take some 300 years to train enough medical experts to meet the needs of the developing world.

So, what was Enlitic able to achieve in the last year?

“Well, we built it,” Howard said. “We started with a million patients’ worth of medical records, and we built a deep neural network of the human body.”

So far, Enlitic’s deep learning system is good. Really good. Howard said they intend to release their results in peer-reviewed journals, but here’s a glimpse behind the scenes.

They fed their algorithm lung CT scans to diagnose potentially cancerous growths and compared the results to a panel of four of the world’s top human radiologists.

The human radiologists had a false negative rate (missing a cancer diagnosis) of 7%. Enlitic’s AI? No false negatives. The human radiologists had a false positive rate (incorrectly diagnosing cancer) of 66%. The Enlitic AI, meanwhile, had a false positive rate of 47%. That is, the AI was notably better than the best humans at the task.
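
For readers who want the definitions behind those numbers, the two rates come straight from a confusion matrix; the counts below are invented purely to show the arithmetic:

# False negative rate: missed cancers as a fraction of actual cancers.
# False positive rate: false alarms as a fraction of actual non-cancers.

def false_negative_rate(fn, tp):
    return fn / (fn + tp)

def false_positive_rate(fp, tn):
    return fp / (fp + tn)

# Illustrative counts only: 7 misses out of 100 true cancers -> 7% FNR;
# 66 false alarms out of 100 healthy scans -> 66% FPR.
print(false_negative_rate(fn=7, tp=93))   # 0.07
print(false_positive_rate(fp=66, tn=34))  # 0.66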

“So, it works for early detection of cancer,” Howard said. “If you detect cancer early, your probability of survival is 10 times higher.”

In addition to diagnosing malignancy, Enlitic also gives a trail of crumbs—showing radiologists examples of similar patients and what happened to them—to help the doctors understand its estimate. And Howard said they’ve also used it to help doctors analyze x-ray images 4,000 pixels wide to spot bone fractures just a few pixels across.

“Previously this was something that just didn’t clinically work,” Howard said. “You couldn’t do this before the age of modern deep learning.”

Enlitic has raised $15 million and is working with Capital Radiology, a fast-growing Australian radiology company, to roll out the Enlitic software across their network. Down the road a bit, they hope to expand into Asia as well.

Deep learning is moving fast and will have broad impact. But Howard goes further. He likens the technology to the internet in the 90s, and that’s why he felt so compelled to update his talk. Although some people are already working on the tech, many more aren’t there yet. TensorFlow may open the floodgates and make deep learning move even faster.

“This might turn out to be as significant as the release of the programming language C back in the ’60s or ’70s,” Howard said. “This is the first time something has been released that brings this new way of working with computers out to the world.”

Image Credit: r2hox/Flickr

LIVESTREAM: Watch Exponential Medicine 2015 Live From San Diego

Each year, Singularity University descends on San Diego’s Hotel Del Coronado for Exponential Medicine, a four-day conference exploring how technology is driving monumental change in health and medicine. (Go here for a great introduction to Exponential Medicine from Singularity University executive chairman and cofounder, Peter Diamandis.)

Each day, the conference features talks by healthcare and technology leaders with deep expertise in biotechnology, artificial intelligence, digital and quantified health, regenerative medicine, neuromedicine and the brain, and more. (Check out the full schedule here.)

Singularity Hub will be covering the conference and bringing you articles and fresh perspectives throughout the week. And you can also follow along by watching the conference videos below. Stay tuned for a fascinating and hopeful glimpse into the future of medicine!

Image Credit: SD Dirk/Flickr

Meet the Engineer Bringing Wearable Sensors and AI to Autism Therapy

Andrea Palmer: Mechanical Engineering & Entrepreneur
Graduate Studies Program 2015 Graduate
British Columbia, Canada

We often cannot plan for the transformative moments in our lives. Though we try, these moments tend to occur when we’ve taken an unexpected turn; when we’ve planned for option A, and another opportunity comes out of left field. Looking back at these crossroads, it’s not always clear whether we found the path, or whether it found us.

So it went with Andrea Palmer.


Just a month before the 2015 Graduate Studies Program (GSP) kicked off, Andrea graduated from the University of British Columbia (UBC) with a degree in mechanical engineering. Earlier in the year, she won Singularity University’s Canadian Global Impact Competition (and the chance to attend GSP) by submitting a wearable device she developed to help predict meltdowns in individuals with autism spectrum disorder before they occur.

But Andrea hadn’t always wanted to be an engineer or go to Singularity University.

During high school, Andrea thought she might study kinesiology or become a high school calculus and physics teacher. It was her mother who, after Andrea told her about these goals, suggested, “Maybe you should think about engineering.”


At the time, Andrea’s older sister was studying computer engineering, so she visited her sister’s robotics class. “I loved robots, but noticed that nobody in the class of electrical and computer engineers understood the mechanics of robots, and so they had to simplify the design to make it functional.”

The class inspired Andrea to study mechatronics—a multidisciplinary field in engineering—instead of physics and calculus.

“I wanted to understand how full systems interact, because I don’t think you can just focus on one component or one individual part of the system, like the software and the electronics. I think we really need to understand how they come together.”


In her final year at UBC, Andrea enrolled in New Venture Design, an interdisciplinary course combining engineering and entrepreneurship. In the course, she and her team developed Reveal—a wearable device that uses smart sensors built into clothing to track stress and anxiety indicators in individuals on the autism spectrum, and send information to a caregiver’s smartphone in real time. It was during this course that Andrea and her team discovered Singularity University’s Canadian Global Impact Competition, submitted Reveal to the competition, and were selected as a winning idea. Since completing SU’s Graduate Studies Program, Andrea has turned Reveal into a company—Awake Labs.

“We measure the three leading physiological indicators of anxiety—sweat, heart rate, and skin temperature—and we track them in real time. Based on how they change over time, we can notify a parent or caregiver…the caregivers can then give feedback to the system, saying, ‘Yes. This was a meltdown,’ and track the antecedent, the behavior, and then what the consequence was.”
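
In broad strokes, that kind of pipeline can be sketched in a few lines: track each signal against the wearer’s own recent baseline and alert when several indicators drift far from it at once. The thresholds, fields, and logic below are assumptions for illustration, not Awake Labs’ actual algorithm:

from statistics import mean, stdev

def z_score(value, history):
    """How far a new reading sits from this individual's recent baseline."""
    return (value - mean(history)) / (stdev(history) or 1.0)

def anxiety_alert(heart_rate, skin_temp, eda, baselines, threshold=2.5):
    """Alert when at least two indicators are simultaneously far from baseline."""
    scores = [
        z_score(heart_rate, baselines["heart_rate"]),
        z_score(skin_temp, baselines["skin_temp"]),
        z_score(eda, baselines["eda"]),  # electrodermal activity ("sweat")
    ]
    return sum(s > threshold for s in scores) >= 2

baselines = {
    "heart_rate": [72, 75, 70, 74],
    "skin_temp": [33.1, 33.0, 33.2, 33.1],
    "eda": [0.40, 0.50, 0.45, 0.42],
}
# True -> notify the caregiver's smartphone
print(anxiety_alert(heart_rate=110, skin_temp=34.5, eda=1.2, baselines=baselines))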


Currently, the goal with Reveal is to prevent meltdowns before they happen, and the team is perfecting the Reveal algorithm to identify trends in the individual’s behavior. A little further down the road, they hope Reveal can enable individuals to become more independent and self-regulating by, for example, reducing the need for a caregiver to remind them when to do their exercises to de-escalate.

Beyond bringing Reveal to the autism community, Andrea ultimately wants to support individuals with dementia and general anxiety disorders with the technology.

“Once we get Reveal working well within the autism community, we’re going to look at whether we can use the technology to understand anxiety triggers for individuals with post-traumatic stress disorder. We’re also interested in looking at clinical stress and anxiety and want to improve care for seniors with dementia.”


Growing up as one of three daughters of a single mother, Andrea was raised to be independent. She has a passion for Brazilian jiu-jitsu and mixed martial arts and ran the Women in Engineering support program during college. Despite this fearless quality to her character, Andrea credits the incredible individuals in her life for supporting her at major crossroads: “I’m a big believer that you can’t really do anything by yourself.”

And one of those pivotal moments? Her decision to enroll in New Venture Design, which ultimately landed her at Singularity University, just a month after graduation—both experiences that have helped Andrea find her way to creating assistive technologies for underprivileged and vulnerable populations.

“I would have been working for some big engineering company doing typical engineering things if I hadn’t decided to take that entrepreneurship course.”



Connect with me on Twitter @DigitAlison or @SingularityHub, and tell me what inspires your work.
You can follow the full series here or learn more about Singularity University’s Graduate Studies Program.

Photography shot by: Alison Berman

Subscribe to the Exponential Thinkers weekly newsletter to receive each new story and additional curated content.

 

The Future of Health and Medicine: In Your Pocket, Continuous, and Connected to the Cloud


Take a deep dive into the convergence of technology and the future of healthcare at Singularity University’s sixth Exponential Medicine program, November 9–12 at the magical Hotel Del Coronado in San Diego. Join over 60 world-class faculty and 50 startups for main stage talks, breakout workshops, demos, beachside bonding, and more. We are down to our last 50 participant seats, so apply soon. (And to learn more, be sure to check out Singularity Hub’s coverage of last year’s Exponential Medicine.)

This short video (with some fun integrated graphics) is from an interview I did with El País (the largest newspaper in Spain). It highlights some of the emerging technologies and approaches that have the potential to shift health, medicine, and biopharma from an intermittent, reactive, physician-centric mode to an era of more continuous data and a proactive approach in which the individual is increasingly empowered and integrated into personalized wellness, diagnosis, and therapy.

The video is below and some associated thoughts follow.

Diagnostics: Era of the Digital Black Bag

Digital diagnostics is coming to the home. Examples range from eye, ear, and throat exams—using connected devices designed for the patient, like CellScope, MedWand, and Tyto—to cardiac exams enabled by low-cost EKGs (AliveCor and Kito). Some devices will even do automated interpretations (i.e., the EKG interpreted by the app and sent to the cloud), and the diagnosis and management of disease will increasingly happen outside of the usual clinic, ER, or hospital. Wearable patches that integrate multiple vital signs, such as those developed by Vital Connect and Proteus Digital Health, will enable more complex disease management and monitoring with ICU-level data—EKG, respiratory rate, temperature, position, and more—outside of the clinical environment.

Connected, continuous and contextual measurements integrating behaviors detected by smartphone and internet of things (IoT) metrics—ranging from movement to social network activity—will be increasingly used in proactive mental health. Pioneers in this space include Ginger.io and technology platforms like Beyond Verbal (which analyzes the voice to detect emotion).

Altogether, as the sensors, wearables, and other elements become commoditized, the real value in bringing better care at lower costs will belong to the platforms that can leverage the data to manage and interpret it: the “check engine light,” or “OnStar for the body.”

Telemedicine: Beyond Video Chat

Clinical care will increasingly utilize technologies in the home or pocket of the patient or caregiver. The era of the “medical tricorder” (currently being spurred by the $10M Qualcomm Tricorder XPRIZE) will enable far better triage, diagnosis and guiding of therapy than we have available today—often, at best, a digital thermometer. All this will be combined with AI to make sense of the information and trends. Scanadu, with their Scout device, is already in FDA-sanctioned clinical trials with thousands of devices being tested in the field and as part of the XPRIZE competition.

While live chats with a clinician are now common (from MDLive to Doctor On Demand), asynchronous care is coming. New platforms include Curely, which enables you to send text and images and allows the clinician to take their time, do research, and provide guidance. Don’t want to wait for a dermatologist? Try iDoc24, and send an image of your skin to a dermatologist for a consult.

As payors and larger healthcare systems increasingly get on board with value-based payment incentives, it will more often be your own clinician, not a random virtual one, that you connect to. Feedback loops connecting the patient and the clinical care team will also be utilized—as exemplified by HealthLoop—to interact and proactively take action with patients following interventions. This will range from surgery to antibiotic prescriptions to tracking chronic disease patients at home (enhanced with machine learning), as is being pioneered by Sentrian Remote Patient Intelligence.

‘Digiceuticals’ Pill + App

As apps and the internet of things (IoT) blend with the internet of medicine (IoM), we will go “beyond the pill.” Apps will be prescribed alongside many drugs and other interventions as a means to track, tune, and optimize treatment, from diabetes to skin conditions.

Managing anxiety, depression, ADHD, and sleep disorders, and improving mindfulness and cognition, with brain-computer interfaces (like the Interaxon Muse) will be integrated with video gaming (as pioneered by Dr. Adam Gazzaley and his UCSF lab). Sometimes the app alone will be the therapy. Omada Health, with its app, connected wearables, and a social network aimed at turning around pre-diabetic individuals, is an example of effectively prescribing behavior change.

Workflow Is Key for the Clinician

More of healthcare is becoming mediated by digital, connected, and mobile health (all buzzwords…soon it will just be health) and augmented with AI and machine learning—but these capabilities won’t really become useful until they enter the clinical workflow. No clinician wants to log into multiple apps or have more raw data to sift through.

We are still in the early days. Wearable and other health data is just beginning to flow through smartphones and into the EMR through platforms such as HealthKit. As incentives shift increasingly to value- and outcome-based care, the impetus to prescribe and connect the devices, apps, data and analytics into the clinician dashboard and workflow will become commonplace.

Daniel Kraft, MD is a physician-scientist, chair for medicine at Singularity University, and founder and chair of Singularity University’s Exponential Medicine conference.

Image Credit: Shutterstock.com

New Series on Exponential Entrepreneurs Launches Today

“Stories are a communal currency of humanity.” –Tahir Shah


Behind every great leader is a distinct journey—a trail of unique moments, memories, and experiences that, when woven together, make the individual into who they are.

The stories of inspiring leaders, however, are often documented at pinnacle moments and crafted into retrospectively polished plots—stories a few steps removed from the struggle, the grit, and the daily grind leading to that point of celebration.

Ana Cecilia Benatuil. Venezuela. Miami GIC Winner.

But the track to becoming a great leader—especially as an entrepreneur—is neither neat nor clearly defined and rarely is it documented in the heat of the moment.

We’re pulling the curtain back with our new collection, Exponential Entrepreneurs, to showcase human-centered portraits of the work and journeys of eight individuals from our Graduate Studies Program (GSP) 2015 class.

Over the next two months we’ll be featuring a new story each Thursday chronicling a bold and compassionate leader in the midst of their own climb.

Einstein Ntim. Ghana and the U.K.

Each individual brings a unique story and vision: Whether it’s an MD from Copenhagen reimagining how we treat cardiac arrest, a mechanical engineer from British Columbia tackling autism, an entrepreneur from Baghdad helping to build the largest creative community in Iraq, or a sustainable architect from Venezuela envisioning the future of cities in Miami.

Emilia Lahti. Finland.

These are the stories of eight individuals from different countries, continents, industries, and professions—but each is aiming to positively impact the world through their work and by leveraging emerging technologies. These are exponential entrepreneurs.


What inspires your journey? We’d love to know.

Connect with me on Twitter @DigitAlison or @SingularityHub to let us know and follow the series.

Photography shot by: Alison Berman

A Genomics Revolution: Evolution by Natural Selection to Evolution by Intelligent Direction

Humanity is moving from evolution by natural selection (Darwinism) to evolution by intelligent direction.

For most of human history, the average human lifespan was only about 26 years.

We would procreate at age 13, live just long enough to help our children raise their children, and then, on average, die at age 26 (so we were no longer taking food from the mouths of our grandchildren).

It was through technological innovation — sanitation and germ theory — that we moved life expectancy from 26 into the mid-50s. Recently, because of modern medicine’s progress in treating heart disease and cancer, we’ve bumped up today’s global average human lifespan to 71 years.

But this is just the beginning.

Advances over the next 10 to 15 years will move life expectancy north of 100.

This post is about advances in reading, writing, and building elements of the human body.

Reading – Sequencing the Human Genome

Your genome is the software that runs your body.

It is composed of 3.2 billion “letters,” or base pairs, that code for everything that makes you “you” — your hair color, your height, your personality, your propensity to disease, your lifespan, and so on.

Until recently, it’s been very difficult to rapidly and cheaply “read” these letters and even more difficult to understand what they do.

In 2001, my friend and Human Longevity Inc. co-founder Dr. J. Craig Venter sequenced the first complete human genome. It took about a year and cost $100 million.

Since then, the cost to sequence a genome has been plummeting exponentially, outpacing Moore’s Law by almost 3x (take a look at the graph below).

Figure: The cost of genome sequencing drops 3x faster than Moore’s Law

Today, the cost to sequence a full human genome is about $1,000.
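
A rough back-of-the-envelope check on that claim, using the two price points in this post ($100 million in 2001, about $1,000 today) and assuming roughly 14 years between them:

import math

cost_2001, cost_today = 100_000_000, 1_000
years = 14  # assumed gap between the two price points

halvings = math.log2(cost_2001 / cost_today)  # ~16.6 halvings of cost
halving_time = years / halvings               # ~0.84 years per halving

# Moore's Law is often stated as costs halving roughly every two years.
print(f"sequencing cost halves every ~{halving_time:.2f} years")
print(f"roughly {2 / halving_time:.1f}x faster than Moore's Law")

The exact multiple depends on the endpoints chosen; the takeaway is that the curve comfortably outpaces Moore’s Law.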

This cost trajectory is unheard of, and it’s allowing us to do some very useful and productive things.

  • Data Mining + Genomics: We can now fully sequence millions of individuals’ full genomes, and then mine all of that data to translate what the genome means. Each person’s genome produces a text file that is about 300 gigabytes. When we compare your sequenced genome with millions of other people’s genomes and other health data sets (like your microbiome, metabolome, and MRI data), we can use machine learning techniques to correlate certain traits (eye color, what your face looks like) or diseases (Alzheimer’s, Huntington’s) to factors in the data and begin to develop diagnostics and therapies around them. (A toy sketch of this idea appears after this list.)
  • N-of-1 Care: This is one of the most powerful and important changes coming in healthcare. When we understand your genome, we’ll be able to understand how to optimize “you.” We’ll know the perfect foods, the perfect drugs, the perfect exercise regimen, and the perfect supplements, just for you. We’ll understand what microbiome types (gut flora) are ideal for you. We’ll understand which diseases and illnesses you are most likely to develop, and we’ll be able to prevent them from developing (rather than trying to cure them after the fact). Right now “healthcare” is actually “sick care” — your doctor tries to find quick fixes to make you feel better. With genomics, we’ll tackle the root of the problem and eventually eliminate disease altogether.
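
Here is that toy version of the data-mining idea: genotypes at a single variant (0, 1, or 2 copies of an allele) paired with a measured trait, tested for association. The data is fabricated, and real studies scan millions of variants across huge cohorts with far more careful statistics:

import statistics  # statistics.correlation requires Python 3.10+

genotypes = [0, 0, 1, 1, 2, 2, 2, 0, 1, 2]  # allele copies per person
trait = [160, 162, 168, 170, 178, 181, 179, 161, 171, 180]  # e.g., height in cm

# Pearson correlation between allele count and trait value.
corr = statistics.correlation(genotypes, trait)
print(f"genotype-trait correlation: {corr:.2f}")  # near 1.0 -> candidate variant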

Now that we can read the genome, let’s talk about changing it.

Writing – What Is CRISPR/Cas9?

This past week, scientists from London’s Francis Crick Institute applied for approval to edit genes in human embryos. If approved, it will be the world’s first approval of such research by a national regulatory body.

Last April, a team out of Guangzhou, China reported that they’d been able to edit the genomes of human embryos.

What’s powering these advances?

It’s a new gene splicing technique called CRISPR/Cas9, and it’s changing the game.

CRISPR stands for “Clustered Regularly Interspaced Short Palindromic Repeats.” It is a strand of DNA that was found in 1987 to be part of a bacterial defense system.

The CRISPR/Cas system (Cas stands for “CRISPR associated” genes) was found in bacteria, where it identifies and splices specific, targeted foreign genetic material that may be harmful to the bacterium.

It turns out that we can use this same mechanism to target and splice specific strands of our own DNA — in other words, the CRISPR/Cas system is a way to edit our genome.

We can remove specific sequences, and we can insert specific nucleotide modifications at specific target locations.
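
The targeting logic itself is simple enough to sketch. In the common Cas9 system, a guide RNA matches a roughly 20-letter “protospacer” that sits immediately upstream of an “NGG” PAM motif in the DNA. A minimal, forward-strand-only illustration with a made-up sequence (real guide design also weighs off-target matches, GC content, and more):

def find_cas9_sites(dna, guide_len=20):
    """Return (start, protospacer, pam) for every NGG PAM with room for a guide."""
    dna = dna.upper()
    sites = []
    for i in range(guide_len, len(dna) - 2):
        if dna[i + 1:i + 3] == "GG":  # any base followed by GG = the NGG PAM
            sites.append((i - guide_len, dna[i - guide_len:i], dna[i:i + 3]))
    return sites

example = "ATGCTGACCTGAAGGTCTTGACGATCGATTGGCCATAGCTAGGCTAGCTAAGGTT"
for start, protospacer, pam in find_cas9_sites(example):
    print(f"candidate site at {start}: {protospacer} | PAM {pam}")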

Most importantly, it is cheap, quick, easy to use, and more accurate than previous gene-editing methods. As a result, CRISPR/Cas has swept through labs around the world as the way to edit a genome.

With CRISPR, we will soon have the tools to eliminate diseases, create hardier plants, wipe out pathogens and much, much more.

Take a look at the funding that has poured into CRISPR over the last few years.

Figure: Funding for CRISPR has grown exponentially

Hundreds of labs around the world are exploring new applications for the CRISPR/Cas system, and we expect to see more and more of these applications hitting the market in the next decade.

Building — Stem Cells Will Save Your Life…

You are a collection of over 10 trillion human cells.

Every one of these cells — those in your brain, lungs, liver, skin, and everywhere else — derives from a single pluripotent type of cell called a stem cell.

Stem cells have the remarkable ability to “differentiate” into any other type of cell in the body. After our body has developed, among our ten trillion fully differentiated human cells (skin, heart, muscle, kidney) remains a population of quiescent stem cells waiting to be called into action to help repair damaged tissue. These stem cells reside everywhere: in our bone marrow, in our fat, and in every single tissue compartment.

Today, in various locations around the world, researchers and physicians inject stem cells into areas of damage, and explore stem cell therapeutics around heart disease, brain disease, diabetes, cancer treatment, arthritis, spinal cord injuries, burns, macular degeneration, and much more.

Stem cells are being used in cancer research: “By studying adult stem cells to learn more about the genes involved in self-renewal, it may be possible to identify new molecular targets for drug and immune therapies that destroy the self-renewing cancer stem cells.” (Stanford Research)

Stem cells are also being used to regenerate tissue and literally rebuild organs. Companies like United Therapeutics and Synthetic Genomics are doing incredible work toward regrowing you a set of lungs this decade, and soon thereafter, any organ you may need.

There is a severe shortage of acceptable donor organs. With your stem cells, we can take a liver from a pig, for example, melt away the living cellular tissue, leaving only the collagen scaffolding, and then introduce your stem cells, which will then repopulate the organ, growing on the scaffolding into a perfect liver replacement, tailor-made for you.

We are living in the era of stem cell therapeutics… and the implications are staggering.

In Conclusion

Healthcare today is like a repairman who is trying to constantly fix a leaky roof by putting a bucket under the leak. Healthcare tomorrow is like using a scanning device to find the weakest part of the roof and reinforce it before the leak begins.

In the next decade, advances in genome sequencing, data analytics, synthetic biology and stem cell therapeutics will allow us to tackle the roots of the problems.

We are headed to a world without chronic diseases, with longer, healthier lives, and with personalized care for everyone on the planet.

Image Credit: Shutterstock.com

Learning to Speak Robot: The Mainstreaming of Robotics


Five years ago, industrial robotics was an elitist field. The hardware was expensive and often dangerous for humans to work around. Worse, the only folks who could really play with that hardware were the few computer programmers who could actually speak robot. These barriers—cost, danger and expertise—kept the field about as far away from the mainstream as technology can get.

Rethink Robotics’ Baxter robot.

But a lot has changed since then. If you want to put a date on the start of this shift, most people pick 2012, when the Boston-based Rethink Robotics introduced Baxter to the world. With a flat screen for a face, a pair of exceptionally dexterous arms, and a squat, stout frame, Baxter looks more like a cartoon character than an actual revolution.

But make no mistake, Baxter is that revolution. Specifically designed to solve all three of the field’s largest barriers, Baxter was the moment that robotics went mainstream.

First—and this is arguably the biggest deal—Baxter was the first programmable robot with a user-friendly interface. This means, for the very first time, no expertise required.

Think of the Internet. Invented in the 1970s, the Internet was also expert-only for the first twenty years of its existence. The only people who got to play with it were other computer scientists, for the simple reason that you had to be a coder to play.

But in 1993, Marc Andreessen invented Mosaic, the web browser that became Netscape, and toppled that barrier. Mosaic was the first user-friendly interface for the Internet. Anybody who could point and click a mouse could log on. And the result? Before Andreessen came along, there were 26 websites online. A few years later, there were tens of millions. This is the kind of exponential growth unlocked by a user-friendly interface.

This same elitist issue plagued robotics. Five years ago, if you wanted to program a robot, you had to be a computer scientist to do so. But to program Baxter all you need to do is move his arms through the motion you want him to produce and BLAMMO—he’s programmed. It’s a user-friendly interface so simple that a child can program Baxter. It means entrepreneurs can build a business atop his abilities without having to hire a team of experts to help.
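
In software terms, “move his arms through the motion” is programming by demonstration: record joint-angle waypoints while a person guides the arm, then replay them. A hedged sketch with a simulated arm; the class below is a hypothetical stand-in, not Rethink Robotics’ actual SDK:

import time

class SimulatedArm:
    """Hypothetical stand-in for a real robot arm API."""
    def __init__(self):
        self.joint_angles = [0.0] * 7  # a Baxter-style 7-degree-of-freedom arm

    def read_joint_angles(self):  # a real robot would query its encoders
        return list(self.joint_angles)

    def move_to(self, angles):    # a real robot would command its motors
        self.joint_angles = list(angles)

def record_demonstration(arm, n_samples=20, hz=10.0):
    """Sample joint angles while a person physically guides the arm."""
    waypoints = []
    for _ in range(n_samples):
        waypoints.append(arm.read_joint_angles())
        time.sleep(1.0 / hz)
    return waypoints

def replay(arm, waypoints, hz=10.0):
    """Play the recorded motion back, waypoint by waypoint."""
    for angles in waypoints:
        arm.move_to(angles)
        time.sleep(1.0 / hz)

arm = SimulatedArm()
replay(arm, record_demonstration(arm))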

Rethink Robotics’ Baxter and Sawyer robots.

It means expertise is no longer a barrier to entry.

Second, Baxter is cheap. His initial asking price is $22,000, making him hundreds of thousands of dollars less expensive than the majority of industrial robots and actually affordable to most startups. Even better, as the pace of robotics is accelerating exponentially, that price is going to continue to drop. In fact, because AI, computer hardware, sensors and actuators—all the major “components” that go into robotics—are all accelerating exponentially, expect the cost of these toys to plummet over the next few years.

Really, though, there’s no need to wait. A few years after Baxter hit the market, researchers at Indiana University created a 3D printable robotic controller that can be incorporated into the arms of an older model industrial robot or attached to any of the newer model 3D printable robot designs currently available online. In other words, anyone with access to a 3D printer now has access to cutting-edge robotics.

Rodney Brooks, cofounder of iRobot and founder of Rethink Robotics, plays around with Baxter during a TED talk.

The third problem Baxter was meant to confront was mortal danger. Go back a decade, and standing in the same room with an industrial robot was a good way to get dead. One errant arm could decapitate a person—which explains why most industrial robots were walled-off from the rest of the work force behind bullet-proof glass.

But taking advantage of the exponential growth in sensor technology, Baxter comes with a 360-degree sonar sensor and a force-sensing system. The combination freezes his motion the moment he contacts flesh. This makes Baxter the first user-safe robot with a user-friendly interface. It means he can work side-by-side with humans and opens up a whole new frontier of cooperative and collaborative possibilities.
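
Conceptually, that safety layer is a simple interlock checked every control cycle: if force or proximity readings suggest contact with a person, freeze the arm. The thresholds below are illustrative assumptions, not Baxter’s actual control code:

from dataclasses import dataclass

@dataclass
class SensorReading:
    force_n: float  # wrist force sensor, newtons
    range_m: float  # 360-degree sonar, distance to nearest obstacle, meters

FORCE_LIMIT_N = 15.0      # assumed contact threshold
PROXIMITY_LIMIT_M = 0.05  # assumed minimum standoff distance

def safe_to_move(r: SensorReading) -> bool:
    """Freeze motion the moment force or proximity suggests human contact."""
    return r.force_n < FORCE_LIMIT_N and r.range_m > PROXIMITY_LIMIT_M

print(safe_to_move(SensorReading(force_n=2.0, range_m=0.40)))   # True: keep moving
print(safe_to_move(SensorReading(force_n=30.0, range_m=0.40)))  # False: halt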

Or, at least, that was the initial promise. While human-safe, Baxter was also slow as hell. The problem is that machine vision has been lagging behind other areas of robotic development. Sure, in a well-organized environment with little movement—like an assembly line—Baxter can flourish. But in more dynamic environments, like working alongside humans in a corn field or a busy office, Baxter is still too clumsy to be useful.

But here too we’re seeing staggering progress. Last week, for example, Japanese researchers introduced a robot that can play rock-paper-scissors against humans and win 100 percent of the time. And this robot wins by watching. Its perfect track record is the result of a built-in high-speed camera and extremely fast reflexes that allow it to read microscopic physical cues—wrist angle, finger movement, head tilt—and (essentially) cheat its way to victory.

Now, an important caveat: for this robot to win, perfect lighting conditions and specific background materials were required. It’s not machine vision at real-world speeds quite yet. But it is getting very, very close.

Which means one of the final roadblocks to robotic democratization—the mainstreaming of industrial robotics—is actually starting to fall.

Image Credit: Rethink Robotics, Steve Jurvetson/Flickr

We Can Rebuild Him: Patient Receives 3D Printed Titanium Ribs and Sternum


It’s a bit like a Marvel superhero comic or a 70s sci-fi TV show—only it actually just happened. After having his sternum and several ribs surgically removed, a Spanish cancer patient took delivery of one titanium 3D printed rib cage—strong, light, and custom fit to his body.

It’s just the latest example of how 3D printing and medicine are a perfect fit.

The list of 3D printed body parts now includes dental, ankle, spinal, trachea, and even skull implants (among others). Because each body is unique, customization is critical. Medical imaging, digital modeling, and 3D printers allow doctors to fit prosthetics and implants to each person’s anatomy as snugly and comfortably as a well-tailored suit.

This image shows how the 3D printed titanium implant attaches firmly to the patient’s rib cage.

In this case, the 54-year-old patient suffered from chest wall sarcoma, a cancer of the rib cage. His doctors determined they would need to remove his sternum and part of several ribs and replace them with a prosthetic sternum and rib cage.

Titanium chest implants aren’t new, but the complicated geometry of the bone structure makes them difficult to build. To date, the flat plate implants typically used have tended to come loose, raising the risk of complications down the road.

Now, we can do better. We have the technology.

Complexity is free with 3D printing. It’s as easy to print a simple shape as it is to print one with intricate geometry. And with a 3D model based on medical scans, it’s possible to make prosthetics and implants that closely fit a patient’s body.

But it takes more than your average desktop MakerBot to print with titanium.

The finished implant. Image credit: Anatomics.

The surgeons enlisted Australian firm Anatomics—the company that designed a 3D printed skull implant to replace nearly all of a patient’s cranium last year—and CSIRO’s cutting-edge 3D printing workshop, Lab 22, to design and manufacture the implant.

Lab 22 owns and operates a million-dollar Arcam printer. Most 3D printed metal parts use a technology called selective laser sintering, in which layers of powdered metal are fused with a laser beam. Instead of a laser, however, the Arcam printer uses a significantly more powerful electron beam technology developed for aerospace applications. (GE, for example, is printing titanium aluminide turbine blades with the tech.)

The surgeons worked closely with Anatomics to design the implant based on CT scans of the patient’s chest. Using a precise 3D model, the printer built the titanium implant—a sternum and eight rib segments—layer by layer. The final product is firmly attached to the patient’s remaining rib cage with screws.
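
The “layer by layer” idea is easy to picture in code: slice a 3D model along its vertical axis and fuse one cross-section of powder at a time. A toy voxel version follows (real machines work from CAD geometry and a far more sophisticated process plan):

import numpy as np

# A crude stand-in for a CT-derived 3D model: an 8x8x8 voxel grid.
model = np.zeros((8, 8, 8), dtype=bool)
model[2:6, 2:6, 1:7] = True  # the "part" occupies a block of voxels

# Build bottom-up: for each horizontal slice, fuse only the filled voxels.
for z in range(model.shape[2]):
    layer = model[:, :, z]
    if layer.any():
        print(f"layer {z}: fuse {int(layer.sum())} voxels of titanium powder")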

According to CSIRO’s Alex Kingsbury, “It would be an incredibly complex piece to manufacture traditionally, and in fact, almost impossible.”

Once complete, the team flew the implant to Spain for the procedure. All went to plan. The patient left the hospital 12 days after the surgery and is recovering well.

While customization is widely used to illustrate 3D printing’s power, it can often be more of a perk than a necessity. In many cases, traditional mass manufacturing methods still make more sense because they’re cheaper and faster.

In some industries, however, customization is critical.

Aerospace firms, for example, are making 3D printed parts for jet and rocket engines—where rapid prototyping speeds up the design process, and cheap complexity and customization yields parts that can’t be made any other way.

And nowhere is customization more useful than in medicine. From affordable custom prosthetics to tailor-made medical implants to bioprinted organs—the potential, in terms of improving and even saving lives, is huge.

We can’t rebuild and replace every body part yet, but that’s where we’re headed.

Image Credit: CSIRO, Anatomics

Is Technology Unnatural—Or Is It ‘What Makes Us Human’?


Beavers dam rivers; birds build nests; chimpanzees use sticks to fish for ants or termites. Nature at its best. But when humans build dams or use tools to feed ourselves, our creations, though admittedly more complex, are labeled unnatural.

The delineation is deeply engrained. Whole fields of thought, research, and engineering bear this out in their names: synthetic biology, for example, or artificial intelligence. There’s a sense that human inventions are separate from nature. But what is natural, and what is unnatural—is this even a useful distinction?

It seems a simple question at first, one whose answer is just as simple. But it isn’t simple at all. There’s a great Bertrand Russell line that goes: “Everything is vague to a degree you do not realize till you have tried to make it precise.”

A dictionary definition of unnatural describes it as “different from how things usually are in the physical world or in nature.” This requires we define “usually,” and nothing could be more vague. Every human has a different notion of “usual” depending on their local circumstances and life experience. We can substitute “usually” with the word “average,” but we’re then stuck with a figment of statistics. (Mash the world’s diversity into “average” and you’re left without a single individual case.)

And even if we take the word “usual” seriously: Stars, planets, life—things made of matter—these are far from usual. Almost the entire natural universe is empty space. But who would characterize Earth or the Sun or a tree as unnatural?

If we take a broader view, then, and say anything within our universe is natural—then anything unnatural is by definition an impossibility. It might exist, but we’ll never encounter it because it lies firmly outside our realm of experience.

Perhaps human technology is as natural as tools used throughout the rest of the animal kingdom, which are in turn as natural as planets, stars, and galaxies.

Technology viewed from this perspective is a natural consequence of physical laws. And the sense that something is unnatural is really more of a moral matter. It is an invention or technology that offends the sensibilities of some or most or all.

Genetic engineering is a good contemporary example.

Currently, we’re mostly engineering plants—think genetically modified foods—but powerful new genetic engineering technologies are rapidly simplifying the process of snipping out certain genes and adding in others. By planting a jellyfish gene into the genome, we can make a plant, rabbit, or kitten glow green. (Weird, right?)

In the not too distant future, we may be regularly engineering everything from bacterial to human genomes—even creating entirely new forms of life. We have a strong, impulsive distaste for the idea of genetic engineering. And calling it unnatural is a common response to the fact of GMOs and other genetic fiddling.

But genetic experimentation is as old as life. It’s the very engine of evolution.

From the primordial slime to the teeming oceans of the Cambrian to the living world as we know it today, genetic mutations and sexual recombination brought a mind-numbing variety of creatures—monsters of the deep, fragile flowering plants, extremophiles, great apes. And humans have consciously engineered genetics for a long time, guiding living populations by observation and selective breeding.

Admittedly, it’s a spectrum. But not from natural to unnatural. On the one end you have chance evolution and on the other end you have directed evolution. Sexual selection is a kind of directed evolution, in that individuals instinctively choose partners for their genes as expressed in physical traits. But more fully directed evolution has only been made possible by humans. In geologic time, it is very new.

Being new, we are fearful of the power in our hands, and a backlash against technology makes sense, particularly as we see Earth noticeably changing due to our presence. Viewed from space, our planet literally glows at night.

But the world beyond humans makes no such moral judgments. Ancient volcanism dramatically reworked Earth’s atmosphere, an asteroid wiped out the dinosaurs, and given the chance, animals will overrun their environment and its resources.

Even “natural” genetic selection isn’t an ethical, speedy, or even all that practical of an experimentalist. Changes take thousands and millions of years. Animals are left with useless, vestigial relics from prior generations. Genetic diseases and conditions cause great suffering and untimely death.

Human-directed genetic engineering, on the other hand, isn’t random at all. And that is a thought that’s at once incredibly frightening and hopeful. There will be mistakes along the way, even malevolent creations—no question—but mostly genetic research shares a common goal: to improve humanity’s lot in life.

This might mean curing a genetic disease or reducing failed crops. It might also bring seemingly frivolous uses, like those glowing bunnies, or dystopian dreams, like designer babies and future generations descending into nightmare uniformity.

Will the net result of our experimentation with genetic engineering and other advanced technologies be good or bad? We don’t know. And the diversity of opinion is and will continue to be as dizzying as the Cambrian explosion.

But as we debate the future, more clearly defining our terms is a battle worth waging because how we argue positions and question assumptions determines, for better or worse, which boundaries we decide to push beyond, and which we delay or refuse to cross.

Image Credit: Shutterstock.com

Should You Buy the Hype? An Inside Look at the Virtual Reality Landscape

When 19-year-old Palmer Luckey launched an ambitious virtual reality Kickstarter campaign back in 2012, in no way could he have dreamt up a scenario that involved reawakening a billion-dollar industry, grabbing Mark Zuckerberg as a boss, or fueling a conglomerate arms race between Facebook, Google, Samsung, Sony, HTC, Intel and AMD.

Source: Kickstarter

But that’s exactly what happened.

His redesign of virtual reality for the 21st century defied all expectations, raising ten times the initial $250,000 goal. Not only did Luckey’s campaign attract $2.5 million, he also captured a growing market segment yearning for VR. To put it simply—gamers were stoked.

Surprisingly, so too was Mark Zuckerberg.

Zuckerberg became enthralled by VR’s raw potential to disrupt gaming, entertainment and of course, social (imagine Skyping with your friend in China, but instead of chatting with a 2D image, you are both virtually together on the same couch in 3D, immersed in conversation).

Consequently, in 2014, less than two years after the Kickstarter, Facebook acquired Oculus for $2.2 billion, and in doing so legitimized the VR industry overnight.

“We’re making a long-term bet that immersive, virtual and augmented reality will become part of people’s daily life,” said Zuckerberg.

With his track record for sniffing out “what’s next” and a distribution network of 1+ billion people, many have been quick to infer VR has a very bright future. Some analysts are already predicting VR will generate $30 billion in revenue by 2020, and many of Zuckerberg’s Silicon Valley counterparts haven’t hesitated to make similar predictions.

VR is still perhaps one or two years away from going mainstream, but more consumers are being exposed to it than ever before. Advancements in headset technology regularly make front page news on CNN, the Wall Street Journal, Business Insider and Wired. VR has even infiltrated the holy grail of pop-culture, prominently featured (and lampooned) on a recent episode of South Park.

But none of this VIP treatment matters if the product can’t sell. With the consumer Oculus Rift six to twelve months away, some are still on the fence, debating if they should buy the hype.

Realistically, many businesses don’t have time to wait and are already bracing for a future with VR in it. As executives scramble to invest in their own VR initiatives, many want to know what’s actually happening in the space.

Below are excerpts from Greenlight VR’s July research report, which investigates trends in the state of the industry, including VR growth, investments, and opportunities.

The Industry Is Expanding Rapidly

The number of companies we consider “pure-play” VR—those with a majority of their revenues derived from selling virtual reality related products and services—has spiked 250% since 2012 (which, unsurprisingly, coincides with Oculus launching its Kickstarter in ’12).


Over a Quarter of All VR Companies Are Less Than a Year Old

This means most VR companies are early-stage startups. Examining the landscape, the largest cohort of companies (27.7%) is less than a year old, and almost half (47.9%) of VR companies have between zero and one year of experience. The bottom line: don’t be intimidated to jump in. Most are just getting their feet wet and experimenting in the space for the very first time.


Virtual Reality Is Going Global

While the United States accounts for nearly 51% of the entire VR landscape, the other 49% of companies we track are spread across 45 different countries and 6 continents.


Much of the undiscovered talent (e.g., developers and designers with VR project experience) exists overseas. To that point, the countries with large-scale VR operations beyond the US are the UK and Canada, with emerging communities taking shape in France, Germany, Australia, Spain, Japan, Netherlands, Switzerland, China, Italy and Portugal.



Investment Is Flowing Into VR

In the last five years, virtual reality companies have raised $746 million in venture capital investments. Investors pumped 50% more money into VR startups during the first six months of 2015 than in all of 2014—a good sign for the industry.

Investors Are Bullish—But Still Cautious

Of the 163 VCs in the space with at least one investment in a virtual reality company, almost 90% have invested in only one company. While some investors are making multiple investments and/or creating VR-specific funds, the majority are taking a “wait-and-see” approach.


Opportunities Exist in Education, Healthcare and Social

When examining the fundraising landscape in VR, notice that a majority of the investments are heavily linked to the gaming and entertainment sectors—which makes sense, as gaming is likely to dominate the industry early. But this space (along with architecture/3D visualization) is highly competitive. For those daring enough to dream big in education, social experiences, and/or healthcare, these are highly underfunded areas ripe for disruption.


With many curious to explore VR, Greenlight VR created a comprehensive ecosystem map that outlines premier companies and sectors in the space.


Distribution Platform: Distribution platforms contain VR applications and content available for viewing, download, and/or purchase by consumers.

Examples: Oculus Share, YouTube

Peripherals: Companies that produce supplemental virtual reality accessories (cave, treadmill, wearable) for the headset.

Examples: Virtuix Omni, Sixense, The Void

Camera Capture Hardware: Companies that make specialty cameras and related equipment used in capturing virtual reality content.

Example: GoPro

Stitching Software: Companies that make audio and video editing tools for virtual reality productions.

Example: Kolor

Head Mounted Display: Companies that manufacture virtual reality headsets and related equipment.

Examples: Oculus Rift, Gear VR, Cardboard

Engines: Companies that make 3D rendering and processing engines as an enabling input for virtual reality content.

Examples: Intel, Unity

Research Institutions: Organizations performing groundbreaking research related to virtual reality.

Example: Stanford University

Media: Virtual reality specific media outlets covering news, interviews, projects, podcasts, etc.

Example: Upload VR

Content (Cinematic): Companies that produce cinematic or experiential (non-game) virtual reality content.

Examples: Blue 44 Productions, Innerspace

Content (Gaming): Companies that produce virtual reality games.

Examples: Epic Games, Harmonix

Content (Healthcare): Companies that create games, videos, and applications with the explicit purpose of improving consumers’ health and wellness.

Example: DeepstreamVR

Content (Social): Companies that primarily produce content that is enjoyed in a peer-to-peer social network in VR.

Example: Altspace VR

Content (Live Action, Sports & Music Entertainment): Companies that create sports and musical content, either pre-recorded or broadcast live.

Examples: Jaunt, Fox Sports

Content (Enterprise): VR services or agencies creating content for hire (architecture, real estate, financial modeling, marketing campaigns, etc.).

Examples: Digitas, Patron, Mountain Dew, Arch Virtual

Content (Education): Companies producing educational virtual reality content.

Example: Expeditions

Content (Journalism): Companies that produce news, documentaries, and other journalistic virtual reality experiences.

Example: VRSE

[banner image courtesy of Shutterstock.com]


Howie is a senior research analyst for Greenlight VR. Greenlight VR is the industry leader in market intelligence for the global virtual reality economy. The company tracks more virtual and augmented reality companies than any other market data company — to date, over 1 million data points on thousands of companies. To learn more, visit www.greenlightvr.com

To get updates on Future of Virtual Reality posts, sign up here.

GSP 2015 Closing Ceremony: Meet 20+ Startups With Revolution on the Brain

Join Singularity University August 20 at San Jose’s California Theatre for the 2015 Graduate Studies Program Closing Ceremony—a night of inspiration, impact, and exciting pitches for 20+ new startups aiming to tackle the world’s biggest challenges. This event will sell out: Order your ticket now. Tickets purchased before August 10 (early bird) are $50; those purchased thereafter are $60.

Singularity University’s Graduate Studies Program (GSP) is what happens when 80 entrepreneurs from 40 countries dedicate 10 weeks to big ideas. But GSP isn’t just about ideas; it’s about innovation. And ideas only become innovation with action.

Muhammad Yunus at GSP 2015 Opening Ceremony.

At this year’s opening ceremony, Nobel laureate and social entrepreneur Muhammad Yunus told participants, “Every time I see a problem, I start a business.”

So, it’s only fitting: The closing ceremony is where the rubber meets the road.

GSP is organized around two guiding principles. The first is that the world’s grand challenges—energy, environment, water, food, disasters, space, security, education, global health, governance, and prosperity—are solvable using the latest technologies. The second is that innovation (idea + business) can do the heavy lifting.

GSP participants learn about the grand challenges and exponential technologies, and then they form teams. Each team chooses a grand challenge and an exponential technology, then develops a startup idea to tackle it. It’s an intense and rewarding process. The final exam? Pitching their startup to the public at the closing ceremony.

The closing ceremony is your chance to meet this year’s participants—they’ll man booths before the ceremony and mingle with guests after—and be the first to see and hear what remarkable new ideas and businesses were born at GSP 2015.

Exciting companies that got their start at GSP include Made In Space (which recently sent the first 3D printer to the space station), Miroculus (micro-RNA analysis to more easily diagnose disease), and Matternet (building and testing autonomous delivery drones).

By all accounts, this year’s group is exceptional.

Thanks to generous funding from Google and supporters of SU’s Global Impact Competitions, 2015 is the first year the cost of the program was sponsored for all participants. It’s also the first year women outnumber men. And according to those working closest with them, GSP 2015 could not be more motivated, energetic, and ready to make a big impact.

What world-changing innovations have they dreamed up?

Join us at the GSP 2015 Closing Ceremony on August 20 when participants publicly pitch their ideas and kick off their quest to make the world a better place.

Image Credit: Shutterstock.com

Exponential Finance: Who Will Be the Instagram or Uber of Finance?

Come to Singularity Hub for the latest from the frontiers of finance and technology as we bring you coverage of Singularity University and CNBC’s Exponential Finance Summit.

I spent the week in New York City, attending Exponential Finance and thinking about the future of money. Many industries, including finance, will undergo big change in the coming years. We may see some fraction of the old guard go by the wayside, displaced by energetic, new upstarts. But which ones?

Instagram was acquired for a billion dollars the same year Kodak went bankrupt. Kodak invented the digital camera behind Instagram’s business model, but failed to fully embrace it and paid the price. Uber is a five-year-old transportation company worth $40 billion, and it doesn’t own a single car or bus.

These companies, and others, demonstrate what Peter Diamandis, cofounder and executive chairman of Singularity University, calls the disruptive potential of digital technology: “Digitization means anything that becomes ones and zeros can be easily replicated and distributed around the world for free.”

This means relatively small organizations are rapidly rising up to take on big traditional players with little more than an app on a smartphone. So, what models are leading contenders to become the Instagram or Uber of finance?

The conference featured founders of mobile payments and mobile banking platforms, potentially revolutionary blockchain startups, and financial advisors embracing smart software to better serve their clients.

Take the fintech startup Abra, for example.

Abra is exemplary of what happens when several digital technologies converge in one product. Combining an Uber-like peer-to-peer network with smartphone technology and the blockchain, Abra stashes your cash on your smartphone. From there, users can send cash as easily as they send a text.

All this happens without a bank.

Abra’s founder, Bill Barhydt, estimates we’re three years away from wireless carriers cycling off the last of the feature phones—the simple cell phones sold when the iPhone and Android first came out. As smartphones become ubiquitous in the developing world, it’s possible many of the world’s unbanked billions will skip traditional finance entirely, much as they leapfrogged landlines for cell phones.

It’s a radical thought. But with Abra, it’s plausible that bank-free, digital cash will be a force to be reckoned with.

The blockchain technology powering Bitcoin has disruptive potential beyond cryptocurrencies.

Blythe Masters, former senior JP Morgan executive and CEO of Digital Asset Holdings, gave my favorite talk of the conference. It was, in no small sense, eye-opening. The topic? Blockchain, the distributed digital ledger underpinning cryptocurrencies like Bitcoin.

Masters suggests blockchain’s potential is massive—not just for cryptocurrencies, but for anything of value. The same technology that records and confirms Bitcoin transactions can, in theory, do the same thing for “a will, a deed, a title, a license, intellectual property, an invention, or any type of financial instrument.”

It can automatically do this in real time across a vast network of distributed computers. Will we one day replace costly, centralized, and messy settlement processes with blockchain-enabled economic networks? Maybe so. Masters cautions much work remains, but that the technology is a very big deal.
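For readers who want to see the mechanics, here is a minimal sketch in Python of the core idea: a ledger in which every record is chained to its predecessor by a cryptographic hash, so no one can quietly rewrite history. It is a toy under stated assumptions: the records are invented, and real blockchains add distributed consensus, digital signatures, and peer-to-peer replication on top of this structure.

```python
import hashlib
import json
import time

def block_hash(block):
    # A block's identity is the SHA-256 hash of its contents.
    payload = {k: block[k] for k in ("record", "timestamp", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(record, prev_hash):
    # Bundle any record of value (a deed, a title, a payment)
    # with a timestamp and the hash of the previous block.
    block = {"record": record, "timestamp": time.time(), "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    # Tampering with any earlier record breaks every hash after it.
    return all(
        cur["prev_hash"] == prev["hash"] and cur["hash"] == block_hash(cur)
        for prev, cur in zip(chain, chain[1:])
    )

# Record a deed and then a title on the same ledger (sample data only).
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block({"deed": "123 Main St", "owner": "Alice"}, chain[-1]["hash"]))
chain.append(make_block({"title": "Patent 12345", "owner": "Bob"}, chain[-1]["hash"]))
print(verify(chain))  # True

chain[1]["record"]["owner"] = "Mallory"  # try to rewrite history...
print(verify(chain))                     # False: the chain exposes the tampering
```

Because each block commits to the entire history before it, a network of computers can agree on what happened without trusting a central bookkeeper, which is precisely why settlement is such a natural target.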

Even as certain new technologies replace the old—others will prove powerful collaborative tools.

Ric Edelman says human financial advisors, for example, will reinvent themselves—adding services only humans can handle while making the most of artificial intelligence to better serve clients. Machines doing what machines do best, and humans doing what humans do best. Better together than either one alone.

These are just a few of this year’s talks.

But I think they hint at what’s to come: By more fully digitizing finance (parts of it are, of course, already digitized), we can supercharge commerce and reduce the cost of doing business.

So, who will be the Instagram or Uber of finance?

Hard to say. There are thousands upon thousands of contenders and a number of promising approaches and technologies. And we shouldn’t count out existing players either. Canon originally made film cameras, for example, but it transitioned to own a big chunk of the digital camera market too.

Some firms today may overcome institutional inertia and switch course when it makes sense to do so—enhancing their core business by embracing powerful emerging technologies as they come online.

But Canon also shows how crucial it is to stay a few steps ahead, and how difficult that can be. Even now, the digital camera market is shrinking as point-and-shoot cameras are replaced by smartphones.

Indeed, the most ground-shaking technologies may be the ones few anticipate today. CNBC’s Bob Pisani told the audience he fell in love with science fiction reading about robots as a kid. Then he held up his iPhone. Who would have believed 20 years ago we would soon fit the world’s knowledge in our pocket?

“I’m still 11 years old living in 1967—amazed at falling in love with robots and amazed at this [iPhone], because [it] wasn’t in those stories at that time,” Pisani said. “This is more fantastic even than stories. Reality has surpassed science fiction.” And he’s right. I love it when that happens. And it will again.

Image Credit: Shutterstock.com

Disrupt or Be Disrupted: Exponential Finance Is Coming to Wall Street This June

CNBC and Singularity University are partnering to present Exponential Finance, a two-day conference in New York City exploring the game-changing technologies poised to disrupt business and the financial world.

Technology moves as fast as the market. If you’re a leader or entrepreneur in most industries, you need to keep a close eye on a handful of related technologies to stay relevant. But finance isn’t most industries. Finance is, in a sense, all of them. Financial professionals invest in the broad economy across countries and sectors. You’re always looking ahead, forecasting the future, and deciding what’s next.

How do you keep a competitive edge sharp in today’s fast-paced markets?

Our minds didn’t evolve to grasp information technology’s exponential pace.

Information is critical, to be sure, but the greatest financial minds have always paired information with something else—a unique and insightful view of the world. Our brains didn’t evolve to comprehend the modern pace of progress. We evolved to think linearly, but information technology moves exponentially.

To forecast the future, and to successfully invest in it, it’s critical to appreciate this concept.

Singularity University is on the cutting edge of understanding technological change, and Exponential Finance will showcase the disruptive new technologies poised to impact how you invest, bank, forecast market trends, and run your business. We’ll look at technologies like artificial intelligence, robotics and automation, 3D printing, blockchain/bitcoin, and driverless cars.

Beyond demonstrating how these technologies will upend entire industries—we’ll show you how they’re set to do so much faster than you think. An Oxford study suggests 47% of today’s jobs are likely to be automated in the next decade or two. And finance, an industry dependent on brains over brawn, is not immune. The same study found a 58% chance AI will replace financial advisors over the same period.

In the past, you could see the competition coming and had more time to prepare. Now, blue-chip-slaying ideas and products go from garages to millions of users in a few years or even months. The average lifespan of an S&P 500 firm is down to just 15 years, and some 40% of the index will disappear over the next decade.

3D printers take digital designs and fabricate them layer-by-layer on the spot.

Who’s next? The trillion-dollar manufacturing industry is ripe for disruption by the digitization of products, computer-guided fabrication technologies (like 3D printing), and the further automation of industrial processes by a new generation of intelligent robots.

Driverless cars will make commutes easier, but they’ll also take the wheel from truckers and bus and taxi drivers. Insurers will need to figure out how to cover a growing fleet of everyday autos and 18-wheelers when humans no longer drive them.

Virtual reality may well revolutionize entertainment. But how will immersive virtual offices and meetings impact business travel? Might we be able to offer more effective training or help folks suffering from phobias with VR? What entirely new services will appear? And what traditional practices are doomed?

An array of exponential technologies are aimed at finance too.

You’ve heard of bitcoin. The popular yet volatile virtual currency is receiving plenty of attention. But experts say it isn’t about bitcoin—or any one virtual currency—it’s about the underlying blockchain technology. Blockchain may drive technologies impacting banking, payments, contracts, and more.

Crowdfunding allows entrepreneurs to test and fund early ideas with the crowd before seeking angel or venture capital. And equity-based crowdfunding is now available to non-accredited investors. Peer-to-peer lending performs a similar service for more run-of-the-mill borrowing.

And even greater change is on the horizon. If the last fifty years were about the automation of factory labor, the next fifty will be about the automation of services.

Intelligent programs like IBM’s Watson are not only getting better at searching huge chunks of information and communicating what they find—they’re learning to learn all by themselves. Next generation AI may increasingly automate services we associate with humans, from medical diagnostics to financial advice.

Discover the latest on artificial intelligence, big data, robotics, 3D printing, digital medicine, autonomous vehicles, blockchain, and nanotech (among other topics) from tech-focused industry insiders and Singularity University’s luminary faculty. Envision the future with keynote talks by Peter Diamandis and Ray Kurzweil.

Meanwhile, offstage and in between talks, check out demos from over 40 groundbreaking technology companies and connect with C-level business leaders from leading firms across the industry—including private equity, venture capital, banking, quantitative modeling, insurance, financial advice, and others.

This year’s event takes place at the beautiful Conrad Hotel. Newly opened in 2012, the Conrad’s award-winning, contemporary space is just minutes from Wall Street. CNBC’s Bob Pisani will emcee the event, along with SU’s Peter Diamandis and Salim Ismail, and CNBC will be broadcasting live.

Henrik Thamdrup, IDA Chief Technology Officer, calls Exponential Finance “a must attend conference for finance industry executives who want to see what their business will look like ten years from now.” Join us in New York this June. Together, we can discover and begin to shape the future of finance.

Image Credit: Shutterstock.com

Summit Spain: We’re Going to Rewire the Way Your Brain Views the Future


There’s a story about Napoleon that goes something like this: At a state dinner, he gave his soldiers silver utensils and his court gold. But the guest of honor, the king of Siam, was given utensils of—aluminum.

Was it a not-so-subtle slight to the king? Not at all. Despite the element’s relative abundance, refined aluminum was one of the rarest metals on Earth because it was so hard to extract.

Fast forward a few decades, and a new extraction process using electrolysis had made aluminum abundant and cheap. Today, we use it everywhere. We cover takeout food in foil and toss it away without a thought.

Sinfonietta de San Francisco de Paula performs at Summit Spain. “Exponential Prometheus” plays behind, showing a real-time, Kinect-enabled visualization of the conductor’s motions.

We may believe we have a resources problem—not enough water, not enough energy—but technology is a powerful liberator of resources. And if circumstances could change so quickly and radically in Napoleon’s time, they can change even faster today.

This was the message Rob Nail, CEO of Singularity University (SU), gave participants as he kicked off SU’s Summit Spain. It’s not about seeing things nobody’s seen before; it’s about shifting your perspective.

“In the next three days, one of our key goals is to rewire your brain about the future, the future of technology, and your role in it,” Nail said. “We’re going to freak you out, and then we’re going to excite you.”

It was no mistake Nail’s talk played so heavily on perspective and seeing things through a different lens. Much of it was dedicated to describing a world just next to ours—a world that can be seen only if you tilt your brain at just the right angle. Once you see it, though, there’s no turning back: The power of digitization, information, computing, and an exponential pace of progress yields unexpected outcomes.

Nail went over some classic examples. Kodak invented the digital camera but failed to see why anyone would want one. A few decades later, Kodak was out of business, and Instagram, without producing a physical product, was worth a billion dollars. Airbnb doesn’t own a building. Uber has not a taxi to its name.

Yet both are now worth not a billion—but billions.

Rob Nail at Summit Spain.

We all tend to be a little like Kodak.

We write off technology early in its development and then get blindsided when it reaches maturity. Nail wore Google Glass onstage, though he admits he doesn’t use it much. Why? He thinks it, too, is like Kodak’s early digital camera: easy to dismiss now, disruptive later.

Sure, Glass is clunky, looks geeky, and doesn’t do much. It isn’t worth $1,500. But is it a fad? Not necessarily. This is the difference between an exponential and linear view.

Nail suggested face recognition could be pretty cool for a speaker like him. He might see digital name tags hovering above audience members. He might fire up an app that reads expressions (asleep, engaged, annoyed) to gauge his performance. Another app might read pulse and temperature, correlating them to truthfulness in a negotiation.

“Obviously, never play poker with someone wearing Google Glass,” he said.

The point? Glass is an early prototype.

Prototypes are often deceptive and cause us to get overexcited as we extrapolate to the finished product. But when the prototype doesn’t live up to the hype, we dismiss it—just as things get really interesting.

The next version of Glass will look different. It will integrate into your glasses completely. Eventually, perhaps, into contact lenses. And once it hits its stride, we may wonder how we lived without it—like the first wave of mobile phone adoption or smartphones a little later.

Magic Leap is working on advanced augmented reality.

What else is literally and figuratively opening a portal beyond our day-to-day world? How about virtual and augmented reality devices? Oculus went from garage to $2 billion acquisition in 18 months. Nail thinks they got lucky. Google Cardboard does much the same thing as Oculus Rift, with a smartphone and cardboard headset.

In other words, that’s how fast VR is moving.

The mysterious Magic Leap made headlines last year by closing a $550 million investment round. How’d they do it? A simple demo. The Magic Leap team set up a table and had investors view it through their tech. There were two coffee cups side-by-side—one real and one digital. They asked prospective investors to pick the real cup. Evidently, the task wasn’t too easy.

“And after that they invested, right?” Nail said.

One interesting outcome of all this: Augmented and virtual reality may make us bold, Nail noted. We can overcome fears through immersion. By facing fears of flying, heights, spiders, or public speaking in the virtual realm, we can slowly ease our anxiety, eventually feeling less of it in the real world too.

And that, Nail told participants, is the point of the next few days too. Immerse yourself in a new world, face up to your anxiety, embrace your excitement, come up with new ideas, and then return to your life ready to be bold, to take action. We don’t have all the answers, he said. Good ideas come from anywhere.

Perhaps Summit Spain itself can generate a few. The event has 1,100+ attendees from 30 countries and 250 companies and organizations. Executives, thought leaders, policymakers and entrepreneurs. It’s a powerful group, Nail said, and an exceptional opportunity to conceive and share moonshots.

“We work to transform the individual, to transform the organization, and ultimately transform the world,” Nail told the audience. So, let’s get started.

Image Credit: Shutterstock; Singularity University; Magic Leap

Announcing SU Videos, a New Portal for an Inside Look at Singularity University

How will you positively impact billions of people?

At Singularity University, this question is often posed to program participants packed into the classroom at the NASA Research Park in the heart of Silicon Valley. Since 2009, select groups of entrepreneurs and innovators have had their perspective shifted to exponential thinking through in-depth lectures, deep discussions, and engagement in workshops.

Yet in that time, only a few thousand individuals from around the world have had the opportunity to transform SU’s insights on accelerating technologies into cutting-edge solutions aimed at solving humanity’s greatest problems. But not anymore.

Today, Singularity University launches a new, open portal called SU Videos, which will include a host of free videos spotlighting the various activities occurring on campus and throughout the greater global SU community.

Each week, at least two new videos will be released from on-site events like the Executive Programs and the 10-week Graduate Studies Program as well as worldwide events, such as the Exponential Conference series and SU Summits. Additional offerings will include faculty, speaker, and alumni interviews along with inside looks at SU Labs, including the upcoming accelerator.

Together with Singularity Hub, this new portal further expands Singularity University’s commitment to providing publicly accessible content on the latest technological breakthroughs and growing a thriving online community passionate about leveraging technology to impact the world.

Let’s take a look at the programming available on SU Videos.

Singularity University is known for profiling world-class experts and entrepreneurs from across a broad range of domains, and at the core of these speakers are the academic track chairs and core faculty. At each program, faculty provide up-to-date snapshots of the progress being made in science, technology, and related domains while framing where exponential growth is taking these fields in the near future. Here’s a clip of Peter Diamandis explaining in more detail how an abundant future is possible through technology:

As world-class speakers give talks at SU programs and events, they have the opportunity to engage one-on-one with participants. It’s often in these chats that core insights are conveyed. To capture these perspectives, SU Videos will feature interviews that delve deeper into specific topics.

At last year’s Exponential Finance Summit in New York, Staci Warden of the Milken Institute took the opportunity to talk about the disruptive power of bitcoin:

The current pace of technological change and disruption is mind-boggling, and it only promises to accelerate. Frequently, participants at SU must shift their point of view rapidly in order to fully appreciate the depth and scope of what they’re hearing and seeing. This raises questions, some very simple and others deeply profound.

At the Executive Program in October 2014, one participant wondering when computers will become conscious posed the question to Ray Kurzweil, who has been wrestling with it for decades and wrote The Age of Spiritual Machines in 1999.

Singularity University seeks to educate, inspire, and empower entrepreneurs to impact billions of people, and one of the best ways to take an idea and successfully transform people’s lives is through launching a company. SU Labs was launched to incubate companies leveraging exponential technologies to tackle some of the greatest challenges facing the world today.

As new companies are formed as part of the Graduate Studies Program or by alumni, the new video portal will showcase these startups, giving a behind-the-scenes look at the individuals, the motivation behind their efforts, and the specific problems they are tackling.

At last year’s GSP, a team project turned startup called mPower set out to create a bi-directional electric vehicle charger that allows vehicles to discharge surplus energy back into a sustainable grid.

This is just a taste of the programming that the new video portal will provide.

So if you’re passionate about changing the world through technology, be sure to sign up for the SU Videos newsletter to stay abreast of the latest from Singularity University and its community across the world.

Google Pledges $3 Million to Singularity University to Make Graduate Studies Program Free of Charge

Google, a long-time supporter of Singularity University (SU), has agreed to a two-year, $3 million contribution to SU’s flagship Graduate Studies Program (GSP). Google will become the program’s title sponsor and ensure all successful direct applicants get the chance to attend free of charge.

Held every summer, the GSP’s driving goal is to positively impact the lives of a billion people in the next decade using exponential technologies. Participants spend a fast-paced ten weeks learning all they need to know for the final exam—a chance to develop and then pitch a world-changing business plan to a packed house.

Google is, of course, no stranger to moon shot thinking and the value of world-shaking projects.

Under the new agreement, Google will provide $1.5 million a year over the next two years. The funding covers the cost to attend for half of the 80 annual GSP participants. The other half, winners of SU’s Global Impact Competitions, will continue to be offered a free spot at the program as well.

Rob Nail, CEO and associate founder of Singularity University, was himself a graduate of an SU program. The experience, he says, was life changing. And now, more exceptional individuals will have the same opportunity.

“The new agreement with Google is an incredibly important pillar in our efforts to increase global access and diversity for qualified candidates, regardless of their ability to pay,” said Nail. “Google’s support will further help to break down barriers of access to the Silicon Valley network of technologists, business leaders, and investors.”

We’ve covered GSP since the beginning. It’s an energy-packed program.

Participants must first get a handle on the world’s biggest challenges—food, education, water, security, global health, energy, environment, poverty, and space—and then learn the technological levers to tip the balance. Using exponential technologies, like artificial intelligence, robotics, biotechnology, and 3D printing, they must develop a viable business plan to positively impact a billion people in ten years.

Made In Space’s 3D printer installed on the International Space Station.

Exciting new startups conceived at GSP include Made In Space, whose low-gravity 3D printer made history last year by fabricating the first manufactured products in space, and Miroculus, whose microRNA device enables quick, non-invasive, and affordable detection of diseases like cancer.

Bibak has developed an effective and inexpensive landmine detector. Hivematic created a beehive monitoring system to optimize hive conditions in real time and reduce the risk of colony failure. And Matternet is working on a network of drones to bring medicine and other critical items to remote rural villages in Africa, where it can be a 20-mile walk to the nearest clinic and roads are unreliable.

These are just a few of the startups tracing their roots to GSP. And with Google’s support, SU will again throw its doors open to the best and brightest this summer. It promises to be a wild ride.

The 2015 Graduate Studies Program is scheduled for June 13 to August 23, 2015. Direct applications are due no later than February 28th at 2 PM PST. Go here to learn more about the new funding, and you can apply directly to GSP online at this link.

Image Credit: Singularity University

We Need a Manhattan Project for Cyber Security


Of the 6,494 words President Obama uttered in his January 2015 State of the Union address, only 108 were dedicated to the topic of our growing technological insecurity. Sure, the leader of the free world has a lot on his plate, but the president’s legislative proposals to “enhance information sharing” and “mandate national data breach reporting” are likely to have a minuscule impact on a serious and growing problem.

State of the Union address to Congress.

Indeed, suggesting these measly offerings would make any meaningful difference in our global cyber security is akin to applying sunscreen and claiming it protects us from a nuclear meltdown — wholly inadequate to the scale and severity of the problem. It is time for a stone-cold, somber rethinking of our current state of affairs. It’s time for a Manhattan Project for cyber security.

The major hacking incidents over the past few months, whether the Sony Pictures attack allegedly carried out by North Korea or the hundreds of millions of accounts penetrated at Target, Home Depot, and JP Morgan Chase purportedly by Russian organized crime, make it clear that all our online data — whether financial, personal or intellectual — is at risk.

But we have a bigger problem. Computers run the world. They run our airports, our airplanes, our cars, our hospitals, our stock markets, and our power grids, and these computers too are shockingly vulnerable to attack. Though we’re racing forward at breakneck speed to connect all the objects in our physical world — the tools we need to run our society — to the Internet, we still fundamentally do not have the trustworthy computing required to make it so.

We’ve wired the world, but failed to secure it.

Indeed, it has become abundantly clear that we can no longer neglect the security, public policy, legal, ethical, and social implications of the rapidly emerging technological tools we are developing. We are morally responsible for our inventions, and though our technological advances are proceeding at an exponential pace, our institutions of governance remain decidedly linear. There is a fundamental mismatch between the world we are building and our ability to protect it. Though we have yet to suffer the sort of game-changing, calamitous cyber attack of which many have warned, why wait until then to prepare?

There are good examples in history where we as a society have brought together expertise in anticipation of catastrophic risk before it occurred. When it was discovered in 1939 that German physicists had learned to split the uranium atom, fears quickly spread throughout the American scientific community that the Nazis would soon have the ability to create a bomb capable of unimaginable destruction. Albert Einstein and Enrico Fermi agreed that President Franklin Delano Roosevelt had to be apprised of the situation.

Albert Einstein.

Shortly thereafter, the Manhattan Project was launched, an epic secret effort of the Allies during World War II to build a nuclear weapon. Facilities were set up in Los Alamos, New Mexico, and Robert Oppenheimer was appointed to oversee the project. From 1942 to 1946, the Manhattan Project clandestinely employed over 120,000 Americans toiling around the clock and across the country at a cost of $2 billion. Those working on the Manhattan Project were dead serious about the threat before them. We are not.

While no sane person would equate the risks of catastrophic nuclear war with those of 100 million stolen credit cards, we must surely recognize that the underpinnings of our modern technological society, embodied in our global critical information infrastructures, are weak and liable to come tumbling down, whether through their aging and decaying architectures, overwhelming system complexity, or direct attack by malicious actors. It’s high time for a Manhattan Project for cyber security.

I’m not the first to suggest such an undertaking; many others have done so before, most notably in the wake of the September 11 attacks. At the time, a coalition of preeminent scientists wrote President George W. Bush a letter in which they warned, “The critical infrastructure of the United States, including electrical power, finance, telecommunications, health care, transportation, water, defense and the Internet, is highly vulnerable to cyber attack. Fast and resolute mitigating action is needed to avoid national disaster.”

Signatories to the letter included those from academia, think tanks, technology companies, and government agencies. These serious thinkers, not prone to hyperbole or exaggeration, warned that the grave risk of cyber attack was a real and present danger and called for the president to act immediately in creating a cyber-defense project modeled on the Manhattan Project. That call to action was in 2002.

Sadly, precious little has changed since then with regard to the state of the world’s cyber insecurity; if anything, the situation has grown worse. Sure, there have been nominal efforts, but little substantive progress. What is America’s overarching strategy to protect ourselves from the rapidly emerging technological threats we face? We simply do not have one — a serious problem we may live to regret.

A real Manhattan Project for cyber security would draw together some of the greatest minds of our time, from government, academia, the private sector, and civil society. Serving as convener and funder, the government would bring together the best and brightest of computer scientists, entrepreneurs, hackers, big-data authorities, scientific researchers, venture capitalists, lawyers, public policy experts, law enforcement officers, and public health officials, as well as military and intelligence personnel. Their goal would be to create a true national cyber-defense capability, one that could detect and respond to threats against our national critical infrastructures in real time.

Manhattan Project emblem.

This Manhattan Project would help generate the associated tools we need to protect ourselves, including more robust, secure, and privacy-enhanced operating systems. Through its research, it would also design and produce software and hardware that are self-healing and vastly more resistant to attack and resilient to failure than anything available today. Such a project of national and even global importance would have the vision, scope, resources, budgetary support, and, perhaps most importantly, the real sense of urgency required to make it a success.

By bringing together those at the forefront of their respective fields, this Manhattan Project would also be able to forecast the troubling waters ahead. Though today’s technologies have been a boon for illicit actors, they will pale in comparison to the breadth and scope of technological change that will rapidly unfold before us in the coming years. Soon a plethora of exponential technologies now just in their infancy, such as robotics, artificial intelligence, 3-D manufacturing, and synthetic biology, will be upon us, and with them will come concomitantly profound, perhaps even life-altering, opportunities for good, but also for harm. In this exponentially accelerating world the ability of a single person to affect many — for good or evil — is now scaling exponentially, with implications for our common security.

Despite this, we plod forward, adopting newer, brighter technologies, each promising to solve a new problem or deliver a particular convenience. The problem is not that technology is bad; in fact, science and technology hold the promise of profound benefit to humanity. The problem, as we have seen, is that those with technological know-how, be they criminals, terrorists, or rogue governments, can use their knowledge to exploit an exponentially growing portion of the general public to its detriment.

Last night President Obama acknowledged “no foreign nation, no hacker, should be able to shut down our networks, steal our trade secrets or invade the privacy of American families.” But encouraging Congress to pass legislation on identity theft and data breach notifications is not nearly enough. There is a gathering storm before us. The technological bedrock on which we are building the future of humanity is deeply unstable and like a house of cards can come crashing down at any moment. It’s time to build greater resiliency into our global information grid in order to avoid a colossal system crash. If we are to survive the progress offered by our technologies and enjoy their abundant bounty, we must first develop adaptive mechanisms of security that can match or exceed the exponential pace of the threats before us. There’s no time to lose.

Adapted from the forthcoming book Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It by Marc Goodman, available February 24. Order your copy today.

Marc Goodman has spent a career in law enforcement and technology. He has served as a street police officer, senior adviser to Interpol and futurist-in-residence with the FBI. As the founder of the Future Crimes Institute and the Chair for Policy, Law, and Ethics at Silicon Valley’s Singularity University, he continues to investigate the intriguing and often terrifying intersection of science and security, uncovering nascent threats and combating the darker sides of technology. Follow him on Twitter at @FutureCrimes.



Have a World Changing Startup? Apply Now For Inaugural SU Labs Accelerator


The Fall ’15 Class of Singularity University’s Startup Accelerator launches September 28th. Learn about the SU Startup Accelerator here, and submit your application to take part by June 30th. 

To change the world, it helps to have a good idea—but good ideas are a dime a dozen. The hard part is sharpening your idea and executing on it. It’s a long road from idea to execution, but how much time the trip takes depends on your speed.

This September, Singularity University Labs is launching the first accelerator program for startups tackling the world’s grand challenges with exponential technology.

What makes the SU Labs accelerator different from other startup accelerators? Focus. It isn’t just about iterating on the next gadget or making an impact in technology alone. SU Labs has its sights set on the moon, on changing the world by solving truly global issues with cutting-edge technology.

The 10-week accelerator offers a cohort of likeminded peers focused on making impact with the latest tech; direct access to accomplished experts, mentors, and Fortune 500 corporate partners; and a vibrant work space at NASA Research Park.

The program is looking to sign up founders with an early-stage prototype of a product or service. After an introductory bootcamp week, participants will be guided through four two-week themed sprints aimed at further refinement.

Every two weeks, each team will pitch their startup to a group of experts for constructive feedback in preparation for the week 10 demo day—the chance to pitch to top Silicon Valley investors and corporate partners.

If you’re an entrepreneur with an idea that just might change the world, apply now to put the best Singularity University has to offer behind your efforts. Together, we’ll build products and organizations to positively impact humanity at scale.

These 11 Technologies Will Go Big in 2015

If you thought 2014 was thrilling, here’s what has me most excited for 2015: 11 new technologies moving from deceptive to disruptive this year.

Magic Leap is working on advanced augmented reality.

1. Virtual reality: Expect a lot more action on the virtual and augmented reality front. 2014 saw the $2B acquisition of Oculus by Facebook. In 2015, we’ll see action from companies like Philip Rosedale’s High Fidelity (the successor to Second Life) and immersive 3D, 360-degree cameras from companies like Immersive Media (the company behind Google’s Street View), Jaunt, and Giroptic. Then there are game changers like Magic Leap (in which Google just led a $542 million investment round) that are developing technology to “generate images indistinguishable from real objects and then being able to place those images seamlessly into the real world.” Oculus, the darling of CES for the past few years, will be showing its latest Crescent Bay prototype and hopefully providing a taste of how its headset will interact with Nimble VR’s hand- and finger-tracking inputs. Nine new VR experiences will premiere at the Sundance Film Festival this year, ranging from powerful artistic and journalistic experiences like Project Syria to full “flying” simulations where you get to “feel” what it would be like for a human to fly.

Fellow Robots’ autonomous employee, OSHbot.

2. Mass-market robots: Late 2013 saw the acquisition by Google of eight robotics companies. 2015 is going to see the introduction of consumer-friendly robots in a store near you. Companies like SU’s Fellow Robots are creating autonomous “employees” called OSHbots that are roaming the floors of Lowe’s, helping you find and order items in the store. We’ll also see Softbank’s Pepper robot make the leap from Japan to U.S. retail stores. Pepper uses an emotion engine and computer vision to detect smiles, frowns, and surprise, and it uses speech recognition to sense tone of voice and to detect certain words indicative of strong feelings, like “love” and “hate.” The engine then computes a numeric score that quantifies the person’s overall emotion as positive or negative to help the store make a sale (a toy sketch of this kind of scoring appears after this list). At CES, Paris-based start-up Keecker will show off a robot that doubles as a movie projector after raising more than $250,000 for the idea on Kickstarter.

3. Autonomous vehicles: In 2015, we will see incredible developments in autonomous vehicle technology. Beyond Google, many major car brands are working on autonomous solutions. At CES, Volkswagen will bring the number of car brands on display into double figures for the first time this year. Companies like Mercedes say they will show off a new self-driving concept car that allows its passengers to face each other. BMW plans to show how one of its cars can be set to park itself via a smartwatch app. And Tesla, of course, has already demonstrated “autopilot” on its Model D.

Zano, a Kickstarter-backed quadcopter.

4. Drones everywhere: 2015 will be a big year for drones. They are getting cheaper, easier to use, and more automated, and they are now finding more useful and lucrative applications. These “drones” include everything from the $20 toys you can buy at RadioShack to the high-powered $1,000+ drones from companies like DJI and the super-simple and powerful Q500 Typhoon. Equipped with high-quality cameras and autopilot software, these consumer drones are essentially military-grade surveillance units now finding use in agriculture, construction, and energy applications. Drones get their own section of CES in 2015 with a new “unmanned systems” zone. Wales’ Torquing Group could provide one of its highlights with Zano, a Kickstarter-backed quadcopter small enough to fit in your hand but still capable of high-definition video capture.

5. Wireless power: “Remember when we had to use wires to charge our devices? Man, that was so 2014.” Companies like uBeam, Ossia, and others are developing solutions to charge your phones, laptops, wearables, etc. wirelessly as you go about your business. And this isn’t a “charging mat” that requires you to set your phone down… imagine having your phone in your pocket, purse, or backpack, charging as you walk around the room. Companies are taking different approaches as they develop this technology (uBeam uses ultrasound to transfer energy to piezoelectric receivers, while Ossia has a product called Cota that uses an ISM radio band, similar to WiFi, to transfer energy and data). Look out for a key “interface moment” in 2015 that will take wireless power mainstream.

IBM’s Watson, seen here defeating two Jeopardy champs, is now available to businesses and developers.

6. Data and machine learning: 2014 saw data- and algorithm-driven companies like Uber and Airbnb skyrocket. There is gold in your data. And data-driven companies are the most successful exponential organizations around. In 2015, collecting and mining that data will become more turnkey. Platforms like Experfy, for example, allow you to find data scientists who will develop algorithms or machine learning solutions for your business or project. Larger companies can explore partnering with IBM’s Watson Ecosystem, which is creating a community of everyone from developers to content providers to collaborate and create the next generation of cognitive apps. Companies built around algorithms, like Enlitic (which uses machine learning to detect tumors and make medical imaging diagnostics faster and cheaper), will become much more prevalent in 2015.

7. Large-scale genome sequencing and data mining: We are at the knee of the curve of human genome sequencing. In 2015, we will see explosive, exponential growth in genomics and longevity research. The cost of sequencing a single human genome has plummeted by orders of magnitude (it’s now around $1,000), and the amount of useful information we glean from mining all that data is skyrocketing. At Human Longevity, Inc. (HLI), a company I co-founded, we are aiming to sequence 1 million to 5 million full human genomes, microbiomes, metabolomes, proteomes, MRI scans, and more by 2020. We’re proud to have Franz Och, formerly the head of Google Translate, as the head of our machine learning team to mine the massive amount of data so that we can learn the secrets to extending the healthy human lifespan by 30-40 years.

Sensors are enabling “smart” infrastructure, wearables, even cutting-edge medical diagnostics.

8. Sensor explosion: In 2015, expect “everything” to be “smart.” The combination of sensors and wearables, increased connectivity, new manufacturing methods (like 3D printing), and improved data mining capabilities will create a smart, connected world—where our objects, clothes, appliances, homes, streets, and cars will be constantly communicating with one another. Soon, there will be trillions of sensors throughout the world. These sensors won’t just power smart ovens and sweatshirts—the same technology will allow companies like Miroculus to create a “microRNA detection platform that will constantly diagnose and monitor diseases at the molecular level.” Sensors are going to be taking over CES this year. Among the many applications: a shirt from Cityzen Sciences that can read your heart rate, a device from HealBe that can automatically log how many calories you consume, a garden sprinkler system from Blossom that decides when to switch on based on weather forecasts, pantry pads from SmartQSine that let you keep track of how much of your favorite foods are left, a pacifier from Pacifi that sends the baby’s temperature to a parent’s smartphone, and a new home security system from Myfox with tags you can attach to a door or window to trigger alarms before a break-in is attempted.

Voice-control systems continue to improve.

9. Voice-control and “language-independent” interaction: Using our fingers to operate smartphones and other technology was “so 2014.” In 2015, we will see significant advances in voice-controlled systems and wider mass-market adoption. Think of it as the first steps toward a Jarvis-like interface. Siri, Google Now, Cortana, and other voice-control systems are continuing to get better and better—so much so that they are being almost seamlessly integrated into our technology, across platforms. Soon, almost all connected devices will have voice-control capabilities. Companies like Wit.ai are creating their own open-source natural language interfaces for the Internet of Things and for developers to incorporate into their apps, hardware, and platforms. Jarvis-like systems such as the Ubi and Jibo, plus IBM’s Watson and the Xbox One Kinect, already allow natural language interactions and question-and-answer commands. And Google Translate, Skype Translate, and others are creating software that allows real-time translation between languages, further eliminating cultural and geographic barriers—the Star Trek universal translator is just around the corner!

3D Systems is 3D printing beautiful designs in sugar and other edible materials.

10. 3D printing: 3D printing will continue to grow rapidly in 2015 as the number of applications increases and as printers, scanners, and CAD modeling software become more accessible, cheaper, and easier to use. 2014 saw the first 3D printed object in space, courtesy of SU company Made In Space. 3D Systems continues to innovate around the clock and is releasing a plethora of exciting things in 2015, including 3D printed food and customized chocolates. Three years ago there were just two 3D printing firms at CES. This year, more than 24 are expected.

11. Bitcoin: While 2014 was a rough year for bitcoin (it was ranked the “worst-performing currency”), I am optimistic that 2015 will be a better year for the cryptocurrency. Weak currencies and uncertainty in the global economy, emerging smartphone markets in developing countries (billions coming online for the first time), better “interfaces,” and more commercial adopters who accept bitcoin as a form of payment will all play a role in a brighter bitcoin future. Finally, it’s worth noting that Apple Pay will ultimately teach an entire generation how to navigate life without cash… making the transition to bitcoin natural and easy.
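As promised in item 2, here is a toy sketch of the kind of scoring an emotion engine might perform. Pepper’s actual engine is proprietary and surely far more sophisticated; the signal names, word lists, and weights below are invented purely to illustrate the idea of fusing vision and speech cues into a single positive-or-negative score.

```python
# A toy sketch of fusing vision and speech cues into one emotion score.
# All signals, word lists, and weights here are invented for illustration.

POSITIVE_WORDS = {"love", "great", "wonderful"}
NEGATIVE_WORDS = {"hate", "awful", "terrible"}

def emotion_score(smiling, frowning, surprised, pitch_variance, words):
    """Return a score in [-1, 1]: positive suggests a happy customer."""
    score = 0.0
    score += 0.5 if smiling else 0.0
    score -= 0.5 if frowning else 0.0
    score += 0.1 if surprised else 0.0
    score += 0.2 * min(pitch_variance, 1.0)   # an animated voice reads as engaged
    score += 0.3 * sum(w in POSITIVE_WORDS for w in words)
    score -= 0.3 * sum(w in NEGATIVE_WORDS for w in words)
    return max(-1.0, min(1.0, score))         # clamp to [-1, 1]

print(emotion_score(True, False, False, 0.8, "i love this color".split()))  # 0.96
print(emotion_score(False, True, False, 0.2, "i hate the price".split()))   # -0.76
```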

Image Credit: Nan Palmero/Flickr; Magic Leap; Fellow Robots; Zano/Kickstarter; IBM; Sensilk; Bhupinder Nayyar/Flickr; 3D Systems

Can AI Save Us From AI?


Nick Bostrom’s book Superintelligence might just be the most debated technology book of the year. Since its release, big names in tech and science, including Stephen Hawking and Elon Musk, have warned of the dangers of artificial intelligence.

Bostrom says that while we don’t know exactly when artificial intelligence will rival human intelligence, many experts believe there is a good chance it will happen at some point during the 21st century.

He suggests that when AI reaches a human level of intelligence, it may very rapidly move past humans as it takes over its own development.

The concept has long been discussed and is often described as an “intelligence explosion”—a term coined by computer scientist I.J. Good fifty years ago. Good described the process like this:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.”
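A toy model makes the feedback loop in Good’s argument concrete. Assume, purely for illustration, that each machine generation doubles the capability of its predecessor and that a smarter designer also halves the length of the next design cycle. Both numbers are invented; nobody knows the real dynamics.

```python
# Toy model of recursive self-improvement. The doubling of capability
# and halving of cycle time are invented numbers, chosen only to show
# the runaway character of the feedback loop.
capability = 1.0   # relative to the first human-level machine
cycle_time = 1.0   # years the first design cycle takes
elapsed = 0.0

for generation in range(1, 11):
    elapsed += cycle_time
    capability *= 2   # each generation designs a better successor...
    cycle_time /= 2   # ...which also finishes its own successor faster
    print(f"gen {generation:2d}: {capability:6.0f}x after {elapsed:.3f} years")

# Total elapsed time converges toward 2 years while capability grows
# without bound: Good's "intelligence explosion."
```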

Bostrom says that once this happens, if we aren’t prepared, superintelligent AI might wipe us out as it acts to achieve its goals. He draws the analogy to humans redeveloping various ecosystems and, in the process, causing animal extinctions.

“If we think about what we are doing to various animal species, it’s not so much that we hate them,” Bostrom told IEEE Spectrum. “For the most part, it’s just that we have other uses for their habitats, and they get wiped out as a side effect.”

In one scenario Bostrom outlines, an AI programmed to make as many paper clips as possible might move against humans as it calculates how likely we are to turn it off. Or it might view us as a source of atoms for more paper clips.

Broader and seemingly beneficial goal setting might backfire too.

For example, a machine with the goal of making humans happy might decide the best way to do this is by implanting electrodes in our brains’ pleasure centers—this “solves” the problem, but undoubtedly not to the liking of most implantees.

How then can we reap the vast problem-solving powers of superintelligent AI while avoiding the risks it poses?

One way might be to develop artificial intelligence in a “sandbox” environment, limiting its abilities by keeping it disconnected from other computers or the internet. But Bostrom thinks a superintelligent AI might easily get around such controls—even perhaps, by being on its best behavior to fool its handlers into believing it’s ready for the real world.

Instead, according to Bostrom, we should focus on the AI’s motivations. This is, as outlined above, a very tricky problem—not least because human values change over time. In short, we aren’t smart enough to train a superintelligent AI—but it is.

Bostrom suggests we program a superintelligent AI to figure out what we would have asked it to do if we had millennia to ponder the question, knew more than we do now, and were smarter.

“The idea is to leverage the superintelligence’s intelligence, to rely on its estimates of what we would have instructed it to do,” Bostrom suggests. (Check out this IEEE Spectrum podcast for a good synopsis of Bostrom’s argument.)

Why think about all this in such detail now? According to Bostrom, while the risk is huge, so is the payoff.

“All the technologies you can imagine humans developing in the fullness of time, if we had had 10,000 years to work on it, could happen very soon after superintelligence is developed because the research would then be done by the superintelligence, which would be operating at digital rather than biological timescales.”

So, what do you think? Can artificial intelligence save us from artificial intelligence? 

Image Credit: Shutterstock.com; Nick Bostrom/Amazon

Summit Europe: To Anticipate the Future Is to Abandon Intuition


In the evolution of information technology, acceleration is the rule—and this fact isn’t easy for the human brain to grasp.

You’d be hard pressed to find someone who isn’t at least intuitively aware of the speed of information technology. We’ve become used to the idea that the performance of our devices has regularly doubled for the last few decades.

What is less intuitive is the rate at which this doubling results in massive leaps. The price performance of today’s devices is a billion times better than that of computers in 1980. But even this is not completely outside the realm of immediate experience. We know even our smartphones are much more capable than the first computers we owned.

It’s here, however, that intuition fails us completely. Over the course of the two-day Summit Europe in Amsterdam this week, two key observations emerged from the slew of talks.

Exponential doublings start slow before making extremely rapid progress in just a few steps.

First, the exponential growth in computing isn’t just something that’s happened—it also appears likely to continue in the foreseeable future. Ray Kurzweil notes that, although the current cycle has been driven by integrated circuits, it won’t end when we’ve exhausted their potential.

Exponential progress has been surprisingly consistent from the first computers—with one technology picking up where the last one left off. (Kurzweil lists earlier computing technologies as electromechanical, relay, vacuum tube, and transistors.)

As exponential growth continues, we can expect another billion-fold improvement in the coming decades.

Second, computing’s exponential pace isn’t confined to the device in your pocket, lap, or desk. The power of digital information is infiltrating other fields and driving them at a similarly torrid pace.

The key to anticipating—if not precisely predicting—the future of technology is understanding these exponential curves. At first they double as small numbers (.01 to .02 to .04, etc.) and appear slow and linear. This is deceptive. When the doubling hits one, two, and so on—it takes a mere 30 steps to reach a billion.

And critically, half of all exponential growth happens in the last step.
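To see how deceptive that arithmetic is, here’s a minimal Python sketch (my illustration of the math above, not something from the talks):

```python
# Doublings look flat while the numbers are small, then explode.
value = 0.01
for step in range(1, 31):
    value *= 2

print(f"{value:,.0f}")  # 0.01 doubled 30 times is already ~10.7 million
print(2 ** 30)          # starting from 1, 30 doublings pass a billion

# Each doubling adds as much as all prior growth combined, which is
# why half of the total always arrives in the final step.
print(2 ** 30 - 2 ** 29 == 2 ** 29)  # True
```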

Anyone basing their predictions on an exponential trend will, by definition, look like a hack and a genius in short succession. Why? Because one step before an exponential process reaches the predicted level, it is only halfway there, so the prediction still appears distant right up until the final doubling proves it correct.

Take a breath to appreciate what that means. To stay ahead of an exponential curve, you have to make plans that few of your peers will fathom until the very last moment. It’s easy to see how much pressure and criticism that inevitably invites.

It is small wonder that few people—even if they actually appreciate exponential trends—are able to not only employ this philosophy but stick by their convictions. Our brains and social structures are simply not built to appreciate acceleration.

This is why, today, we are often skeptical of and surprised by technology. And Summit Europe was nothing if not a tour of the technologies that we’ll be most skeptical of and surprised by in the coming years.


What are these? Artificial intelligence, computing and networks, robotics, 3D printing, genomics, and health and medicine. Many of these fields have, of course, been called revolutionary before now. But it would be misguided to underestimate their power to do great things in the future.

Because these fields, in one capacity or another, are hitched to exponentially growing computing power, they may look disappointingly linear (maybe for a long time) before becoming suddenly, precipitously surprising.

Are we poised to wrest biology from nature? To develop machines with intelligence that rivals or outstrips our own? To manipulate the material world on molecular scales? If such predictions sound outlandish—you have a human brain.

But don’t let that blind you to the more general rule: As the world is increasingly digitized, many technologies you think belong to the distant future will arrive much sooner than expected.

Image Credit: DieselDemon/Flickr

Summit Europe 2014: Tech’s Pace Is Like a Dozen Gutenberg Moments Happening at the Same Time


From sunny San Diego last week for the Exponential Medicine conference to the rainy and overcast Netherlands for Summit Europe this week—I’m on the road with Singularity University. At the DeLaMar theater in central Amsterdam, some 900 participants are here to attend the largest event in SU’s history.

Whereas Exponential Medicine took the theme of exponential technology and applied it to health and medicine, Summit Europe will drill down into the concepts and consequences of our exponential pace.

SU’s global ambassador and founding executive director, Salim Ismail, set the stage.

We’re at an inflection point, he said, where we are digitizing and augmenting the human experience with technology. That digitization is accelerating change. The question is: How can individuals and society, more generally, navigate it?

Five hundred years ago, Johannes Gutenberg’s printing press freed information as never before. Ismail framed the current pace of technology as Gutenberg to the extreme: “We’re having about a dozen Gutenberg moments all at the same time.”

It’s true…currently, I’m listening to experts communicate novel ideas. I take notes on a laptop, connect to the internet, find images, load the article—and publish (for free). Ideas pass from the mouths of the few to the brains of the many in mere moments.

This flow of information is driving idea cross-pollination and innovation on a massive scale.

Listening to Ismail’s talk, I was reminded of a quote. Generally attributed to Elbert Hubbard, it goes like this, “The world is moving so fast these days that a man who says it can’t be done is generally interrupted by someone doing it.”

I wasn’t struck by the sentiment—a fairly common one around these parts—but the period. Hubbard was a denizen of the 19th and early 20th centuries (1856-1915), yet the sentence feels so modern that Peter Diamandis could have said it yesterday.

Our sense of cultural and technological acceleration isn’t new.

Hubbard lived at a time when scientific revolutions were common currency. He bore witness to Darwin, Einstein, Edison, and Ford. In his era, humankind flipped from a species preoccupied with feeding itself to one in which a tiny fraction feeds the rest (less than 2% in the US today)—freeing tens of millions to do myriad other tasks.

If you believe we’re progressing at an exponential rate, however, Hubbard’s words are not just doubly true today; they’re orders of magnitude truer. That is what translates into Ismail’s dozen simultaneous Gutenberg moments.

Then as now, people were excited and anxious in equal measure. Ismail showed a video of someone riding in one of Google’s self-driving cars as it navigated an obstacle course at top speed. The rider is amazed and a little nervous—the video ends with him letting out a little involuntary scream. Today, the world is letting out a little collective Google scream.

Will we let the latest technology take the wheel? Perhaps not at first. But as a car (or any technology) proves it can reliably handle something normally entrusted to humans—it will become as accepted, mundane, and utterly useful as an elevator.

We’ll be covering Summit Europe today and tomorrow, so stay tuned!

Image Credit: Willi Heidelbach/Wikimedia Commons

Exponential Medicine 2014 Conference Kicks Off in San Diego


The weather is fine and the future on display. I’m in San Diego covering Singularity University’s Exponential Medicine conference through Wednesday. The four-day event kicked off yesterday at the Hotel Del Coronado in San Diego, where Singularity University (SU) faculty set the frame of reference—technology’s exponential pace.

A theme seemed to run through the first talks. While many industries increasingly embrace the digital age, healthcare lags behind. But healthcare will catch up. According to Peter Diamandis, cofounder and SU executive chairman, no industry or field stands to face more disruption or greater reinvention than healthcare in the next decade.

“We’ll make each individual a CEO of their own health,” Diamandis said.

What’s the driving force? Diamandis noted the 100-billion-fold improvement in computing power brought on by the last forty years of Moore’s Law. But it isn’t just computing that’s special. The convergence of computing with cheaper, more powerful sensors is set to deliver a host of health technologies.

Early GPS units cost on the order of $120K and were sizable beasts. Now they come stock in your average smartphone. The gyro on that same phone? That was the size of a podium, consumed a few hundred watts of power, and cost $250K on the Space Shuttle. Now, it’s the size of a fingernail, costs a buck, and at one point, guided a $30 mini-drone through a series of autonomous flips on stage during a talk on robotics by Singularity University’s Dan Barry.

And of course, health sensors measuring an increasing list of vital signs are undergoing a similar transformation.


Sensor-driven health technologies are just now getting underway—and I’ll likely have more on them later—but as Daniel Kraft, Exponential Medicine’s founding executive director, noted, more regular measurement of our body’s vital signs will shift healthcare from intermittent and reactive to continuous and predictive.

“We’re in an era of prescribing apps and devices,” Kraft said.

But it isn’t just sensor-driven healthcare. Andy Christensen, 3D Systems VP of Personalized Surgery and Medical Devices, described the power of medical 3D printing—personalized implants, better preparation through simulated surgery, tailored templates for complex operations, and custom braces and prosthetics that could be made no other way.

Christensen reminded the audience of the oft-quoted phrase that in 3D printing, “complexity is free.”

What else? According to Raymond McCauley, SU biotech track chair, we’re undergoing a genetic engineering renaissance. In vitro fertilization already allows for early genetic testing and embryo selection, but rapidly advancing techniques, like CRISPR, allow us to knock out, knock in, repair, and edit the genome in “drag and drop” genetic engineering.

What might these capabilities yield in the future? Researchers already know that some people lacking particular genes are more resistant to flu, HIV, cardiac disease, and Alzheimer’s and have stronger bones and leaner muscles. McCauley said knocking out these genes in the rest of us could one day confer similar advantages.

The stage is set for an interesting few days with talks ranging from artificial intelligence, big data, and connected health to genomics, regenerative medicine, and longevity.

Stay tuned or tune into the live stream here.

Image Credit: Shutterstock

This Week’s Awesome Stories from Around the Web (Through Nov 8)


In light of Halloween, a tragic failure for the space industry, and another voting cycle in the U.S., this week saw a proliferation of articles all about different kinds of fear.

Here at Hub, we avoid fearmongering as a policy simply because, looking past what occupies most of the news cycle, the world and our lives are getting better…MUCH better. Still, it’s worth exploring where fears arise, what the underlying issues are, and ultimately how we will overcome them. Enjoy this week’s stories!

AI: Don’t Fear Artificial Intelligence
Adam Elkus | Slate
“Thierer diagnoses six factors that drive technopanics: generational differences that lead to fear of the new, “hypernostalgia” for illusory good old days, the economic incentive for reporters and pundits to fear-monger, special interests jostling for government favor, projection of moral and cultural debates onto new technologies, and elitist attitudes among academic skeptics and cultural critics disdainful of new technologies and tools adopted by the mass public. All of these are perfectly reasonable explanations, but a seventh factor also matters: the psychological consequences of human dependence on complex technology in almost all areas of modern life.”

SPACE: How safe can we really make space for future tourists?
Tim Bowler | BBC
“‘I’m in the private space business because I don’t feel like waiting for something to happen.'”

BIOLOGY: Why Scientists Think Completely Unclassifiable and Undiscovered Life Forms Exist
Jason Koebler | Motherboard
“‘This quest of synthetic biologists to build radically novel organisms also offers possible models for unusual varieties of life that may be sought in nature,” they wrote. “The discovery of new building blocks and organisms from a new domain would likely have major implications for biotechnology, agriculture, human health, and synthetic biology efforts.'”

SECURITY: The other Ebola fear: Your civil liberties
David Kravets | Ars Technica
“‘This ain’t gonna be over until the global community can put the resources into the area that is medically underserved and provide the basic infrastructure to beat this inevitable thing of mother nature.'”

PSYCHOLOGY: The truth about the paranormal
David Robson | BBC
“‘It’s easy to think of yourself as the one holding the rational cards, but it’s wiser to understand that every one of us are going to be prone to those mistakes when we feel like we are lacking control,’ says Whitson. ‘We should all be ready to evaluate our assumptions more thoughtfully.'”

SOCIETY: Psycholitics: The Science of Why You Vote the Way You Do
Brian Resnick, Mauro Whiteman, Reena Flores | CityLab
“Here’s how researchers got the Swiss kids to predict that President Obama would win in 2008: They showed them pictures of the candidates and asked, ‘Who would you rather be captain of your ship?’ That was all that it took for the children, aged 5 to 13, to guess the winner of this election to a degree greater than random chance.”

INNOVATION: Forget the lone genius, it’s copycats who drive progress
Kat McGowan | Aeon
“Copying is the mighty force that has allowed the human race to move from stone knives to remote-guided drones, from digging sticks to crops that manufacture their own pesticides. Plenty of animals can innovate, but no other species on earth can imitate with the skill and accuracy of a human being. We’re natural-born rip-off artists. To be human is to copy.”

[image: Donnie Nunley/Flickr]

An AI-Designed Drug Is Moving Toward Approval at an Impressive Clip


For the first time, an AI-designed drug is in the second phase of clinical trials. Recently, the team behind the drug published a paper outlining how they developed it so fast.

Made by Insilico Medicine, a biotechnology company based in New York and Hong Kong, the drug candidate targets idiopathic pulmonary fibrosis, a deadly disease that causes the lungs to harden and scar over time. The damage is irreversible, making it increasingly difficult to breathe. The disease doesn’t have known triggers, and scientists have struggled to find proteins or molecules behind it that could serve as targets for treatment.

For medicinal chemists, developing a cure for the disease is a nightmare. For Dr. Alex Zhavoronkov, founder and CEO of Insilico Medicine, the challenge represents a potential proof of concept that could transform the drug discovery process using AI—and provide hope to millions of people struggling with the deadly disease.

The drug, dubbed ISM018_055, had AI infused throughout its entire development process. With Pharma.AI, the company’s drug design platform, the team used multiple AI methods to find a potential target for the disease and then generated promising drug candidates.

ISM018_055 stood out for its ability to reduce scarring in cells and in animal models. Last year, the drug completed a Phase I clinical trial in 126 healthy volunteers in New Zealand and China to test its safety and passed with flying colors. The team has now described their entire platform and released their data in Nature Biotechnology.

The timeline for drug discovery, from finding a target to completion of Phase I clinical trials, is around seven years. With AI, Insilico completed these steps in roughly half that time.

“Early on I saw the potential to use AI to speed and improve the drug discovery process from end to end,” Zhavoronkov told Singularity Hub. The concept was initially met with skepticism from the drug discovery community. With ISM018_055, the team is putting their AI platform “to the ultimate test—discover a novel target, design a new molecule from scratch to inhibit that target, test it, and bring it all the way into clinical trials with patients.”

The AI-designed drug has mountains to climb before it reaches drugstores. For now, it’s only been shown to be safe in healthy volunteers. The company launched Phase II clinical trials last summer, which will further investigate the drug’s safety and begin to test its efficacy in people with the disease.

“Lots of companies are working on AI to improve different steps in drug discovery,” said Dr. Michael Levitt, a Nobel laureate in chemistry, who was not involved in the work. “Insilico…not only identified a novel target, but also accelerated the whole early drug discovery process, and they’ve quite successfully validated their AI methods.”

The work is “so exciting to me,” he said.

The Long Game

The first stages of drug discovery are a bit like high-stakes gambling.

Scientists pick a target in the body that likely causes a disease and then painstakingly design chemicals to interfere with the target. The candidates are then scrutinized for a host of desirable properties. For example, can it be absorbed as a pill or with an inhaler rather than an injection? Can the drug reach the target at high enough levels to block scarring? Can it be easily broken down and eliminated by the kidneys? Ultimately, is it safe?
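To make that screening step concrete, here’s a minimal, hypothetical Python sketch. The property names and thresholds are illustrative rule-of-thumb values in the spirit of Lipinski’s “rule of five,” not Insilico’s actual criteria:

```python
# Hypothetical early-stage triage: keep only candidates that clear
# rule-of-thumb property filters (illustrative thresholds only).
candidates = [
    {"name": "cmpd_001", "mol_weight": 420.0, "logp": 3.1, "oral_ok": True},
    {"name": "cmpd_002", "mol_weight": 650.0, "logp": 6.2, "oral_ok": False},
]

def passes_screen(c: dict) -> bool:
    return (
        c["mol_weight"] <= 500.0  # small enough to absorb as a pill
        and c["logp"] <= 5.0      # soluble enough to reach its target
        and c["oral_ok"]          # preferred dosing route
    )

print([c["name"] for c in candidates if passes_screen(c)])  # ['cmpd_001']
```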

The entire validation process, from discovery to approval, can take more than a decade and billions of dollars. Most of the time, the gamble doesn’t pay off. Roughly 90 percent of initially promising drug candidates fail in clinical trials. Even more candidates don’t make it that far.

The first stage—finding the target for a potential drug—is essential. But the process is especially hard for diseases without a known cause or for complex health problems such as cancer and age-related disorders. With AI, Zhavoronkov wondered if it was possible to speed up the journey. In the past decade, the team built several “AI scientists” to help their human collaborators.

The first, PandaOmics, uses multiple algorithms to zero in on potential targets in large datasets—for example, genetic or protein maps and data from clinical trials. For idiopathic pulmonary fibrosis, the team trained the tool on data from tissue samples of patients with the disease and added text from a universe of online scientific publications and grants in the field.

In other words, PandaOmics behaved like a scientist. It “read” and synthesized existing knowledge as background and incorporated clinical trial data to generate a list of potential targets for the disease with a focus on novelty.

A protein called TNIK emerged as the best candidate. Although not previously linked to idiopathic pulmonary fibrosis, TNIK had been associated with multiple “hallmarks of aging”—the myriad broken-down genetic and molecular processes that accumulate as we get older.
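Insilico hasn’t published PandaOmics’ internals, so the following Python sketch is only a toy reconstruction of the general idea of ranking targets by disease evidence weighted toward novelty; every number in it is invented:

```python
# Toy target ranking: combine a disease-association score with a
# novelty bonus that discounts heavily studied genes. All values
# are made up for illustration.
candidates = {
    # gene: (association_score, prior_publications)
    "TNIK":   (0.92, 12),
    "TGFB1":  (0.95, 4800),
    "GENE_X": (0.40, 3),
}

def score(association: float, publications: int, novelty_weight: float = 0.5) -> float:
    novelty = 1.0 / (1.0 + publications)  # rarely studied genes score near 1
    return (1 - novelty_weight) * association + novelty_weight * novelty

ranked = sorted(candidates, key=lambda g: score(*candidates[g]), reverse=True)
print(ranked)  # ['TNIK', 'TGFB1', 'GENE_X']: strong evidence plus novelty wins
```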

With a potential target in hand, another AI engine, called Chemistry42, used generative algorithms to find chemicals that could latch onto TNIK. This type of AI generates text responses in popular programs like ChatGPT, but it can also dream up new medicines.

“Generative AI as a technology has been around since 2020, but now we are in a pivotal moment of both broad commercial awareness and breakthrough achievements,” said Zhavoronkov.

With expert input from human medicinal chemists, the team eventually found their drug candidate: ISM018_055. The drug was safe and effective at reducing scarring in the lungs in animal models. Surprisingly, it also protected the skin and kidneys from fibrosis, which often occurs during aging.

In late 2021, the team launched a clinical trial in Australia testing the drug’s safety. Others soon followed in New Zealand and China. The results in healthy volunteers were promising. The AI-designed drug was readily absorbed by the lungs when taken as a pill and then broken down and eliminated from the body without notable side effects.

It’s a proof of concept for AI-based drug discovery. “We are able to demonstrate beyond a doubt that this method of finding and developing new treatments works,” said Zhavoronkov.

First in Class

The AI-designed drug moved on to the next stage of clinical trials, Phase II, in both the US and China last summer. The drug is being tested in people with the disease using the gold standard of clinical trials: randomized, double-blind, and with a placebo.

“Many people say they are doing AI for drug discovery,” said Dr. Alán Aspuru-Guzik at the University of Toronto, who was not involved in the new study. “This, to my knowledge, is the first AI-generated drug in stage II clinical trials. A true milestone for the community and for Insilico.”

The drug’s success still isn’t a given. Drug candidates often fail during clinical trials. But if successful, it could potentially have a wider reach. Fibrosis readily occurs in multiple organs as we age, eventually grinding normal organ functions to a halt.

“We wanted to identify a target that was highly implicated in both disease and aging, and fibrosis…is a major hallmark of aging,” said Zhavoronkov. The AI platform found one of the most promising “dual-purpose targets related to anti-fibrosis and aging,” which may not only save lives in people with idiopathic pulmonary fibrosis but also potentially slow aging for us all.

To Dr. Christoph Kuppe of RWTH Aachen, who was not involved in the work, the study is a “landmark” that could reshape the trajectory of drug discovery.

With ISM018_055 currently undergoing Phase II trials, Zhavoronkov is envisioning a future where AI and scientists collaborate to speed up new treatments. “We hope this [work] will drive more confidence, and more partnerships, and serve to convince any remaining skeptics of the value of AI-driven drug discovery,” he said.

Image Credit: Insilico

This Week’s Awesome Tech Stories From Around the Web (Through March 16)

ARTIFICIAL INTELLIGENCE

Cognition Emerges From Stealth to Launch AI Software Engineer Devin
Shubham Sharma | VentureBeat
“The human user simply types a natural language prompt into Devin’s chatbot style interface, and the AI software engineer takes it from there, developing a detailed, step-by-step plan to tackle the problem. It then begins the project using its developer tools, just like how a human would use them, writing its own code, fixing issues, testing and reporting on its progress in real-time, allowing the user to keep an eye on everything as it works.”

ROBOTICS

Covariant Announces a Universal AI Platform for Robots
Evan Ackerman | IEEE Spectrum
“[On Monday, Covariant announced] RFM-1, which the company describes as a robotics foundation model that gives robots the ‘human-like ability to reason.’ That’s from the press release, and while I wouldn’t necessarily read too much into ‘human-like’ or ‘reason,’ what Covariant has going on here is pretty cool. …’Our existing system is already good enough to do very fast, very variable pick and place,’ says Covariant co-founder Pieter Abbeel. ‘But we’re now taking it quite a bit further. Any task, any embodiment—that’s the long-term vision. Robotics foundation models powering billions of robots across the world.'”

COMPUTING

Cerebras Unveils Its Next Waferscale AI Chip
Samuel K. Moore | IEEE Spectrum
“Cerebras says its next generation of waferscale AI chips can do double the performance of the previous generation while consuming the same amount of power. The Wafer Scale Engine 3 (WSE-3) contains 4 trillion transistors, a more than 50 percent increase over the previous generation thanks to the use of newer chipmaking technology. The company says it will use the WSE-3 in a new generation of AI computers, which are now being installed in a datacenter in Dallas to form a supercomputer capable of 8 exaflops (8 billion billion floating point operations per second).”

SPACE

SpaceX Celebrates Major Progress on the Third Flight of Starship
Stephen Clark | Ars Technica
“SpaceX’s new-generation Starship rocket, the most powerful and largest launcher ever built, flew halfway around the world following liftoff from South Texas on Thursday, accomplishing a key demonstration of its ability to carry heavyweight payloads into low-Earth orbit. The successful launch builds on two Starship test flights last year that achieved some, but not all, of their objectives and appears to put the privately funded rocket program on course to begin launching satellites, allowing SpaceX to ramp up the already-blistering pace of Starlink deployments.”

AUTOMATION

This Self-Driving Startup Is Using Generative AI to Predict Traffic
James O’Donnell | MIT Technology Review
“The new system, called Copilot4D, was trained on troves of data from lidar sensors, which use light to sense how far away objects are. If you prompt the model with a situation, like a driver recklessly merging onto a highway at high speed, it predicts how the surrounding vehicles will move, then generates a lidar representation of 5 to 10 seconds into the future (showing a pileup, perhaps).”

TRANSPORTATION

Electric Cars Are Still Not Good Enough
Andrew Moseman | The Atlantic
“The next phase, when electric cars leap from early adoption to mass adoption, depends on the people [David] Rapson calls ‘the pragmatists’: Americans who will buy whichever car they deem best and who are waiting for their worries about price, range, and charging to be allayed before they go electric. The current slate of EVs isn’t winning them over.”

SPACE

Mining Helium-3 on the Moon Has Been Talked About Forever—Now a Company Will Try
Eric Berger | Ars Technica
“Two of Blue Origin’s earliest employees, former President Rob Meyerson and Chief Architect Gary Lai, have started a company that seeks to extract helium-3 from the lunar surface, return it to Earth, and sell it for applications here. …The present lunar rush is rather like a California gold rush without the gold. By harvesting helium-3, which is rare and limited in supply on Earth, Interlune could help change that calculus by deriving value from resources on the moon. But many questions about the approach remain.”

ARTIFICIAL INTELLIGENCE

What Happens When ChatGPT Tries to Solve 50,000 Trolley Problems?
Fintan Burke | Ars Technica
“Autonomous driving startups are now experimenting with AI chatbot assistants, including one self-driving system that will use one to explain its driving decisions. Beyond announcing red lights and turn signals, the large language models (LLMs) powering these chatbots may ultimately need to make moral decisions, like prioritizing passengers’ or pedestrians’ safety. But is the tech ready? Kazuhiro Takemoto, a researcher at the Kyushu Institute of Technology in Japan, wanted to check if chatbots could make the same moral decisions when driving as humans.”

FUTURE OF FOOD

States Are Lining Up to Outlaw Lab-Grown Meat
Matt Reynolds | Wired
“As well as the Florida bill, there is also proposed legislation to ban cultivated meat in Alabama, Arizona, Kentucky, and Tennessee. If all of those bills pass—an admittedly unlikely prospect—then some 46 million Americans will be cut off from accessing a form of meat that many hope will be significantly kinder to the planet and animals.”

COMPUTING

Physicists Finally Find a Problem Only Quantum Computers Can Do
Lakshmi Chandrasekaran | Quanta
“Quantum computers are poised to become computational superpowers, but researchers have long sought a viable problem that confers a quantum advantage—something only a quantum computer can solve. Only then, they argue, will the technology finally be seen as essential. They’ve been looking for decades. …Now, a team of physicists including [John] Preskill may have found the best candidate yet for quantum advantage.”

Image Credit: SpaceX

This Gene Increases the Risk of Alzheimer’s. Scientists Finally Know Why


At the turn of the 20th century, Dr. Alois Alzheimer noticed peculiar changes in a freshly removed brain. The brain had belonged to a 50-year-old woman who gradually lost her memory and struggled with sleep, increased aggression, and eventually paranoia.

Under the microscope, her brain was littered with tangles of protein clumps. Curiously, shiny bubbles of fat had also accumulated inside brain cells, but they weren’t neurons—the brain cells that spark with electricity and underlie our thoughts and memories. Instead, the fatty pouches built up in supporting brain cells called glia.

Scientists have long thought toxic protein clusters lead to or exacerbate Alzheimer’s disease. Decades of work aimed at breaking down these clumps have mostly failed—earning the endeavor the nickname “graveyard of dreams.” There has been a recent win, though. In early 2023, the US Food and Drug Administration approved an Alzheimer’s drug that slightly slowed cognitive decline by inhibiting protein clumps, although amid much controversy over its safety.

A growing number of experts are exploring other ways to battle the mind-eating disorder. Stanford’s Dr. Tony Wyss-Coray thinks an answer may come from the original source: Alois Alzheimer’s first descriptions of fatty bubbles inside glial cells, but with a modern genetic twist.

In a new study, the team targeted fatty bubbles as a potential driver of Alzheimer’s disease. Using donated brain tissue from people with the disorder, they pinpointed one cell type that’s especially vulnerable to the fatty deposits—microglia, the brain’s main immune cells.

Not all people with Alzheimer’s had overly fatty microglia. Those who did harbored a specific variant of a gene, called APOE4. Scientists have long known that APOE4 increases the risk of Alzheimer’s, but the reason why has remained a mystery.

The fatty bubbles may be the answer. Lab-made microglia cells from people with APOE4 rapidly accumulated bubbles and spewed them onto neighboring cells. When treated with liquids containing the bubbles, healthy neurons developed classical signs of Alzheimer’s disease.

The results uncover a new link between genetic risk factors for Alzheimer’s and fatty bubbles in the brain’s immune cells, the team wrote in their paper.

“This opens up a new avenue for therapeutic development,” the University of Pennsylvania’s Dr. Michal Haney, who was not involved in the study, told New Scientist.

The Forgetting Gene

Two types of proteins have been at the heart of Alzheimer’s research.

One is beta-amyloid. These proteins start as wispy strands, but gradually they grasp each other and form large clumps that gunk up the outside of neurons. Another culprit is tau. Normally innocuous, tau eventually forms tangles inside neurons that can’t be easily broken down.

Together, the proteins inhibit normal neuron functions. Dissolving or blocking these clumps should, in theory, restore neuronal health, but most treatments have shown minimal or no improvement to memory or cognition in clinical trials.

Meanwhile, genome-wide studies have found a gene called APOE is a genetic regulator of the disease. It comes in multiple variants: APOE2 is protective, whereas APOE4 increases disease risk up to 12-fold—earning its nickname the “forgetting gene.” Studies are underway to genetically deliver protective variants that wipe out the negative consequences of APOE4. Researchers hope this approach can halt memory or cognitive deficits before they occur.

But why are some APOE variants protective, while others are not? Fatty bubbles may be to blame.

Cellular Gastronomy

Most cells contain little bubbles of fat. Dubbed “lipid droplets,” they’re an essential energy source. The bubbles interact with other cellular components to control a cell’s metabolism.

Each bubble has a core of intricately arranged fats surrounded by a flexible molecular “cling wrap.” Lipid droplets can rapidly grow or shrink in size to buffer toxic levels of fatty molecules in the cell and direct immune responses against infections in the brain.

APOE is a major gene regulating these lipid droplets. The new study asked if fatty deposits are the reason APOE4 increases the risk of Alzheimer’s disease.

The team first mapped all proteins in different types of cells in brain tissues donated from people with Alzheimer’s. Some had the dangerous APOE4 variant; others had APOE3, which doesn’t increase disease risk. In all, the team analyzed roughly 100,000 cells—including neurons and myriad other brain cell types, such as the immune cell microglia.

Comparing results from the two genetic variants, the team found a stark difference. People with APOE4 had far higher levels of an enzyme that generates lipid droplets, but only in microglia. The droplets collected around the nucleus—which houses our genetic material—similar to Alois Alzheimer’s first description of fatty deposits.

The lipid droplets also increased the levels of dangerous proteins in Alzheimer’s disease, including amyloid and tau. In a standard cognitive test in mice, more lipid droplets correlated to worse performance. Like humans, mice with the APOE4 variant had far more fatty microglia than those with the “neutral” APOE3, and the immune cells had higher levels of inflammation.

Although the droplets accumulated inside microglia, they also readily harmed nearby neurons.

In a test, the team transformed skin cells from people with APOE4 into a stem cell-like state. With a specific dose of chemicals, they nudged the cells to develop into neurons with the APOE4 genotype.

They then gathered secretions from microglia with either high or low levels of lipid droplets and treated the engineered neurons with the liquids. Secretions with low levels of fatty bubbles didn’t harm the cells. But neurons given doses high in lipid droplets rapidly changed tau—a classic Alzheimer’s protein—into its disease-causing form. Eventually, these neurons died off.

This isn’t the first time fatty bubbles have been linked to Alzheimer’s disease, but we now have a clearer understanding of why. Lipid droplets accumulate in microglia with APOE4, transforming these cells into an inflammatory state that harms nearby neurons—potentially leading to their death. The study adds to recent work highlighting irregular immune responses in the brain as a major driver of Alzheimer’s and other neurodegenerative diseases.

It’s yet unclear whether lowering lipid droplet levels can relieve Alzheimer’s symptoms in people with APOE4, but the team is eager to try.

One route is to genetically inhibit the enzyme that creates the lipid droplets in APOE4 microglia. Another option is to use drugs to activate the cell’s built-in disposal system—basically, a bubble full of acid—to break down the fatty bubbles. It’s a well-known strategy that’s previously been used to destroy toxic protein clumps, but it could be reworked to clear out lipid droplets.

“Our findings suggest a link between genetic risk factors for Alzheimer’s disease with microglial lipid droplet accumulation…potentially providing therapeutic strategies for Alzheimer’s disease,” wrote the team in their paper.

As a next step, they’re exploring whether the protective APOE2 variant can thwart lipid droplet accumulation in microglia, and perhaps, eventually save the brain’s memory and cognition.

Image Credit: Richard Watts, PhD, University of Vermont and Fair Neuroimaging Lab, Oregon Health and Science University

Watch an AI Robot Dog Rock an Agility Course It’s Never Seen Before


Robots doing feats of acrobatics might be a great marketing trick, but typically these displays are highly choreographed and painstakingly programmed. Now researchers have trained a four-legged AI robot to tackle complex, previously unseen obstacle courses in real-world conditions.

Creating agile robots is challenging due to the inherent complexity of the real world, the limited amount of data robots can collect about it, and the speed at which decisions need to be made to carry out dynamic movements.

Companies like Boston Dynamics have regularly released videos of their robots doing everything from parkour to dance routines. But as impressive as these feats are, they typically involve humans painstakingly programming every step or training the robots in the same highly controlled environments over and over.

This process seriously limits the ability to transfer skills to the real world. But now, researchers from ETH Zurich in Switzerland have used machine learning to teach their robot dog ANYmal a suite of basic locomotive skills that it can then string together to tackle a wide variety of challenging obstacle courses, both indoors and outdoors, at speeds of up to 4.5 miles per hour.

“The proposed approach allows the robot to move with unprecedented agility,” write the authors of a new paper on the research in Science Robotics. “It can now evolve in complex scenes where it must climb and jump on large obstacles while selecting a non-trivial path toward its target location.”

To create a flexible yet capable system, the researchers broke the problem down into three parts and assigned a neural network to each. First, they created a perception module that takes input from cameras and lidar and uses them to build a picture of the terrain and any obstacles in it.

They combined this with a locomotion module that had learned a catalog of skills designed to help it traverse different kinds of obstacles, including jumping, climbing up, climbing down, and crouching. Finally, they merged these modules with a navigation module that could chart a course through a series of obstacles and decide which skills to invoke to clear them.
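As a rough sketch of how such a three-network pipeline fits together (the function names and data structures below are hypothetical stand-ins, not ETH Zurich’s code), one tick of the control loop might look like this in Python:

```python
# Hypothetical skeleton of the perception -> navigation -> locomotion
# pipeline. In the real system each stage is a trained neural network;
# plain functions stand in for them here.

def perceive(camera_frames, lidar_points):
    """Fuse camera and lidar input into a local terrain picture."""
    return {"obstacle_ahead": len(lidar_points) > 0}

def navigate(terrain, goal):
    """Chart a course and pick which locomotion skill to invoke."""
    skill = "jump" if terrain["obstacle_ahead"] else "walk"
    return {"next_waypoint": goal, "skill": skill}

def locomote(plan):
    """Turn the selected skill into low-level joint commands."""
    return f"executing '{plan['skill']}' toward {plan['next_waypoint']}"

# One tick of the loop, repeated many times per second onboard.
terrain = perceive(camera_frames=[], lidar_points=[(1.0, 0.2, 0.1)])
plan = navigate(terrain, goal=(10.0, 0.0))
print(locomote(plan))  # executing 'jump' toward (10.0, 0.0)
```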

“We replace the standard software of most robots with neural networks,” Nikita Rudin, one of the paper’s authors, an engineer at Nvidia, and a PhD student at ETH Zurich, told New Scientist. “This allows the robot to achieve behaviors that were not possible otherwise.”

One of the most impressive aspects of the research is that the robot was trained in simulation. A major bottleneck in robotics is gathering enough real-world data for robots to learn from. Simulations can help gather data much more quickly by putting many virtual robots through trials in parallel and at much greater speed than is possible with physical robots.

But translating skills learned in simulation to the real world is tricky due to the inevitable gap between simple virtual worlds and the hugely complex physical world. Training a robotic system that can operate autonomously in unseen environments both indoors and outdoors is a major achievement.

The training process relied purely on reinforcement learning—effectively trial and error—rather than human demonstrations, which allowed the researchers to train the AI model on a very large number of randomized scenarios rather than having to label each manually.

Another impressive feature is that everything runs on chips installed in the robot, rather than relying on external computers. And as well as being able to tackle a variety of different scenarios, the researchers showed ANYmal could recover from falls or slips to complete the obstacle course.

The researchers say the system’s speed and adaptability suggest robots trained in this way could one day be used for search and rescue missions in unpredictable, hard-to-navigate environments like rubble and collapsed buildings.

The approach does have limitations though. The system was trained to deal with specific kinds of obstacles, even if they varied in size and configuration. Getting it to work in more unstructured environments would require much more training in more diverse scenarios to develop a broader palette of skills. And that training is both complicated and time-consuming.

But the research is nonetheless an indication that robots are becoming increasingly capable of operating in complex, real-world environments. That suggests they could soon be a much more visible presence all around us.

Image Credit: ETH Zurich

What Is a GPU? The Chips Powering the AI Boom, and Why They’re Worth Trillions


As the world rushes to make use of the latest wave of AI technologies, one piece of high-tech hardware has become a surprisingly hot commodity: the graphics processing unit, or GPU.

A top-of-the-line GPU can sell for tens of thousands of dollars, and leading manufacturer Nvidia has seen its market valuation soar past $2 trillion as demand for its products surges.

GPUs aren’t just for high-end AI systems, either. There are less powerful GPUs in phones, laptops, and gaming consoles, too.

By now you’re probably wondering: What is a GPU, really? And what makes them so special?

What Is a GPU?

GPUs were originally designed primarily to quickly generate and display complex 3D scenes and objects, such as those involved in video games and computer-aided design software. Modern GPUs also handle tasks such as decompressing video streams.

The “brain” of most computers is a chip called a central processing unit (CPU). CPUs can be used to generate graphical scenes and decompress videos, but they are typically far slower and less efficient at these tasks compared to GPUs. CPUs are better suited for general computation tasks, such as word processing and browsing web pages.

How Are GPUs Different From CPUs?

A typical modern CPU is made up of between 8 and 16 “cores,” each of which can process complex tasks in a sequential manner.

GPUs, on the other hand, have thousands of relatively small cores, which are designed to all work at the same time (“in parallel”) to achieve fast overall processing. This makes them well-suited for tasks that require a large number of simple operations which can be done at the same time, rather than one after another.

Traditional GPUs come in two main flavors.

First, there are standalone chips, which often come in add-on cards for large desktop computers. Second are GPUs combined with a CPU in the same chip package, which are often found in laptops and game consoles such as the PlayStation 5. In both cases, the CPU controls what the GPU does.

Why Are GPUs So Useful for AI?

It turns out GPUs can be repurposed to do more than generate graphical scenes.

Many of the machine learning techniques behind artificial intelligence, such as deep neural networks, rely heavily on various forms of matrix multiplication.

This is a mathematical operation where very large sets of numbers are multiplied and summed together. These operations are well-suited to parallel processing and hence can be performed very quickly by GPUs.
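A small NumPy example (mine, not the article’s) shows why: every element of a matrix product is an independent dot product, so thousands of them can be computed simultaneously.

```python
import numpy as np

a = np.random.rand(64, 64).astype(np.float32)
b = np.random.rand(64, 64).astype(np.float32)

# Sequential view: one output element at a time, the way a single
# CPU core would work through the job.
c_loop = np.empty((64, 64), dtype=np.float32)
for i in range(64):
    for j in range(64):
        c_loop[i, j] = np.dot(a[i, :], b[:, j])

# Parallel-friendly view: the whole grid of dot products in one call.
# Because no output element depends on any other, a GPU can hand each
# one to a different core.
c_fast = a @ b
assert np.allclose(c_loop, c_fast, atol=1e-3)
```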

What’s Next for GPUs?

The number-crunching prowess of GPUs is steadily increasing due to the rise in the number of cores and their operating speeds. These improvements are primarily driven by advances in chip manufacturing by companies such as TSMC in Taiwan.

The size of individual transistors—the basic components of any computer chip—is decreasing, allowing more transistors to be placed in the same amount of physical space.

However, that is not the entire story. While traditional GPUs are useful for AI-related computation tasks, they are not optimal.

Just as GPUs were originally designed to accelerate computers by providing specialized processing for graphics, there are accelerators that are designed to speed up machine learning tasks. These accelerators are often referred to as data center GPUs.

Some of the most popular accelerators, made by companies such as AMD and Nvidia, started out as traditional GPUs. Over time, their designs evolved to better handle various machine learning tasks, for example by supporting the more efficient “brain float” number format.
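For example, bfloat16 (“brain float”) keeps float32’s full exponent range but only 7 mantissa bits, cutting memory traffic in half at the cost of precision. Here’s a minimal Python sketch of the idea (real hardware typically rounds to nearest rather than truncating):

```python
import struct

def bfloat16_truncate(x: float) -> float:
    # bfloat16 is essentially the top 16 bits of a float32: 1 sign bit,
    # 8 exponent bits (same range as float32), and 7 mantissa bits.
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    (coarse,) = struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))
    return coarse

print(bfloat16_truncate(3.14159265))  # 3.140625: coarser, but same range
```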

Other accelerators, such as Google’s tensor processing units and Tenstorrent’s Tensix cores, were designed from the ground up to speed up deep neural networks.

Data center GPUs and other AI accelerators typically come with significantly more memory than traditional GPU add-on cards, which is crucial for training large AI models. Generally, the larger the AI model, the more capable and accurate it tends to be.

To further speed up training and handle even larger AI models, such as ChatGPT, many data center GPUs can be pooled together to form a supercomputer. This requires more complex software to properly harness the available number crunching power. Another approach is to create a single very large accelerator, such as the “wafer-scale processor” produced by Cerebras.

Are Specialized Chips the Future?

CPUs have not been standing still either. Recent CPUs from AMD and Intel have built-in low-level instructions that speed up the number-crunching required by deep neural networks. This additional functionality mainly helps with “inference” tasks—that is, using AI models that have already been developed elsewhere.

To train the AI models in the first place, large GPU-like accelerators are still needed.

It is possible to create ever more specialized accelerators for specific machine learning algorithms. Recently, for example, a company called Groq has produced a “language processing unit” (LPU) specifically designed for running large language models along the lines of ChatGPT.

However, creating these specialized processors takes considerable engineering resources. History shows the usage and popularity of any given machine learning algorithm tends to peak and then wane—so expensive specialized hardware may become quickly outdated.

For the average consumer, however, that’s unlikely to be a problem. The GPUs and other chips in the products you use are likely to keep quietly getting faster.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Nvidia

Colossal Creates Elephant Stem Cells for the First Time in Quest to Revive the Woolly Mammoth


The last woolly mammoth roamed the vast arctic tundra 4,000 years ago. Their genes still live on in a majestic animal today—the Asian elephant.

With 99.6 percent similarity in their genetic makeup, Asian elephants are the perfect starting point for a bold plan to bring the mammoth—or something close to it—back from extinction. The project, launched by biotechnology company Colossal in 2021, raised eyebrows for its moonshot goal.

The overall playbook sounds straightforward.

The first step is to sequence and compare the genomes of mammoth and elephant. Next, scientists will identify the genes behind the physical traits—long hair, fatty deposits—that allowed mammoths to thrive in freezing temperatures and then insert them into elephant cells using gene editing. Finally, the team will transfer the nucleus—which houses DNA—from the edited cells into an elephant egg and implant the embryo into a surrogate.

The problem? Asian elephants are endangered, and their cells—especially eggs—are hard to come by.

Last week, the company reported a major workaround. For the first time, they transformed elephant skin cells into stem cells, each with the potential to become any cell or tissue in the body.

The advance makes it easier to validate gene editing results in the lab before committing to a potential pregnancy—which lasts up to 22 months for elephants. Scientists could, for example, coax the engineered elephant stem cells to become hair cells and test for gene edits that give the mammoth its iconic thick, warm coat.

These induced pluripotent stem cells, or iPSCs, have been especially hard to make from elephant cells. The animals “are a very special species and we have only just begun to scratch the surface of their fundamental biology,” said Dr. Eriona Hysolli, who heads up biosciences at Colossal, in a press release.

Because the approach only needs a skin sample from an Asian elephant, it goes a long way to protecting the endangered species. The technology could also support conservation for living elephants by providing breeding programs with artificial eggs made from skin cells.

“Elephants might get the ‘hardest to reprogram’ prize,” said Dr. George Church, a Harvard geneticist and Colossal cofounder, “but learning how to do it anyway will help many other studies, especially on endangered species.”

Turn Back the Clock

Nearly two decades ago, Japanese biologist Dr. Shinya Yamanaka revolutionized biology by restoring mature cells to a stem cell-like state.

First demonstrated in mice, the Nobel Prize-winning technique requires only four proteins, together called the Yamanaka factors. The reprogrammed cells, often derived from skin cells, can develop into a range of tissues with further chemical guidance.

Induced pluripotent stem cells (iPSCs), as they’re called, have transformed biology. They’re critical to the process of building brain organoids—miniature balls of neurons that spark with activity—and can be coaxed into egg cells or models of early human embryos.

The technology is well-established for mice and humans. Not so for elephants. “In the past, a multitude of attempts to generate elephant iPSCs have not been fruitful,” said Hysolli.

Most elephant cells died when treated with the standard recipe. Others turned into “zombie” senescent cells—living but unable to perform their usual biological functions—or had little change from their original identity.

Further sleuthing found the culprit: a protein called TP53. Known for its ability to fight off cancer, the protein is often dubbed the genetic gatekeeper. When the gene for TP53 is turned on, the protein urges pre-cancerous cells to self-destruct without harming their neighbors.

Unfortunately, TP53 also hinders iPSC reprogramming. Some of the Yamanaka factors mimic the first stages of cancer growth, which could cause edited cells to self-destruct. Elephants have a hefty 29 copies of the “protector” gene. Together, they could easily squash cells with mutated DNA, including those that have had their genes edited.

“We knew p53 was going to be a big deal,” Church told the New York Times.

To get around the gatekeeper, the team devised a chemical cocktail to inhibit TP53 production. With a subsequent dose of the reprogramming factors, they were able to make the first elephant iPSCs out of skin cells.

A series of tests showed the transformed cells looked and behaved as expected. They had genes and protein markers often seen in stem cells. When allowed to further develop into a cluster of cells, they formed a three-layered structure critical for early embryo development.

“We’ve been really waiting for these things desperately,” Church told Nature. The team published their results, which have not yet been peer-reviewed, on the preprint server bioRxiv.

Long Road Ahead

The company’s current playbook for bringing back the mammoth relies on cloning technologies, not iPSCs.

But the cells are valuable as proxies for elephant egg cells or even embryos, allowing the scientists to continue their work without harming endangered animals.

They may, for example, transform the new stem cells into egg or sperm cells—a feat so far only achieved in mice—for further genetic editing. Another idea is to directly transform them into embryo-like structures equipped with mammoth genes.

The company is also looking into developing artificial wombs to help nurture any edited embryos and potentially bring them to term. In 2017, an artificial womb kept a premature lamb alive and developing, and artificial wombs are now moving toward human trials. These systems would lessen the need for elephant surrogates and avoid putting their natural reproductive cycles at risk.

As the study is a preprint, its results haven’t yet been vetted by other experts in the field. Many questions remain. For example, do the reprogrammed cells maintain their stem cell status? Can they be transformed into multiple tissue types on demand?

Reviving the mammoth is Colossal’s ultimate goal. But Dr. Vincent Lynch at the University at Buffalo, who has long tried to make iPSCs from elephants, thinks the results could have a broader reach.

Elephants are remarkably resistant to cancer. No one knows why. Because the study’s iPSCs were made by suppressing TP53, a cancer-protective gene, they could help scientists identify the genetic code that allows elephants to fight tumors and potentially inspire new treatments for us as well.

Next, the team hopes to recreate mammoth traits—such as long hair and fatty deposits—in cell and animal models made from gene-edited elephant cells. If all goes well, they’ll employ a technique like the one used to clone Dolly the sheep to birth the first calves.

Whether these animals can be called mammoths is still up for debate. Their genome won’t exactly match the extinct species. Further, animal biology and behavior strongly depend on interactions with the environment. Our climate has changed dramatically since mammoths went extinct 4,000 years ago. The Arctic tundra—their old home—is rapidly melting. Can the resurrected animals adjust to an environment they weren’t adapted to roam?

Animals also learn from each other. Without a living mammoth to show a calf how to be a mammoth in its natural habitat, it may adopt a completely different set of behaviors.

Colossal has a general plan to tackle these difficult questions. In the meantime, the work will help the project make headway without putting elephants at risk, according to Church.

“This is a momentous step,” said Ben Lamm, cofounder and CEO of Colossal. “Each step brings us closer to our long-term goals of bringing back this iconic species.”

Image Credit: Colossal Biosciences

Russia and China Want to Build a Nuclear Power Plant on the Moon


Supporting any future settlement on the moon would require considerable amounts of energy. Russia and China think a nuclear power plant is the best option, and they have plans to build one by the mid-2030s.

Lunar exploration is back in fashion these days, with a host of national space agencies as well as private companies launching missions to our nearest astronomical neighbor and announcing plans to build everything from human settlements to water mining operations and telescopes on its surface.

These ambitious plans face a major challenge though—how to power all this equipment. The go-to energy source in space is solar power, but lunar nights last 14 days, so unless we want to haul huge numbers of batteries along for the ride, it won’t suffice for more permanent installations.

That’s why Russia and China are currently working on a plan to develop a nuclear power plant that could support the pair’s ambitious joint exploration program, Yuri Borisov, the head of Russia’s space agency Roscosmos, said during a recent public event.

“Today we are seriously considering a project—somewhere at the turn of 2033-2035—to deliver and install a power unit on the lunar surface together with our Chinese colleagues,” he said, according to Reuters.

Borisov provided few details other than saying that one of Russia’s main contributions to the countries’ lunar plans was its expertise in “nuclear space energy.” He added that they were also developing a nuclear-powered spaceship designed to ferry cargo around in orbit.

“We are indeed working on a space tugboat,” he said. “This huge, cyclopean structure that would be able, thanks to a nuclear reactor and high-power turbines…to transport large cargoes from one orbit to another, collect space debris, and engage in many other applications.”

Whether these plans will ever come to fruition remains unclear, though, considering the increasingly dilapidated state of Russia’s space industry. Last year, the country’s Luna-25 mission, its first attempt to revisit the moon in decades, smashed into the lunar surface after experiencing problems in orbit.

Russia and China are supposed to be working together to build the so-called International Lunar Research Station at the moon’s south pole, with each country sending half a dozen spacecraft to complete the facility. But in a recent presentation on the project by senior Chinese space scientists there was no mention of Russia’s missions, according to the South China Morning Post.

The idea of launching nuclear material into space may sound like an outlandish plan, but Russia and China are far from alone. In 2022, NASA awarded three companies $5 million contracts to investigate the feasibility of a small nuclear reactor that could support the agency’s moon missions. In January, it announced it was extending the contracts, targeting a working reactor ready for launch by the early 2030s.

“The lunar night is challenging from a technical perspective, so having a source of power such as this nuclear reactor, which operates independent of the sun, is an enabling option for long-term exploration and science efforts on the moon,” NASA’s Trudy Kortes said in a statement.

NASA has given the companies plenty of leeway to design their reactors, as long as they weigh under six metric tons and can produce 40 kilowatts of electricity, enough to power 33 homes back on Earth. Crucially, they must be able to run for a decade without any human intervention.
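As a rough sanity check on those figures (assuming the commonly cited average draw of a US home, about 1.2 kilowatts):

$$\frac{40\ \text{kW}}{33\ \text{homes}} \approx 1.2\ \text{kW per home}$$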

The UK Space Agency has also given engineering giant Rolls-Royce £2.9 million ($3.7 million) to research how nuclear power could help future manned moon bases. The company unveiled a concept model of a micro nuclear reactor at the UK Space Conference last November and says it hopes to have a working version ready to send to the moon by the early 2030s.

While nuclear power’s environmental impacts and high costs are causing its popularity to fade back on Earth, it seems like it may have a promising future further out in the solar system.

Image Credit: LRO recreation of Apollo 8 Earthrise / NASA

This Week’s Awesome Tech Stories From Around the Web (Through March 9)

TECH

These Companies Have a Plan to Kill Apps
Julian Chokkattu | Wired
“Everyone wants to kill the app. There’s a wave of companies building so-called app-less phones and gadgets, leveraging artificial intelligence advancements to create smarter virtual assistants that can handle all kinds of tasks through one portal, bypassing the need for specific apps for a particular function. We might be witnessing the early stages of the first major smartphone evolution since the introduction of the iPhone—or an AI-hype-fueled gimmick.”

ARTIFICIAL INTELLIGENCE

Anthropic Sets a New Gold Standard: Your Move, OpenAI
Maxwell Zeff | Gizmodo
“Claude 3 most notably outperforms ChatGPT and Gemini in coding, one of AI’s most popular early use cases. Claude Opus scores an 85% success rate in zero-shot coding, compared to GPT-4’s 67% and Gemini’s 74%. Claude also outperforms the competition when it comes to reasoning, math problem-solving, and basic knowledge (MMLU). However, [Claude] Sonnet and [Claude] Haiku, which are cheaper and faster, are competitive with OpenAI and Google’s most advanced models as well.”

ARTIFICIAL INTELLIGENCE

Why Most AI Benchmarks Tell Us So Little
Kyle Wiggers | TechCrunch
“On Tuesday, startup Anthropic released a family of generative AI models that it claims achieve best-in-class performance. …But what metrics are they talking about? When a vendor says a model achieves state-of-the-art performance or quality, what’s that mean, exactly? Perhaps more to the point: Will a model that technically ‘performs’ better than some other model actually feel improved in a tangible way? On that last question, not likely.”

FUTURE OF WORK

AI Prompt Engineering Is Dead
Dina Genkina | IEEE Spectrum
“‘Every business is trying to use it for virtually every use case that they can imagine,’ [Austin] Henley says. To do so, they’ve enlisted the help of prompt engineers professionally. However, new research suggests that prompt engineering is best done by the model itself, and not by a human engineer. This has cast doubt on prompt engineering’s future—and increased suspicions that a fair portion of prompt-engineering jobs may be a passing fad, at least as the field is currently imagined.”

COMPUTING

D-Wave Says Its Quantum Computers Can Solve Otherwise Impossible Tasks
Matthew Sparkes | New Scientist
“Quantum computing firm D-Wave says its machines are the first to achieve ‘computational supremacy’ by solving a practically useful problem that would otherwise take millions of years on an ordinary supercomputer. …However, outside observers are more cautious.”

TRANSPORTATION

California Gives Waymo the Green Light to Expand Robotaxi Operations
Wes Davis | The Verge
“Waymo is now allowed to operate its self-driving robotaxis on highways in parts of Los Angeles and in the Bay Area following a California regulator’s approval of its expansion plans on Friday. This means the company’s cars will now be allowed to drive at up to 65mph on local roads and highways in approved areas.”

SPACE

Voyager 1, First Craft in Interstellar Space, May Have Gone Dark
Orlando Mayorquin | The New York Times
“Voyager 1 discovered active volcanoes, moons and planetary rings, proving along the way that Earth and all of humanity could be squished into a single pixel in a photograph, a ‘pale blue dot,’ as the astronomer Carl Sagan called it. It stretched a four-year mission into the present day, embarking on the deepest journey ever into space. Now, it may have bid its final farewell to that faraway dot.”

ENVIRONMENT

Pulling Gold Out of E-Waste Suddenly Becomes Super-Profitable
Paul McClure | New Atlas
“A new method for recovering high-purity gold from discarded electronics is paying back $50 for every dollar spent, according to researchers—who found the key gold-filtering substance in cheesemaking, of all places. …’The fact I love the most is that we’re using a food industry byproduct to obtain gold from electronic waste,’ said Raffaele Mezzenga, the study’s corresponding author. ‘You can’t get more sustainable than that!'”

ETHICS

5 Years After San Francisco Banned Face Recognition, Voters Ask for More Surveillance
Lauren Goode and Tom Simonite | Wired
“San Francisco made history in 2019 when its Board of Supervisors voted to ban city agencies including the police department from using face recognition. About two dozen other US cities have since followed suit. But on Tuesday, San Francisco voters appeared to turn against the idea of restricting police technology, backing a ballot proposition that will make it easier for city police to deploy drones and other surveillance tools.”

DIGITAL MEDIA

Researchers Tested Leading AI Models for Copyright Infringement Using Popular Books, and GPT-4 Performed Worst
Hayden Field | CNBC
“The four models it tested were OpenAI’s GPT-4, Anthropic’s Claude 2, Meta’s Llama 2 and Mistral AI’s Mixtral. ‘We pretty much found copyrighted content across the board, across all models that we evaluated, whether it’s open source or closed source,’ Rebecca Qian, Patronus AI’s cofounder and CTO, who previously worked on responsible AI research at Meta, told CNBC in an interview.”

SPACE

SpaceX Just Showed Us What Every Day Could Be Like in Spaceflight
Stephen Clark | Ars Technica
“Between Sunday night and Monday night, SpaceX teams in Texas, Florida, and California supervised three Falcon 9 rocket launches and completed a full dress rehearsal ahead of the next flight of the company’s giant Starship launch vehicle. This was a remarkable sequence of events, even for SpaceX, which has launched a mission at an average rate of once every three days since the start of the year. We’ve reported on this before, but it’s worth reinforcing that no launch provider, commercial or government, has ever operated at this cadence.”

AUTOMATION

AI Losing Its Grip on Fast Food Drive-Thru Lanes
Angela L. Pagán | The Takeout
“Presto’s technology does use AI voice recognition to take down orders in the drive-thru lane, but a significant portion of the process still requires an actual employee’s involvement as well. The bot takes down the order from the customer, but it is still the responsibility of the employees to input the order and ensure its accuracy. The voice assistant technology has gone through multiple iterations, but even its most advanced version is still only completing 30% of orders without the help of a human being.”

Image Credit: Pawel Czerwinski / Unsplash

This AI Can Design the Machinery of Life With Atomic Precision

Proteins are social creatures. They’re also chameleons. Depending on a cell’s needs, they rapidly transform in structure and grab onto other biomolecules in an intricate dance.

It’s not molecular dinner theater. Rather, these partnerships are the heart of biological processes. Some turn genes on or off. Others nudge aging “zombie” cells to self-destruct or keep our cognition and memory in tip-top shape by reshaping brain networks.

These connections have already inspired a wide range of therapies—and new therapies could be accelerated by AI that can model and design biomolecules. But previous AI tools solely focused on proteins and their interactions, casting their non-protein partners aside.

This week, a study in Science expanded AI’s ability to model a wide variety of other biomolecules that physically grab onto proteins, including the iron-containing small molecules that form the center of oxygen carriers.

Led by Dr. David Baker at the University of Washington, the new AI broadens the scope of biomolecular design. Dubbed RoseTTAFold All-Atom, it builds upon a previous protein-only system to incorporate a myriad of other biomolecules, such as DNA and RNA. It also adds small molecules—for example, iron—that are integral to certain protein functions.

The AI learned only from the sequences and chemical structures of the components, without any knowledge of their 3D shapes, but it can map out complex molecular machines at the atomic level.

In the study, when paired with generative AI, RoseTTAFold All-Atom created proteins that easily grabbed onto a heart disease medication. The algorithm also generated proteins that regulate heme, an iron-rich molecule that helps blood carry oxygen, and bilin, a chemical in plants and bacteria that absorbs light for their metabolism.

These examples are just proofs of concept. The team is releasing RoseTTAFold All-Atom to the public so scientists can create multiple interacting bio-components with far more complexity than protein complexes alone. In turn, the creations could lead to new therapies.

“Our goal here was to build an AI tool that could generate more sophisticated therapies and other useful molecules,” said study author Woody Ahern in a press release.

Dream On

In 2020, Google DeepMind’s AlphaFold and Baker Lab’s RoseTTAFold solved the protein structure prediction problem that had baffled scientists for half a century and ushered in a new era of protein research. Updated versions of these algorithms mapped all protein structures both known and unknown to science.

Next, generative AI—the technology behind OpenAI’s ChatGPT and Google’s Gemini—sparked a creative frenzy of designer proteins with an impressive range of activity. Some newly generated proteins regulated a hormone that kept calcium levels in check. Others led to artificial enzymes or proteins that could readily change their shape like transistors in electronic circuits.

By hallucinating a new world of protein structures, generative AI has the potential to dream up a generation of synthetic proteins to regulate our biology and health.

But there’s a problem. Designer protein AI models have tunnel vision: They are too focused on proteins.

When envisioning life’s molecular components, proteins, DNA, and fatty acids come to mind. But inside a cell, these structures are often held together by small molecules that mesh with surrounding components, together forming a functional bio-assembly.

One example is heme, a ring-like molecule that incorporates iron. Heme is the basis of hemoglobin in red blood cells, which shuttles oxygen throughout the body and grabs onto surrounding protein “hooks” using a variety of chemical bonds.

Unlike proteins or DNA, which can be modeled as a string of molecular “letters,” small molecules and their interactions are hard to capture. But they’re critical to biology’s complex molecular machines and can dramatically alter their functions.

Which is why, in their new study, the researchers aimed to broaden AI’s scope beyond proteins.

“We set out to develop a structure prediction method capable of generating 3D coordinates for all atoms” for a biological molecule, including proteins, DNA, and other modifications, the authors wrote in their paper.

Tag Team

The team began by modifying a previous protein modeling AI to incorporate other molecules.

The AI works on three levels: The first analyzes a protein’s one-dimensional “letter” sequence, like words on a page. Next, a 2D map tracks how far each protein “word” is from another. Finally, 3D coordinates—a bit like GPS—map the overall structure of the protein.

Then comes the upgrade. To incorporate small molecule information into the model, the team added data about atomic sites and chemical connections into the first two layers.

In the third, they focused on chirality—that is, whether a chemical’s structure is left- or right-handed. Like our hands, chemicals can have mirrored structures with vastly differing biological consequences, and only the correct “handedness” of a chemical can fit a given bio-assembly “glove.”
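To make the three levels and the small-molecule upgrade concrete, here is a minimal sketch of how such a representation might be organized in code. The class and field names are hypothetical illustrations, not RoseTTAFold All-Atom’s actual data structures:

```python
from dataclasses import dataclass, field
from typing import Optional
import numpy as np

@dataclass
class BioAssembly:
    """Toy container for the three levels described above, plus the
    small-molecule extensions. All names are illustrative only."""
    # Level 1: the protein's 1D "letter" sequence, like words on a page.
    sequence: str = ""
    # Level 2: a 2D map of pairwise distances between residues.
    distance_map: Optional[np.ndarray] = None
    # Level 3: 3D coordinates for every atom, a bit like GPS positions.
    coords: Optional[np.ndarray] = None
    # Small-molecule upgrade: atomic elements and chemical bonds...
    ligand_atoms: list = field(default_factory=list)
    ligand_bonds: list = field(default_factory=list)
    # ...plus chirality flags ("R" or "S") for handed atoms.
    chirality: dict = field(default_factory=dict)

# A toy heme-like example (not real chemistry): a ten-residue peptide
# plus a four-atom ligand with one chiral center.
assembly = BioAssembly(
    sequence="MKTAYIAKQR",
    distance_map=np.zeros((10, 10)),
    coords=np.zeros((14, 3)),  # 10 residues + 4 ligand atoms
    ligand_atoms=["Fe", "N", "N", "C"],
    ligand_bonds=[(0, 1), (0, 2), (1, 3)],
    chirality={3: "R"},
)
print(assembly.sequence, len(assembly.ligand_atoms))
```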

RoseTTAFold All-Atom was then trained on multiple datasets with hundreds of thousands of datapoints describing proteins, small molecules, and their interactions. Eventually, it learned general properties of small molecules useful for building plausible protein assemblies. As a sanity check, the team also added a “confidence gauge” to identify high-quality predictions—those that lead to stable and functional bio-assemblies.

Unlike previous protein-only AI models, RoseTTAFold All-Atom “can model full biomolecular systems,” wrote the team.

In a series of tests, the upgraded model outperformed previous methods when learning to “dock” small molecules onto a given protein—a key component of drug discovery—by rapidly predicting interactions between proteins and non-protein molecules.

Brave New World

Incorporating small molecules opens a whole new level of custom protein design.

As a proof of concept, the team meshed RoseTTAFold All-Atom with a generative AI model they had previously developed and designed protein partners for three different small molecules.

The first was digoxigenin, which is used to treat heart diseases but can have side effects. A protein that grabs onto it reduces toxicity. Even without prior knowledge of the molecule, the AI designed several protein binders that tempered digoxigenin levels when tested in cultured cells.

The AI also designed proteins that bind to heme, a small molecule critical for oxygen transfer in red blood cells, and bilin, which helps a variety of creatures absorb light.

Unlike previous methods, the team explained, the AI can “readily generate novel proteins” that grab onto small molecules without any expert knowledge.

It can also make highly accurate predictions about the strength of connections between proteins and small molecules at the atomic level, making it possible to rationally build a whole new universe of complex biomolecular structures.

“By empowering scientists everywhere to generate biomolecules with unprecedented precision, we’re opening the door to groundbreaking discoveries and practical applications that will shape the future of medicine, materials science, and beyond,” said Baker.

Image Credit: Ian C. Haydon

A Google AI Watched 30,000 Hours of Video Games—Now It Makes Its Own

AI continues to generate plenty of light and heat. The best models in text and images—now commanding subscriptions and being woven into consumer products—are competing for inches. OpenAI, Google, and Anthropic are all, more or less, neck and neck.

It’s no surprise then that AI researchers are looking to push generative models into new territory. As AI requires prodigious amounts of data, one way to forecast where things are going next is to look at what data is widely available online, but still largely untapped.

Video, of which there is plenty, is an obvious next step. Indeed, last month, OpenAI previewed a new text-to-video AI called Sora that stunned onlookers.

But what about video…games?

Ask and Receive

It turns out there are quite a few gamer videos online. Google DeepMind says it trained a new AI, Genie, on 30,000 hours of curated video footage showing gamers playing simple platformers—think early Nintendo games—and now it can create examples of its own.

Genie turns a simple image, photo, or sketch into an interactive video game.

Given a prompt, say a drawing of a character and its surroundings, the AI can then take input from a player to move the character through its world. In a blog post, DeepMind showed Genie’s creations navigating 2D landscapes, walking around or jumping between platforms. Like a snake eating its tail, some of these worlds were even sourced from AI-generated images.

In contrast to traditional video games, Genie generates these interactive worlds frame by frame. Given a prompt and command to move, it predicts the most likely next frames and creates them on the fly. It even learned to include a sense of parallax, a common feature in platformers where the foreground moves faster than the background.

Notably, the AI’s training didn’t include labels. Rather, Genie learned to correlate input commands—like, go left, right, or jump—with in-game movements simply by observing examples in its training. That is, when a character in a video moved left, there was no label linking the command to the motion. Genie figured that part out by itself. That means, potentially, future versions could be trained on as much applicable video as there is online.
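In code terms, that frame-by-frame loop might look something like the sketch below; predict_next_frame is a toy stand-in for Genie’s learned dynamics model, and the action encoding is invented for illustration:

```python
import numpy as np

def predict_next_frame(frames, action):
    """Stand-in for the learned model: given the frame history and a
    latent action (0=left, 1=right, 2=jump), return a plausible next
    frame. Here we just shift pixels sideways to fake motion."""
    shift = {0: -1, 1: 1, 2: 0}[action]
    return np.roll(frames[-1], shift, axis=1)

# Start from a single prompt image: a 64x64 grayscale "sketch."
frames = [np.random.rand(64, 64)]

# Generate the world one frame at a time: the player picks an action
# at each step, and the model predicts the frame that should follow.
for action in [1, 1, 2, 0]:  # right, right, jump, left
    frames.append(predict_next_frame(frames, action))

print(f"Generated {len(frames) - 1} frames from 1 prompt image")
```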

The AI is an impressive proof of concept, but it’s still very early in development, and DeepMind isn’t planning to make the model public yet.

The games themselves are pixellated worlds streaming by at a plodding one frame per second. By comparison, contemporary video games can hit 60 or 120 frames per second. Also, like all generative algorithms, Genie generates strange or inconsistent visual artifacts. And it’s prone to hallucinating “unrealistic futures,” the team wrote in their paper describing the AI.

That said, there are a few reasons to believe Genie will improve from here.

Whipping Up Worlds

Because the AI can learn from unlabeled online videos and is still a modest size—just 11 billion parameters—there’s ample opportunity to scale up. Bigger models trained on more information tend to improve dramatically. And with a growing industry focused on inference—the process by which a trained AI performs tasks, like generating images or text—it’s likely to get faster.

DeepMind says Genie could help people, like professional developers, make video games. But like OpenAI—which believes Sora is about more than videos—the team is thinking bigger. The approach could go well beyond video games.

One example: AI that can control robots. The team trained a separate model on video of robotic arms completing various tasks. The model learned to manipulate the robots and handle a variety of objects.

DeepMind also said Genie-generated video game environments could be used to train AI agents. It’s not a new strategy. In a 2021 paper, another DeepMind team outlined a video game called XLand that was populated by AI agents and an AI overlord generating tasks and games to challenge them. The idea that the next big step in AI will require algorithms that can train one another or generate synthetic training data is gaining traction.

All this is the latest salvo in an intense competition between OpenAI and Google to show progress in AI. While others in the field, like Anthropic, are advancing multimodal models akin to GPT-4, Google and OpenAI also seem focused on algorithms that simulate the world. Such algorithms may be better at planning and interaction. Both will be crucial skills for the AI agents the organizations seem intent on producing.

“Genie can be prompted with images it has never seen before, such as real world photographs or sketches, enabling people to interact with their imagined virtual worlds—essentially acting as a foundation world model,” the researchers wrote in the Genie blog post. “We focus on videos of 2D platformer games and robotics but our method is general and should work for any type of domain, and is scalable to ever larger internet datasets.”

Similarly, when OpenAI previewed Sora last month, researchers suggested it might herald something more foundational: a world simulator. That is, both teams seem to view the enormous cache of online video as a way to train AI to generate its own video, yes, but also to more effectively understand and operate out in the world, online or off.

Whether this pays dividends, or is sustainable long term, is an open question. The human brain operates on a light bulb’s worth of power; generative AI uses up whole data centers. But it’s best not to underestimate the forces at play right now—in terms of talent, tech, brains, and cash—aiming to not only improve AI but make it more efficient.

We’ve seen impressive progress in text, images, audio, and all three together. Videos are the next ingredient being thrown in the pot, and they may make for an even more potent brew.

Image Credit: Google DeepMind

CRISPRed Pork May Be Coming to a Supermarket Near You

Many of us appreciate a juicy pork chop or a slab of brown sugar ham. Pork is the third most consumed meat in the US, with a buzzing industry to meet demand.

But for over three decades, pig farmers have been plagued by a pesky virus that causes porcine reproductive and respiratory syndrome (PRRS). Also known as blue ear—for its most notable symptom—the virus spreads through the air like SARS-CoV-2, the bug behind Covid-19.

Infected young pigs spike a high fever with persistent coughing and are unable to gain weight. In pregnant sows, the virus often causes miscarriage or the birth of dead or stunted piglets.

According to one estimate, blue ear costs pork producers in North America more than $600 million annually. While a vaccine is available, it’s not always effective at stopping viral spread.

What if pigs couldn’t be infected in the first place?

This month, a team at Genus, a British biotechnology company focused on animal genetics, introduced a new generation of CRISPR-edited pigs completely resistant to the PRRS virus. In early embryos, the team destroyed a protein the virus exploits to attack cells. The edited piglets were completely immune to the virus, even when housed with infected peers.

Here’s the kicker. Rather than using lab-bred pigs, the team edited four genetically diverse lines of commercial pigs bred for consumption. This isn’t just a lab experiment. “It’s actually doing it in the real world,” Dr. Rodolphe Barrangou at North Carolina State University, who was not involved in the work, told Science.

With PRRS virus being a massive headache, there’s high incentive for farmers to breed virus-resistant pigs at a commercial scale. Dr. Raymond Rowland at the University of Illinois, who helped establish the first PRRS-resistant pigs in the lab, said gene editing is a way “to create a more perfect life” for animals and farmers—and ultimately, to benefit consumers too.

“The pig never gets the virus. You don’t need vaccines; you don’t need a diagnostic test. It takes everything off the table,” he told MIT Technology Review.

Genus is seeking approval for widespread distribution from the US Food and Drug Administration (FDA), which it hopes will come by the end of the year.

An Achilles Heel

The push towards marketable CRISPR pork builds on pioneering results from almost a decade ago.

The PRRS virus silently emerged in the late 1980s, and its impact was almost immediate. Like Covid-19, the virus was completely new to science and pigs, resulting in massive die-offs and birth defects. Farmers quickly set up protocols to control its spread. These will likely sound familiar: Farmers began disinfecting everything, showering and changing into clean clothes, and quarantining any potentially infected pigs.

But the virus still slipped through these preventative measures and spread like wildfire. The only solution was to cull infected animals, costing their keepers profit and heartache. Scientists eventually developed multiple vaccines and drugs to control the virus, but these are costly and burdensome and none are completely effective.

In 2016, Dr. Randall Prather at the University of Missouri asked: What if we change the pig itself? With some molecular sleuthing, his team found the entryway for the virus—a protein called CD163 that dots the surface of a type of immune cell in the lung.

Using gene editing tool CRISPR-Cas9, the team tried multiple ways to destroy the protein—inserting genetic letters, deleting some, or swapping out chunks of the gene behind CD163. Eventually they discovered a way to disable it without otherwise harming the pigs.

When challenged with a hefty dose of the PRRS virus—roughly 100,000 infectious viral particles—non-edited pigs developed severe diarrhea and their muscles wasted away, even when given extra dietary supplements. In contrast, CRISPRed pigs showed no signs of infection, and their lungs maintained a healthy, normal structure. They also readily fought off the virus when housed in close quarters with infected peers.

While promising, the results were a laboratory proof of concept. Genus has now translated this work into the real world.

Trotting On

The team started with four genetic lines of pigs used in the commercial production of pork. Veterinarians carefully extracted eggs from females under anesthesia and fertilized them in an on-site in vitro fertilization (IVF) lab. They added CRISPR into the mix at the same time, with the goal of precisely snipping out a part of CD163 that directly interacts with the virus.

Two days later, the edited embryos were implanted into surrogates that gave birth to healthy gene-edited offspring. Not all piglets had the edited gene. The team next bred those that did have the edit and eventually established a line of pigs with both copies of the CD163 gene disabled. Although CRISPR-Cas9 can have off-target effects, the piglets seemed normal. They happily chomped away at food and gained weight at a steady pace.

The edited gene persisted through generations, meaning that farmers who breed the pigs can expect it to last. The company’s experimental stations already house 435 gene-edited, PRRS-resistant pigs, a population that could rapidly expand to thousands.

To reach supermarkets, however, Genus has regulatory hoops to jump through.

So far, the FDA has approved two genetically modified meats. One is the AquAdvantage salmon, which has a gene from another fish species to make it grow faster. Another is a GalSafe pig that is less likely to trigger allergic responses.

The agency is also tentatively considering other gene-edited farm animals under investigational food use authorization. In 2022, it declared that CRISPR-edited beef cattle—which have shorter fur coats—don’t pose a risk “to people, animals, the food supply and the environment.” But getting full approval will be a multi-year process with a hefty price tag.

“We have to go through the full, complete review system at FDA. There are no shortcuts for us,” said Clint Nesbitt, who heads regulatory affairs at the company. Meanwhile, Genus is also eyeing pork-loving Colombia and China as potential markets.

Once cleared, Genus hopes to widely distribute their pigs to the livestock industry. An easy way is to ship semen from gene-edited males to breed with natural females, which would produce PRRS-resistant piglets after a few generations—basically, selective breeding on the fast track.

In the end, consumers will have the final say. Genetically modified foods have historically been polarizing. But because CRISPRed pork mimics a gene mutation that could potentially occur naturally—even though it hasn’t been documented in the animals—the public may be more open to the new meat.

As the method heads towards approval, the team is considering a similar strategy for tackling other viral diseases in livestock, such as the flu (yes, pigs get it too).

“Applying CRISPR-Cas to eliminate a viral disease represents a major step toward improving animal health,” wrote the team.

Image Credit: Pascal Debrunner / Unsplash

Gravity Experiments on the Kitchen Table: Why a Tiny, Tiny Measurement May Be a Big Leap Forward for Physics

Just over a week ago, European physicists announced they had measured the strength of gravity on the smallest scale ever.

In a clever tabletop experiment, researchers at Leiden University in the Netherlands, the University of Southampton in the UK, and the Institute for Photonics and Nanotechnologies in Italy measured a force of around 30 attonewtons on a particle with just under half a milligram of mass. An attonewton is a billionth of a billionth of a newton, the standard unit of force.
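To get a feel for how faint that is, take the reported figures at face value. A 30-attonewton force acting on a mass of about 0.45 milligrams implies an acceleration of

$$a = \frac{F}{m} \approx \frac{30 \times 10^{-18}\ \text{N}}{0.45 \times 10^{-6}\ \text{kg}} \approx 7 \times 10^{-11}\ \text{m/s}^2,$$

roughly seven parts in a trillion of Earth’s surface gravity.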

The researchers say the work could “unlock more secrets about the universe’s very fabric” and may be an important step toward the next big revolution in physics.

But why is that? It’s not just the result: it’s the method, and what it says about a path forward for a branch of science critics say may be trapped in a loop of rising costs and diminishing returns.

Gravity

From a physicist’s point of view, gravity is an extremely weak force. This might seem like an odd thing to say. It doesn’t feel weak when you’re trying to get out of bed in the morning!

Still, compared with the other forces that we know about—such as the electromagnetic force that is responsible for binding atoms together and for generating light, and the strong nuclear force that binds the cores of atoms—gravity exerts a relatively weak attraction between objects.

And on smaller scales, the effects of gravity get weaker and weaker.

It’s easy to see the effects of gravity for objects the size of a star or planet, but it is much harder to detect gravitational effects for small, light objects.
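Newton’s law of gravitation makes this concrete. For two half-milligram particles held one millimeter apart (illustrative numbers, not the actual experiment’s geometry), the mutual attraction is only

$$F = \frac{G m_1 m_2}{r^2} = \frac{(6.67 \times 10^{-11}\ \mathrm{N\,m^2/kg^2})(0.5 \times 10^{-6}\ \mathrm{kg})^2}{(10^{-3}\ \mathrm{m})^2} \approx 1.7 \times 10^{-17}\ \text{N},$$

a few tens of attonewtons, which is exactly the scale the new experiment had to resolve.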

The Need to Test Gravity

Despite the difficulty, physicists really want to test gravity at small scales. This is because it could help resolve a century-old mystery in current physics.

Physics is dominated by two extremely successful theories.

The first is general relativity, which describes gravity and spacetime at large scales. The second is quantum mechanics, which is a theory of particles and fields—the basic building blocks of matter—at small scales.

These two theories are in some ways contradictory, and physicists don’t understand what happens in situations where both should apply. One goal of modern physics is to combine general relativity and quantum mechanics into a theory of “quantum gravity.”

One example of a situation where quantum gravity is needed is to fully understand black holes. These are predicted by general relativity—and we have observed huge ones in space—but tiny black holes may also arise at the quantum scale.

At present, however, we don’t know how to bring general relativity and quantum mechanics together to give an account of how gravity, and thus black holes, work in the quantum realm.

New Theories and New Data

A number of approaches to a potential theory of quantum gravity have been developed, including string theory, loop quantum gravity, and causal set theory.

However, these approaches are entirely theoretical. We currently don’t have any way to test them via experiments.

To empirically test these theories, we’d need a way to measure gravity at very small scales where quantum effects dominate.

Until recently, performing such tests was out of reach. It seemed we would need very large pieces of equipment: even bigger than the world’s largest particle accelerator, the Large Hadron Collider, which sends high-energy particles zooming around a 27-kilometer loop before smashing them together.

Tabletop Experiments

This is why the recent small-scale measurement of gravity is so important.

The experiment conducted jointly between the Netherlands and the UK is a “tabletop” experiment. It didn’t require massive machinery.

The experiment works by floating a particle in a magnetic field and then swinging a weight past it to see how it “wiggles” in response.

This is analogous to the way one planet “wiggles” when it swings past another.

Levitating the particle with magnets isolates it from many of the influences that make weak gravitational effects so hard to detect.

The beauty of tabletop experiments like this is they don’t cost billions of dollars, which removes one of the main barriers to conducting small-scale gravity experiments, and potentially to making progress in physics. (The latest proposal for a bigger successor to the Large Hadron Collider would cost $17 billion.)

Work to Do

Tabletop experiments are very promising, but there is still work to do.

The recent experiment comes close to the quantum domain, but doesn’t quite get there. The masses and forces involved will need to be even smaller to find out how gravity acts at this scale.

We also need to be prepared for the possibility that it may not be possible to push tabletop experiments this far.

There may yet be some technological limitation that prevents us from conducting experiments of gravity at quantum scales, pushing us back toward building bigger colliders.

Back to the Theories

It’s also worth noting some of the theories of quantum gravity that might be tested using tabletop experiments are very radical.

Some theories, such as loop quantum gravity, suggest space and time may disappear at very small scales or high energies. If that’s right, it may not be possible to carry out experiments at these scales.

After all, experiments as we know them are the kinds of things that happen at a particular place, across a particular interval of time. If theories like this are correct, we may need to rethink the very nature of experimentation so we can make sense of it in situations where space and time are absent.

On the other hand, the very fact we can perform straightforward experiments involving gravity at small scales may suggest that space and time are present after all.

Which will prove true? The best way to find out is to keep going with tabletop experiments, and to push them as far as they can go.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Garik Barseghyan / Pixabay

This Week’s Awesome Tech Stories From Around the Web (Through March 2)

ARTIFICIAL INTELLIGENCE

Google DeepMind’s New Generative Model Makes Super Mario-Like Games From Scratch
Will Douglas Heaven | MIT Technology Review
“OpenAI’s recent reveal of its stunning generative model Sora pushed the envelope of what’s possible with text-to-video. Now Google DeepMind brings us text-to-video games. The new model, called Genie, can take a short description, a hand-drawn sketch, or a photo and turn it into a playable video game in the style of classic 2D platformers like Super Mario Bros.”

ROBOTICS

Figure Rides the Humanoid Robot Hype Wave to $2.6B Valuation
Brian Heater | TechCrunch
“[On Thursday] Figure confirmed long-standing rumors that it’s been raising more money than God. The Bay Area-based robotics firm announced a $675 million Series B round that values the startup at $2.6 billion post-money. The lineup of investors is equally impressive. It includes Microsoft, OpenAI Startup Fund, Nvidia, Amazon Industrial Innovation Fund, Jeff Bezos (through Bezos Expeditions), Parkway Venture Capital, Intel Capital, Align Ventures and ARK Invest. It’s a mind-boggling sum of money for what remains a still-young startup, with an 80-person headcount. That last bit will almost certainly change with this round.”

SCIENCE

How First Contact With Whale Civilization Could Unfold
Ross Andersen | The Atlantic
“One night last winter, over drinks in downtown Los Angeles, the biologist David Gruber told me that human beings might someday talk to sperm whales. …Gruber said that they hope to record billions of the animals’ clicking sounds with floating hydrophones, and then to decipher the sounds’ meaning using neural networks. I was immediately intrigued. For years, I had been toiling away on a book about the search for cosmic civilizations with whom we might communicate. This one was right here on Earth.”

TRANSPORTATION

RIP Apple Car. This Is Why It Died
Aarian Marshall | Wired
“After a decade of rumors, secretive developments, executive entrances and exits, and pivots, Apple reportedly told employees yesterday that its car project, internally called ‘Project Titan,’ is no more. …’Prototypes are easy, volume production is hard, positive cash flow is excruciating,’ Tesla CEO Elon Musk tweeted a few years back. It’s a lesson that would-be car companies—as well as Tesla—seem to learn again and again. Even after a decade of work, Apple never quite got to the first step.”

TECH

Apple Revolutionized the Auto Industry Without Selling a Single Car
Matteo Wong | The Atlantic
“Apple is so big, and its devices so pervasive, that it didn’t need to sell a single vehicle in order to transform the automobile industry—not through batteries and engines, but through software. The ability to link your smartphone to your car’s touch screen, which Apple pioneered 10 years ago, is now standard. Virtually every leading car company has taken an Apple-inspired approach to technology, to such a degree that ‘smartphone on wheels’ has become an industry cliché. The Apple Car already exists, and you’ve almost certainly ridden in one.”

CRYPTOCURRENCY

Bitcoin Surges Toward All-Time High as Everyone Forgets What Happened Last Time
Matt Novak | Gizmodo
“Bitcoin’s price surged past $63,000 and then receded just a bit under on Wednesday, reaching a level the crypto coin hasn’t seen since November 2021. While it still has a little way to climb to reach an all-time high of $68,000, that level feels comfortably within reach. And if you’re feeling uneasy about the rally, given what happened two years ago, you’re not alone.”

ROBOTICS

High-Speed Humanoid Feels Like a Step Change in Robotics
Loz Blain | New Atlas
“You’ve seen a ton of videos of humanoid robots—but this one feels different. It’s Sanctuary’s Phoenix bot, with ‘the world’s best robot hands,’ working totally autonomously at near-human speeds—much faster than Tesla’s or Figure’s robots.”

COMPUTING

The Mindblowing Experience of a Chatbot That Answers Instantly
Steven Levy | Wired
“Groq makes chips optimized to speed up the large language models that have captured our imaginations and stoked our fears in the past year. …The experience of using a chatbot that doesn’t need even a few seconds to generate a response is shocking. I typed in a straightforward request, as you do with LLMs these days: Write a musical about AI and dentistry. I had hardly stopped typing before my screen was filled with a detailed blueprint for the two-act Mysteries of the Mouth.”

SECURITY

Here Come the AI Worms
Matt Burgess | Wired
“In a demonstration of the risks of connected, autonomous AI ecosystems, a group of researchers have created one of what they claim are the first generative AI worms—which can spread from one system to another, potentially stealing data or deploying malware in the process. ‘It basically means that now you have the ability to conduct or to perform a new kind of cyberattack that hasn’t been seen before,’ says Ben Nassi, a Cornell Tech researcher behind the research.”

Image Credit: Diego PH / Unsplash

Has the Lunar Gold Rush Begun? Why the First Private Moon Landing Matters

People have long dreamed of a bustling space economy stretching across the solar system. That vision came a step closer last week after a private spacecraft landed on the moon for the first time.

Since the start of the space race in the second half of last century, exploring beyond Earth’s orbit has been the domain of national space agencies. While private companies like SpaceX have revolutionized the launch industry, their customers are almost exclusively satellite operators seeking to provide imaging and communications services back on Earth.

But in recent years, a growing number of companies have started looking further afield, encouraged by NASA. The US space agency is eager to foster a commercial space exploration industry to help it lower the cost of upcoming missions.

And now, the program has started paying dividends: A NASA-funded mission from startup Intuitive Machines saw its Nova-C lander, named Odysseus, become the first privately developed spacecraft to successfully complete a soft landing on the moon’s surface.

“We’ve fundamentally changed the economics of landing on the moon,” CEO and cofounder Steve Altemus said at a news conference following the landing. “And we’ve kicked open the door for a robust, thriving cislunar economy in the future.”

Despite the momentous nature of the achievement, the touchdown wasn’t as smooth as the company may have hoped. Odysseus came in much faster than expected and missed its intended landing spot, which resulted in the spacecraft toppling over on one side. That meant some of its antennae ended up pointing at the ground, limiting the vehicle’s ability to communicate.

It turned out that this was because engineers had forgotten to flick a safety switch before launch, disabling the spacecraft’s range-finding lasers. They had to jury-rig a new landing system that relied on optical cameras while the mission was already underway. The company acknowledged to Reuters that a pre-flight check of the lasers would have caught the problem, but the test was skipped because it would have been time-consuming and costly.

In hindsight, that might seem like an easily avoidable hiccup, but this kind of cost-consciousness is exactly why NASA is backing smaller private firms. The mission received $118 million from the agency via its Commercial Lunar Payload Services (CLPS) program, which is paying various private space firms to ferry cargo to the moon for its upcoming manned Artemis missions.

The Intuitive Machines mission cost around $200 million, significantly less than a NASA-led mission would have. But it’s not just bargain prices the agency is after; it also wants providers that can launch more quickly, and the redundancy that comes from having multiple options.

Other companies involved include Astrobotic, which nearly clinched the title of first private company on the moon before propulsion problems scuppered its January mission, and Firefly Aerospace, which is due to launch its first cargo mission later this year.

NASA leaning on private companies to help complete its missions is nothing new. But both the agency and the companies themselves see this as something more than simple one-off launch contracts.

“The goal here is for us to investigate the moon in preparation for Artemis, and really to do business differently for NASA,” Sue Lederer, CLPS project scientist, said during a recent press conference, according to Space.com. “One of our main goals is to make sure that we develop a lunar economy.”

What that economy would look like is still unclear. Alongside NASA instruments, Odysseus was carrying six commercial payloads, including sculptures made by artist Jeff Koons, a “secure lunar repository” of humanity’s knowledge, and an insulating material called Omni-Heat Infinity made by Columbia Sportswear.

Writing for The Conversation, David Flannery, a planetary scientist at Queensland University of Technology in Australia, suggests that once the novelty wears off, more publicity-focused payloads may prove to be an unreliable source of income. Government contracts will likely make up the bulk of these companies’ revenue, but for a true lunar economy to get into gear, that won’t be enough.

Another possibility that’s often touted is mining local resources. Candidates include water ice, which can be used to support astronauts or to make hydrogen fuel for rockets, and helium-3, which can be used in ultra-cold cryogenic refrigerators or potentially serve as fuel in putative future fusion reactors.

Whether that ever turns out to be practical remains to be seen, but Altemus says the rapid progress we’ve seen since the US declared the moon a strategic interest in 2018 makes him optimistic.

“Today, over a dozen companies are building landers,” he told the BBC. “In turn, we’ve seen an increase in payloads, science instruments, and engineering systems being built for the moon. We are seeing that economy start to catch up because the prospect of landing on the moon exists.”

Image Credit: NASA JPL

Gene Silencing Slashes Cholesterol in Mice—No Gene Edits Required

With just one shot, scientists have slashed cholesterol levels in mice. The treatment lasted for at least half their lives.

The shot may sound like gene editing, but it’s not. Instead, it relies on an up-and-coming method to control genetic activity—without directly changing DNA letters. Called “epigenetic editing,” the technology targets the molecular machinery that switches genes on or off.

Rather than rewriting genetic letters, which can cause unintended DNA swaps, epigenetic editing could potentially be safer as it leaves the cell’s original DNA sequences intact. Scientists have long eyed the method as an alternative to CRISPR-based editing to control genetic activity. But so far, it has only been proven to work in cells grown in petri dishes.

The new study, published this week in Nature, is a first proof of concept that the strategy also works inside the body. With just a single dose of the epigenetic editor infused into the bloodstream, the mice’s cholesterol levels rapidly dropped, and stayed low for nearly a year without notable side effects.

High cholesterol is a major risk factor for heart attacks, strokes, and blood vessel diseases. Millions of people rely on daily medication to keep its levels in check, often for years or even decades. A simple, long-lasting shot could be a potential life-changer.

“The advantage here is that it’s a one-and-done treatment, instead of taking pills every day,” study author Dr. Angelo Lombardo at the San Raffaele Scientific Institute told Nature.

Beyond cholesterol, the results showcase the potential of epigenetic editing as a powerful emerging tool to tackle a wide range of diseases, including cancer.

To Dr. Henriette O’Geen at the University of California, Davis, it’s “the beginning of an era of getting away from cutting DNA” but still silencing genes that cause disease, paving the way for a new family of cures.

Leveling Up

Gene editing is revolutionizing biomedical science, with CRISPR-Cas9 leading the charge. In the last few months, the United Kingdom and the US have both given the green light for a CRISPR-based gene editing therapy for sickle cell disease and beta thalassemia.

These therapies work by replacing a dysfunctional gene with a healthy version. While effective, this requires cutting through DNA strands, which could lead to unexpected snips elsewhere in the genome. Some have even dubbed CRISPR-Cas9 a type of “genomic vandalism.”

Editing the epigenome sidesteps these problems.

Literally meaning “above” the genome, epigenetics is the process by which cells control gene expression. It’s how cells form different identities—becoming, for example, brain, liver, or heart cells—during early development, even though all cells harbor the same genetic blueprint. Epigenetics also connects environmental factors—such as diet—with gene expression by flexibly controlling gene activity.

All this relies on myriad chemical “tags” that mark our genes. Each tag has a specific function. Methylation, for example, shuts a gene down. Like sticky notes, the tags can be easily added or removed with the help of their designated proteins—without mutating DNA sequences—making it an intriguing way to manipulate gene expression.

Unfortunately, the epigenome’s flexibility could also undermine its usefulness for long-term treatments.

When cells divide, they hold onto all their DNA—including any edited changes. However, epigenetic tags are often wiped out, allowing new cells to start with a clean slate. It’s not so problematic in cells that normally don’t divide once mature—for example, neurons. But for cells that constantly renew, such as liver cells, any epigenetic edits could rapidly dwindle.

Researchers have long debated whether epigenetic editing is durable enough to work as a drug. The new study took the concern head on by targeting a gene highly expressed in the liver.

Teamwork

Meet PCSK9, a protein that hampers the liver’s ability to clear low-density lipoprotein (LDL), or “bad cholesterol,” from the blood. Its gene has long been in the crosshairs for lowering cholesterol in both pharmaceutical and gene editing studies, making it a perfect target for epigenetic control.

“It’s a well-known gene that needs to be shut off to decrease the level of cholesterol in the blood,” said Lombardo.

The end goal is to artificially methylate the gene and thus silence it. The team first turned to a family of designer molecules called zinc-finger proteins. Before the advent of CRISPR-based tools, these were a favorite for manipulating genetic activity.

Zinc-finger proteins can be designed to specifically home in on genetic sequences like a bloodhound. After screening many possibilities, the team found an efficient candidate that specifically targets PCSK9 in liver cells. They then linked this “carrier” to three protein fragments that collaborate to methylate DNA.

The fragments were inspired by a group of natural epigenetic editors that spring to life during early embryo development. Our genome is dotted with viral sequences, relics of past infections that are passed down through generations. Methylation silences this viral genetic “junk,” with effects often lasting an entire lifetime. In other words, nature has already come up with a long-lasting epigenetic editor, and the team tapped into its genius solution.

To deliver the editor, the researchers encoded the protein sequences into a single designer mRNA sequence—which the cells can use to produce new copies of the proteins, like in mRNA vaccines—and encapsulated it in a custom nanoparticle. Once injected into mice, the nanoparticles made their way into the liver and released their payloads. Liver cells rapidly adjusted to the new command and made the proteins that shut down PCSK9 expression.

In just two months, the mice’s PCSK9 protein levels dropped by 75 percent. The animals’ cholesterol also rapidly decreased and stayed low until the end of the study nearly a year later. The actual duration could be far longer.

Unlike gene editing, the strategy is hit-and-run, explained Lombardo. The epigenetic editors didn’t stay around inside the cell, but their therapeutic effects lingered.

As a stress test, the team performed a surgical procedure causing the liver cells to divide. This could potentially wipe out the edit. But they found it lasted multiple generations, suggesting the edited cells formed a “memory” of sorts that is heritable.

Whether these long-lasting results would translate to humans is unknown. We have far longer lifespans compared to mice and may require multiple shots. Specific aspects of the epigenetic editor also need to be reworked to better tailor them for human genes.

Meanwhile, other attempts at slashing high cholesterol levels using base editing—a type of gene editing—have already shown promise in a small clinical trial.

But the study adds to the burgeoning field of epigenetic editors. About a dozen startups are focusing on the strategy to develop therapies for a wide range of diseases, with one already in clinical trials to combat stubborn cancers.

The scientists believe it’s the first time anyone has shown that a one-shot approach can lead to long-lasting epigenetic effects in living animals, Lombardo said. “It opens up the possibility of using the platform more broadly.”

Image Credit: Google DeepMind / Unsplash

Amazon’s Billion-Dollar Investment Arm Targets Generative AI in Robotics

Last year, Amazon announced the next step for its growing robotic workforce. A new system, dubbed Sequoia, linked robots from across a warehouse into a single automated team that the company said significantly increased the efficiency of its operations.

The tech giant is now looking to fund a newer, smarter generation of robots. In an interview with The Financial Times, Amazon’s Franziska Bossart said the company’s billion-dollar industrial innovation fund will accelerate investments in startups combining AI and robotics.

“Generative AI holds a lot of promise for robotics and automation,” said Bossart, who heads up the fund. “[It’s an area] we are going to focus on this year.”

Generative Anything

Generative AI is, of course, still hot.

Google, Microsoft, Meta, and others are battling for an early lead in the tech popularized by OpenAI’s ChatGPT. The algorithms are well-known for generating text, images, and video. But researchers believe their potential is greater. Anything with sufficiently large amounts of data is fair game. This could be the molecular structures of proteins—as we’ve seen—or the mechanical positioning data that helps robots complete real-world tasks.

Recent experiments combining generative AI and robots have already begun to yield some interesting results.

At its simplest, this has involved giving an existing robot a chatbot interface. Thanks to an internet’s worth of training data, the robot is now able to recognize nearby objects and understand nuanced commands. In a Boston Dynamics demo last year, one of the company’s robots became a tour guide thanks to ChatGPT. The bot could assume different personalities and make surprising connections it wasn’t explicitly coded for, like suggesting they consult the IT desk for a question it couldn’t answer.
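As a toy sketch of what “giving a robot a chatbot interface” can involve, consider the following; ask_llm is a hypothetical stand-in for a call to a hosted model like ChatGPT, and the command set is invented for illustration:

```python
# The robot's known primitives; invented for this illustration.
ROBOT_COMMANDS = {"go_to", "pick_up", "describe_surroundings"}

def ask_llm(prompt: str) -> str:
    """Stand-in for a call to a hosted language model. A real system
    would send the prompt over an API; here we return a canned reply."""
    return "go_to(it_desk)"

def handle_user_request(request: str) -> None:
    # Ask the model to translate free-form language into one of the
    # robot's primitives, then validate the reply before acting on it.
    prompt = (
        f"Translate this request into one of {sorted(ROBOT_COMMANDS)} "
        f"with an argument: {request!r}"
    )
    reply = ask_llm(prompt)
    command = reply.split("(")[0]
    if command in ROBOT_COMMANDS:
        print(f"Executing: {reply}")
    else:
        print(f"Refusing unknown command: {reply}")

handle_user_request("I have a question you can't answer. Who can help?")
```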

Other potential applications in robotics include the generation of complex and varied simulations to train robots how to move in the physical world. Similarly, generative algorithms might also make their way into the systems controlling a robot’s movement. Early examples include Dobb-E, a robot that learns tasks from iPhone video data.

Of course, AI for images, text, and video has a clear advantage: Humanity has been stocking the internet with examples for years. Data for robots? Not so much. But that may not be the case much longer. Google and UC Berkeley’s RT-X project is assembling data from 32 robotics labs to build a GPT-4-like foundation model for robotics.

All this has begun to stir up interest from researchers and investors. And it seems Amazon, with its long track record developing and employing robots, is no exception.

Amazon End Effector

A billion dollars ain’t what it used to be. As of today, there are six technology companies valued over a trillion dollars. AI startups are attracting investments in the billions. Indeed, Amazon has separately committed up to $4 billion to OpenAI competitor Anthropic.

Still, that Amazon plans to direct significant funds into AI and robotics startups is notable. For young companies, tens of millions of dollars can be make-or-break. This is especially true given slowing venture capital investment across tech over the last year.

Amazon’s industrial innovation fund, announced in 2022, has already invested in robotics startups, including Agility Robotics. The company, whose Digit robots are being tested in Amazon warehouses, opened a factory to mass-produce the robots last year. It also released a video showing how it might sprinkle in some generative AI magic.

Though there’s no official number on how much cash the Amazon fund still has at the ready, a report in The Wall Street Journal last year suggests there’s a good bit of room to run.

Bossart didn’t mention companies of interest or what kinds of tasks robots using generative AI might accomplish for Amazon. She said the fund would go after startups that help Amazon’s broad goals of increasing efficiency, safety, and delivery speed. Investments will also include a focus on “last mile” deliveries. (Agility’s Digit robot made early headlines for its potential to deliver packages to doorsteps.)

Amazon isn’t alone in its efforts to combine AI and robotics. Google, OpenAI, and others are likewise investing in the area. But of the big tech companies, Amazon has the most obvious practical need for robotics in its operations, which may shape its investments and even provide a ready market for new products in its warehouses or delivery vans.

Even as AI chatbots and image- and video-generating algorithms continue to drive the flashiest headlines, it’s worth keeping an eye on AI in robotics too.

Image Credit: Agility

Could Shipwrecked Tardigrades Have Colonized the Moon?

Just over five years ago, on February 22, 2019, an unmanned space probe was placed in orbit around the moon. Named Beresheet and built by SpaceIL and Israel Aerospace Industries, it was intended to be the first private spacecraft to perform a soft landing. Among the probe’s payload were tardigrades, renowned for their ability to survive in even the harshest climates.

The mission ran into trouble from the start, with the failure of “star tracker” cameras intended to determine the spacecraft’s orientation and thus properly control its motors. Budgetary limitations had imposed a pared-down design, and while the command center was able to work around some problems, things got even trickier on April 11, the day of the landing.

On the way to the moon, the spacecraft had been traveling at high speed, and it needed to be slowed way down to make a soft landing. Unfortunately, during the braking maneuver, a gyroscope failed, blocking the primary engine. At an altitude of 150 meters, Beresheet was still moving at 500 kilometers per hour, far too fast to be stopped in time. The impact was violent—the probe shattered, and its remains were scattered over a distance of around a hundred meters. We know this because the site was photographed by NASA’s Lunar Reconnaissance Orbiter (LRO) satellite on April 22.

Before and after images taken by NASA’s Lunar Reconnaissance Orbiter (LRO) of the Beresheet crash site. Image Credit: NASA/GSFC/Arizona State University

Animals That Can Withstand (Almost) Anything

So, what happened to the tardigrades that were traveling on the probe? Given their remarkable abilities to survive situations that would kill pretty much any other animal, could they have contaminated the moon? Worse, might they be able to reproduce and colonize it?

Tardigrades are microscopic animals that measure less than a millimeter in length. All have neurons, a mouth opening at the end of a retractable proboscis, an intestine containing a microbiota, and four pairs of non-articulated legs ending in claws; most also have two eyes. As small as they are, they share a common ancestor with arthropods such as insects and arachnids.

Most tardigrades live in aquatic environments, but they can be found in any environment, even urban ones. Emmanuelle Delagoutte, a researcher at the French National Center for Scientific Research (CNRS), collects them in the mosses and lichens of the Jardin des Plantes in Paris. To be active (feeding on microalgae such as chlorella, moving, growing, and reproducing), tardigrades need to be surrounded by a film of water. They reproduce sexually, or asexually via parthenogenesis (from an unfertilized egg) or even hermaphroditism, when an individual (which possesses both male and female gametes) self-fertilizes. Once the egg has hatched, the active life of a tardigrade lasts from 3 to 30 months. A total of 1,265 species have been described, including two fossils.

Tardigrades are famous for withstanding conditions harsher than any found on Earth or even on the moon. They can shut down their metabolism by losing up to 95 percent of their body water. Some species synthesize a sugar, trehalose, that acts as an antifreeze, while others synthesize proteins thought to incorporate cellular constituents into an amorphous “glassy” network that offers resistance and protection to each cell.

During dehydration, a tardigrade’s body can shrink to half its normal size. The legs disappear, with only the claws still visible. This state, known as cryptobiosis, persists until conditions for active life become favorable again.

Depending on the species, individuals need more or less time to dehydrate, and not all specimens of a given species manage to return to active life. Dehydrated adults can survive for a few minutes at temperatures as low as -272°C or as high as 150°C and, over the long term, can withstand gamma-ray doses of 1,000 or even 4,400 gray (Gy). By way of comparison, a dose of 10 Gy is fatal for humans, and 40,000 to 50,000 Gy sterilizes all types of material. However, whatever the dose, radiation kills tardigrade eggs. What’s more, the protection afforded by cryptobiosis is not always clear-cut: in Milnesium tardigradum, radiation affects both active and dehydrated animals in the same way.

Image of the species Milnesium tardigradum in its active state. Image Credit: Schokraie E, Warnken U, Hotz-Wagenblatt A, Grohme MA, Hengherr S, et al. (2012), CC BY

Lunar Life?

So, what happened to the tardigrades after they crashed on the moon? Are any of them still viable, buried under the moon’s regolith, the dust that varies in depth from a few meters to several dozen meters?

First of all, they have to have survived the impact. Laboratory tests have shown that frozen specimens of the Hypsibius dujardini species traveling at 3,000 kilometers per hour in a vacuum were fatally damaged when they smashed into sand. However, they survived impacts of 2,600 kilometers per hour or less—and their “hard landing” on the moon, though unwanted, was far slower.

The moon’s surface is not protected from solar particles and cosmic rays, particularly gamma rays, but here too, the tardigrades would be able to resist. In fact, Robert Wimmer-Schweingruber, professor at the University of Kiel in Germany, and his team have shown that the doses of gamma rays hitting the lunar surface are permanent but low compared with the doses mentioned above—10 years’ exposure to gamma rays would correspond to a total dose of around 1 Gy.
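
Using only the figures above, a back-of-the-envelope calculation shows how wide the margin is: at roughly 1 Gy per decade, accumulating even the lowest gamma-ray dose dehydrated tardigrades are known to tolerate would take about ten thousand years.

$$\text{dose rate} \approx \frac{1\ \text{Gy}}{10\ \text{yr}} = 0.1\ \text{Gy/yr}, \qquad \frac{1{,}000\ \text{Gy}}{0.1\ \text{Gy/yr}} = 10{,}000\ \text{yr}$$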

Finally, the tardigrades would have to withstand a lack of water as well as temperatures ranging from -170 to -190°C during the lunar night and 100 to 120°C during the day. A lunar day or night lasts a long time, just under 15 Earth days. The probe itself wasn’t designed to withstand such extremes, and even if it hadn’t crashed, it would have ceased all activity after just a few Earth days.

Unfortunately for the tardigrades, they can’t overcome the lack of liquid water, oxygen, and microalgae—they would never be able to reactivate, much less reproduce. Their colonizing the moon is thus impossible. Still, inactive specimens are on lunar soil and their presence raises ethical questions, as Matthew Silk, an ecologist at the University of Edinburgh, points out. Moreover, at a time when space exploration is taking off in all directions, contaminating other planets could mean we would lose the opportunity to detect extraterrestrial life.

The author thanks Emmanuelle Delagoutte and Cédric Hubas of the Muséum de Paris, and Robert Wimmer-Schweingruber of the University of Kiel, for their critical reading of the text and their advice.

This article is republished from The Conversation under a Creative Commons license. Read the original article in English here or as originally published in French here.

Image Credit: Schokraie E, Warnken U, Hotz-Wagenblatt A, Grohme MA, Hengherr S, et al. (2012), CC BY

This Week’s Awesome Tech Stories From Around the Web (Through February 24)

COMPUTING

Nvidia Hardware Is Eating the World
Lauren Goode | Wired
“Talking to Jensen Huang should come with a warning label. The Nvidia CEO is so invested in where AI is headed that, after nearly 90 minutes of spirited conversation, I came away convinced the future will be a neural net nirvana. I could see it all: a robot renaissance, medical godsends, self-driving cars, chatbots that remember.”

SPACE

The Odysseus Lunar Landing Brings NASA One Step Closer to Putting Boots on the Moon
Jeffrey Kluger | Time
“The networks made much of that 52-year gulf in cosmic history, but Odysseus was significant for two other, more substantive reasons: it marked the first time a spacecraft built by a private company, not by a governmental space program, had managed a lunar landing, and it was the first time any ship had visited a spot so far in the moon’s south, down in a region where ice is preserved in permanently shadowed craters.”

BIOTECH

First Gene-Edited Meat Will Come From Disease-Proof CRISPR Pigs
Michael Le Page | New Scientist
“Pigs that are immune to a disease estimated to cost farmers $2.7 billion a year globally look set to become the first genetically modified farm animals to be used for large-scale meat production. ‘We could very well be the first,’ says Clint Nesbitt of international breeding company Genus, which has created hundreds of the CRISPR-edited pigs in preparation for a commercial launch.”

TECH

Artificial Investment
Elizabeth Lopatto | The Verge
“The AI marketing hype, arguably kicked off by OpenAI’s ChatGPT, has reached a fever pitch: investors and executives have stratospheric expectations for the technology. But the higher the expectations, the easier it is to disappoint. The stage is set for 2024 to be a year of reckoning for AI, as business leaders home in on what AI can actually do right now.”

ENERGY

Scientists Claim AI Breakthrough to Generate Boundless Clean Fusion Energy
Mirjam Guesgen | Vice
“There are many stumbling blocks on the racetrack to nuclear fusion, the reaction at the core of the sun that combines atoms to make energy: Generating more energy than it takes to power the reactors, developing reactor-proof building materials, keeping the reactor free from impurities, and restraining that fuel within it, to name a few. Now, researchers from Princeton University and its Princeton Plasma Physics Laboratory have developed an AI model that could solve that last problem.”

ARTIFICIAL INTELLIGENCE

Google’s AI Boss Says Scale Only Gets You So Far
Will Knight | Wired
“‘My belief is, to get to AGI, you’re going to need probably several more innovations as well as the maximum scale,’ Google DeepMind CEO Demis Hassabis said. ‘There’s no let up in the scaling, we’re not seeing an asymptote or anything. There are still gains to be made. So my view is you’ve got to push the existing techniques to see how far they go, but you’re not going to get new capabilities like planning or tool use or agent-like behavior just by scaling existing techniques. It’s not magically going to happen.'”

COMPUTING

The Quest for a DNA Data Drive
Rob Carlson | IEEE Spectrum
“Data is piling up exponentially, and the rate of information production is increasing faster than the storage density of tape, which will only be able to keep up with the deluge of data for a few more years. …Fortunately, we have access to an information storage technology that is cheap, readily available, and stable at room temperature for millennia: DNA, the material of genes. In a few years your hard drive may be full of such squishy stuff.”

SECURITY

GPT-4 Developer Tool Can Hack Websites Without Human Help
Jeremy Hsu | New Scientist
“That suggests individuals or organizations without hacking expertise could unleash AI agents to carry out cyber attacks. ‘You literally don’t need to understand anything—you can just let the agent go hack the website by itself,’ says Daniel Kang at the University of Illinois Urbana-Champaign. ‘We think this really reduces the expertise needed to use these large language models in malicious ways.'”

TECH

It’s the End of the Web as We Know It
Christopher Mims | The Wall Street Journal
“For decades, seeking knowledge online has meant googling it and clicking on the links the search engine offered up. …But AI is changing all of that, and fast. A new generation of AI-powered ‘answer engines’ could make finding information easier, by simply giving us the answers to our questions rather than forcing us to wade through pages of links. Meanwhile, the web is filling up with AI-generated content of dubious quality. It’s polluting search results, and making traditional search less useful.”

ENERGY

Is This New 50-Year Battery for Real?
Rhett Allain | Wired
“Wouldn’t it be cool if you never had to charge your cell phone? I’m sure that’s what a lot of people were thinking recently, when a company called BetaVolt said it had developed a coin-sized ‘nuclear battery’ that would last for 50 years. Is it for real? Yes it is. Will you be able to buy one of these forever phones anytime soon? Probably not, unfortunately, because—well, physics. Let’s see why.”

Image Credit: Luke Stackpoole / Unsplash

Elon Musk Says First Neuralink Patient Can Move Computer Cursor With Mind

Neural interfaces could present an entirely new way for humans to connect with technology. Elon Musk says the first human user of his startup Neuralink’s brain implant can now move a mouse cursor using their mind alone.

While brain-machine interfaces have been around for decades, they have primarily been research tools that are far too complicated and cumbersome for everyday use. But in recent years, a number of startups have cropped up promising to develop more capable and convenient devices that could help treat a host of conditions.

Neuralink is one of the firms leading that charge. Last September, the company announced it had started recruiting for the first clinical trial of its device after receiving clearance from the US Food and Drug Administration earlier in the year. And in a discussion on his social media platform X last week, Musk announced the company’s first patient was already able to control a cursor roughly a month after implantation.

“Progress is good, patient seems to have made a full recovery…and is able to control the mouse, move the mouse around the screen just by thinking,” Musk said, according to CNN. “We’re trying to get as many button presses as possible from thinking, so that’s what we’re currently working on.”

Controlling a cursor with a brain implant is nothing new—an academic team achieved the same feat as far back as 2006. And competitor Synchron, which makes a BMI that is implanted through the brain’s blood vessels, has been running a trial since 2021 in which volunteers have been able to control computers and smartphones using their mind alone.

Musk’s announcement nonetheless represents rapid progress for a company that only unveiled its first prototype in 2019. And while the company’s technology works on similar principles to previous devices, it promises far higher precision and ease of use.

That’s because each chip features 1,024 electrodes split between 64 threads thinner than a human hair that are inserted into the brain by a “sewing machine-like” robot. That is far more electrodes per unit volume than any previous BMI, which means the device should be capable of recording from many individual neurons at once.

And while most previous BMIs required patients be wired to bulky external computers, the company’s N1 implant is wireless and features a rechargeable battery. That makes it possible to record brain activity during everyday activities, greatly expanding the research potential and prospects for using it as a medical device.

Recording from individual neurons is a capability that has mainly been restricted to animal studies so far, Wael Asaad, a professor of neurosurgery and neuroscience at Brown University, told The Brown Daily Herald, so being able to do the same in humans would be a significant advance.

“For the most part, when we work with humans, we record from what are called local field potentials—which are larger scale recordings—and we’re not actually listening to individual neurons,” he said. “Higher resolution brain interfaces that are fully wireless and allow two-way communication with the brain are going to have a lot of potential uses.”

In the initial clinical trial, the device’s electrodes will be implanted in a brain region associated with motor control. But Musk has espoused much more ambitious goals for the technology, such as treating psychiatric disorders like depression, allowing people to control advanced prosthetic limbs, or even making it possible to eventually merge our minds with computers.

There’s probably a long way to go before that’s in the cards though, Justin Sanchez, from nonprofit research organization Battelle, told Wired. Decoding anything more complicated than basic motor signals or speech will likely require recording from many more neurons in different regions, most likely using multiple implants.

“There’s a huge gap between what is being done today in a very small subset of neurons versus understanding complex thoughts and more sophisticated cognitive kinds of things,” Sanchez said.

So, as impressive as the company’s progress has been so far, it’s likely to be some time before the technology is employed for anything other than a narrow set of medical applications, particularly given its invasiveness. That means most of us will be stuck with our touchscreens for the foreseeable future.

Image Credit: Neuralink

Like a Child, This Brain-Inspired AI Can Explain Its Reasoning

Children are natural scientists. They observe the world, form hypotheses, and test them out. Eventually, they learn to explain their (sometimes endearingly hilarious) reasoning.

AI, not so much. There’s no doubt that deep learning—a type of machine learning loosely based on the brain—is dramatically changing technology. From predicting extreme weather patterns to designing new medications or diagnosing deadly cancers, AI is increasingly being integrated at the frontiers of science.

But deep learning has a massive drawback: The algorithms can’t justify their answers. Often called the “black box” problem, this opacity stymies their use in high-risk situations, such as in medicine. Patients want an explanation when diagnosed with a life-changing disease. For now, deep learning-based algorithms—even if they have high diagnostic accuracy—can’t provide that information.

To open the black box, a team from the University of Texas Southwestern Medical Center tapped the human mind for inspiration. In a study in Nature Computational Science, they combined principles from the study of brain networks with a more traditional AI approach that relies on explainable building blocks.

The resulting AI acts a bit like a child. It condenses different types of information into “hubs.” Each hub is then transcribed into coding guidelines for humans to read—CliffsNotes for programmers that explain, in plain English, the algorithm’s conclusions about the patterns it found in the data. It can also generate fully executable programming code to try out.

Dubbed “deep distilling,” the AI works like a scientist when challenged with a variety of tasks, such as difficult math problems and image recognition. By rummaging through the data, the AI distills it into step-by-step algorithms that can outperform human-designed ones.

“Deep distilling is able to discover generalizable principles complementary to human expertise,” wrote the team in their paper.

Paper Thin

AI sometimes blunders in the real world. Take robotaxis. Last year, some repeatedly got stuck in a San Francisco neighborhood—a nuisance to locals, though it drew a few chuckles. More seriously, self-driving vehicles blocked traffic and ambulances and, in one case, severely injured a pedestrian.

In healthcare and scientific research, the dangers can be high too.

When it comes to these high-risk domains, algorithms “require a low tolerance for error,” the American University of Beirut’s Dr. Joseph Bakarji, who was not involved in the study, wrote in a companion piece about the work.

The barrier for most deep learning algorithms is their inexplicability. They’re structured as multi-layered networks. By taking in tons of raw information and receiving countless rounds of feedback, the network adjusts its connections to eventually produce accurate answers.

This process is at the heart of deep learning. But it struggles when there isn’t enough data or if the task is too complex.

Back in 2021, the team developed an AI that took a different approach, based on “symbolic” reasoning: the neural network encodes explicit rules and experiences by observing the data.

Compared to deep learning, symbolic models are easier for people to interpret. Think of the AI as a set of Lego blocks, each representing an object or concept. They can fit together in creative ways, but the connections follow a clear set of rules.

By itself, symbolic AI is powerful but brittle. It relies heavily on previous knowledge to find building blocks. When challenged with a new situation it has no prior experience of, it can’t think outside the box—and it breaks.

Here’s where neuroscience comes in. The team was inspired by connectomes, which are models of how different brain regions work together. By meshing this connectivity with symbolic reasoning, they made an AI that has solid, explainable foundations, but can also flexibly adapt when faced with new problems.

In several tests, the “neurocognitive” model beat other deep neural networks on tasks that required reasoning.

But can it make sense of data and engineer algorithms to explain it?

A Human Touch

One of the hardest parts of scientific discovery is observing noisy data and distilling a conclusion. This process is what leads to new materials and medications, deeper understanding of biology, and insights about our physical world. Often, it’s a repetitive process that takes years.

AI may be able to speed things up and potentially find patterns that have escaped the human mind. For example, deep learning has been especially useful in the prediction of protein structures, but its reasoning for predicting those structures is tricky to understand.

“Can we design learning algorithms that distill observations into simple, comprehensive rules as humans typically do?” wrote Bakarji.

The new study took the team’s existing neurocognitive model and gave it an additional talent: The ability to write code.

Called deep distilling, the AI groups similar concepts together, with each artificial neuron encoding a specific concept and its connections to others. For example, one neuron might learn the concept of a cat and know it’s different from a dog. Another type handles variability when challenged with a new picture—say, a tiger—to determine if it’s more like a cat or a dog.

These artificial neurons are then stacked into a hierarchy. With each layer, the system increasingly differentiates concepts and eventually finds a solution.

Instead of having the AI crunch as much data as possible, the training is step-by-step—almost like teaching a toddler. This makes it possible to evaluate the AI’s reasoning as it gradually solves new problems.
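
As a toy illustration of the flavor of this approach (not the authors’ implementation), imagine each “concept neuron” as a labeled prototype vector; a decision is then a nearest-concept match whose reasoning can be reported in plain English. The vectors and labels below are invented for the sketch.

```python
# Toy sketch of concept-based, explainable classification.
# The concept vectors are invented for illustration only.
import numpy as np

concepts = {
    "cat": np.array([0.9, 0.1, 0.8]),
    "dog": np.array([0.8, 0.7, 0.2]),
}

def classify(x: np.ndarray) -> str:
    # Measure distance to each named concept, then report the reasoning.
    distances = {name: float(np.linalg.norm(x - v)) for name, v in concepts.items()}
    best = min(distances, key=distances.get)
    print(f"Closest concept: {best}. Distances: {distances}")
    return best

classify(np.array([0.85, 0.3, 0.7]))  # a tiger-ish input lands nearer "cat"
```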

Compared to standard neural network training, the self-explanatory aspect is built into the AI, explained Bakarji.

In a test, the team challenged the AI with Conway’s Game of Life, a classic cellular automaton. First developed in the 1970s, the game involves growing a digital cell into various patterns given a specific set of rules (try it yourself here). Trained on simulated game-play data, the AI was able to predict potential outcomes and transform its reasoning into human-readable guidelines or computer programming code.
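
For readers unfamiliar with the game, its rules are remarkably simple: on each step, a live cell survives if it has two or three live neighbors, and a dead cell comes alive with exactly three. Here’s a minimal NumPy implementation of one step, on a grid that wraps around at the edges:

```python
# One step of Conway's Game of Life on a wrap-around grid.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    # Count live neighbors by summing the grid shifted in all 8 directions.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth with exactly 3 neighbors; survival with 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A "glider," one of the game's famous self-propelling patterns.
glider = np.zeros((8, 8), dtype=int)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
print(step(glider))  # the pattern advances one step across the grid
```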

The AI also worked well in a variety of other tasks, such as detecting lines in images and solving difficult math problems. In some cases, it generated creative computer code that outperformed established methods—and was able to explain why.

Deep distilling could be a boost for physical and biological sciences, where simple parts give rise to extremely complex systems. One potential application for the method is as a co-scientist for researchers decoding DNA functions. Much of our DNA is “dark matter,” in that we don’t know what—if any—role it has. An explainable AI could potentially crunch genetic sequences and help geneticists identify rare mutations that cause devastating inherited diseases.

Outside of research, the team is excited at the prospect of stronger AI-human collaboration.

“Neurosymbolic approaches could potentially allow for more human-like machine learning capabilities,” wrote the team.

Bakarji agrees. The new study goes “beyond technical advancements, touching on ethical and societal challenges we are facing today.” Explainability could work as a guardrail, helping AI systems sync with human values as they’re trained. For high-risk applications, such as medical care, it could build trust.

For now, the algorithm works best when solving problems that can be broken down into concepts. It can’t deal with continuous data, such as video streams.

That’s the next step in deep distilling, wrote Bakarji. It “would open new possibilities in scientific computing and theoretical research.”

Image Credit: 7AV 7AV / Unsplash 

Google Just Released Two Open AI Models That Can Run on Laptops

Last year, Google united its AI units in Google DeepMind and said it planned to speed up product development in an effort to catch up to the likes of Microsoft and OpenAI. The stream of releases in the last few weeks follows through on that promise.

Two weeks ago, Google announced the release of its most powerful AI to date, Gemini Ultra, and reorganized its AI offerings, including its Bard chatbot, under the Gemini brand. A week later, it introduced Gemini Pro 1.5, an updated Pro model that largely matches Gemini Ultra’s performance and also includes an enormous context window—the amount of data you can prompt it with—for text, images, and audio.

Today, the company announced two new models. Going by the name Gemma, the models are much smaller than Gemini Ultra, weighing in at 2 and 7 billion parameters, respectively. Google said the models are strictly text-based (as opposed to multimodal models trained on a variety of data, including text, images, and audio), outperform similarly sized models, and can run on a laptop, desktop, or in the cloud. Before training, Google stripped its datasets of sensitive data like personal information. It also fine-tuned and stress-tested the trained models before release to minimize unwanted behavior.

The models were built and trained with the same technology used in Gemini, Google said, but unlike Gemini, they’re being released under an open license.

That doesn’t mean they’re open-source. Rather, the company is making the model weights available so developers can customize and fine-tune them. They’re also releasing developer tools to help keep applications safe and make them compatible with major AI frameworks and platforms. Google says the models can be employed for responsible commercial usage and distribution—as defined in the terms of use—for organizations of any size.
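
In practice, “open weights” means anyone who accepts the terms can download and run the models locally. Here’s a minimal sketch using the Hugging Face transformers library, assuming the published google/gemma-2b model id and that you’ve accepted the license on Hugging Face:

```python
# Sketch: loading an open-weights model and generating text locally.
# Assumes the transformers library and access to the google/gemma-2b weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # CPU works for a 2B model, just slowly

inputs = tokenizer("Open-weights models let developers", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```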

If Gemini is aimed at OpenAI and Microsoft, Gemma likely has Meta in mind. Meta is championing a more open model for AI releases, most notably for its Llama 2 large language model. Though Llama 2 is sometimes mistaken for open-source, Meta has not released the dataset or code used to train it. Other more open models, like the Allen Institute for AI’s (AI2) recent OLMo models, do include training data and code. Google’s Gemma release is more akin to Llama 2 than OLMo.

“[Open models have] become pretty pervasive now in the industry,” Google’s Jeanine Banks said in a press briefing. “And it often refers to open weights models, where there is wide access for developers and researchers to customize and fine-tune models but, at the same time, the terms of use—things like redistribution, as well as ownership of those variants that are developed—vary based on the model’s own specific terms of use. And so we see some difference between what we would traditionally refer to as open source and we decided that it made the most sense to refer to our Gemma models as open models.”

Still, Llama 2 has been influential in the developer community, and open models from the likes of French startup Mistral and others are pushing performance toward state-of-the-art closed models, like OpenAI’s GPT-4. Open models may make more sense in enterprise contexts, where developers can better customize them. They’re also invaluable for AI researchers working on a budget. Google wants to support such research with Google Cloud credits. Researchers can apply for up to $500,000 in credits toward larger projects.

Just how open AI should be is still a matter of debate in the industry.

Proponents of a more open ecosystem believe the benefits outweigh the risks. An open community, they say, can not only innovate at scale, but also better understand, reveal, and solve problems as they emerge. OpenAI and others have argued for a more closed approach, contending the more powerful the model, the more dangerous it could be out in the wild. A middle road might allow an open AI ecosystem but more tightly regulate it.

What’s clear is both closed and open AI are moving at a quick pace. We can expect more innovation from big companies and open communities as the year progresses.

Image Credit: Google

This Week’s Awesome Tech Stories From Around the Web (Through February 17)

ARTIFICIAL INTELLIGENCE

OpenAI Teases an Amazing New Generative Video Model Called Sora
Will Douglas Heaven | MIT Technology Review
“OpenAI has built a striking new generative video model called Sora that can take a short text description and turn it into a detailed, high-definition film clip up to a minute long. …The sample videos from OpenAI’s Sora are high-definition and full of detail. OpenAI also says it can generate videos up to a minute long. One video of a Tokyo street scene shows that Sora has learned how objects fit together in 3D: the camera swoops into the scene to follow a couple as they walk past a row of shops.”

ARTIFICIAL INTELLIGENCE

Google’s Flagship AI Model Gets a Mighty Fast Upgrade
Will Knight | Wired
“Google says Gemini Pro 1.5 can ingest and make sense of an hour of video, 11 hours of audio, 700,000 words, or 30,000 lines of code at once—several times more than other AI models, including OpenAI’s GPT-4, which powers ChatGPT. …Gemini Pro 1.5 is also more capable—at least for its size—as measured by the model’s score on several popular benchmarks. The new model exploits a technique previously invented by Google researchers to squeeze out more performance without requiring more computing power.”

ROBOTICS

Surgery in Space: Tiny Remotely Operated Robot Completes First Simulated Procedure at the Space Station
Taylor Nicioli and Kristin Fisher | CNN
“The robot, known as spaceMIRA—which stands for Miniaturized In Vivo Robotic Assistant—performed several operations on simulated tissue at the orbiting laboratory while remotely operated by surgeons from approximately 250 miles (400 kilometers) below in Lincoln, Nebraska. The milestone is a step forward in developing technology that could have implications not just for successful long-term human space travel, where surgical emergencies could happen, but also for establishing access to medical care in remote areas on Earth.”

VIRTUAL REALITY

Our Unbiased Take on Mark Zuckerberg’s Biased Apple Vision Pro Review
Kyle Orland | Ars Technica
“Zuckerberg’s Instagram-posted thoughts on the Vision Pro can’t be considered an impartial take on the device’s pros and cons. Still, Zuckerberg’s short review included its fair share of fair points, alongside some careful turns of phrase that obscure the Quest’s relative deficiencies. To figure out which is which, we thought we’d consider each of the points made by Zuckerberg in his review. In doing so, we get a good viewpoint on the very different angles from which Meta and Apple are approaching mixed-reality headset design.”

FUTURE

Things Get Strange When AI Starts Training Itself
Matteo Wong | The Atlantic
“Over the past few months, Google DeepMind, Microsoft, Amazon, Meta, Apple, OpenAI, and various academic labs have all published research that uses an AI model to improve another AI model, or even itself, in many cases leading to notable improvements. Numerous tech executives have heralded this approach as the technology’s future.”

BIOTECH

Single-Dose Gene Therapy May Stop Deadly Brain Disorders in Their Tracks
Paul McClure | New Atlas
“Researchers have developed a single-dose genetic therapy that can clear protein blockages that cause motor neurone disease, also called amyotrophic lateral sclerosis, and frontotemporal dementia, two incurable neurodegenerative diseases that eventually lead to death. …The researchers found that, in mice, a single dose of CTx1000 targeted only the ‘bad’ [version of the protein] TDP-43, leaving the healthy version of it alone. Not only was it safe, it was effective even when symptoms were present at the time of treatment.”

SCIENCE FICTION

Spike Jonze’s Her Holds Up a Decade Later
Sheon Han | The Verge
“Spike Jonze’s sci-fi love story is still a better depiction of AI than many of its contemporaries. …Upon rewatching it, I noticed that this pre-AlphaGo film holds up beautifully and still offers a wealth of insight. It also doesn’t shy away from the murky and inevitably complicated feelings we’ll have toward AI, and Jonze first expressed those over a decade ago.”

TECH

OpenAI Wants to Eat Google Search’s Lunch
Maxwell Zeff | Gizmodo
“OpenAI is reportedly developing a search app that would directly compete with Google Search, according to The Information on Wednesday. The AI search engine could be a new feature for ChatGPT, or a potentially separate app altogether. Microsoft Bing would allegedly power the service from Sam Altman, which could be the most serious threat Google Search has ever faced.”

SPACE

Here’s What a Solar Eclipse Looks Like on Mars
Isaac Schultz | Gizmodo
“Typically, the Perseverance rover is looking down, scouring the Martian terrain for rocks that may reveal aspects of the planet’s ancient past. But over the last several weeks, the intrepid robot looked up and caught two remarkable views: solar eclipses on the Red Planet, as the moons Phobos and Deimos passed in front of the sun.”

Image Credit: Neeqolah Creative Works / Unsplash

Why the New York Times’ AI Copyright Lawsuit Will Be Tricky to Defend

The New York Times’ (NYT) legal proceedings against OpenAI and Microsoft have opened a new frontier in the ongoing legal challenges brought on by the use of copyrighted data to “train” or improve generative AI.

There are already a variety of lawsuits against AI companies, including one brought by Getty Images against Stability AI, which makes the Stable Diffusion online text-to-image generator. Authors George R.R. Martin and John Grisham have also brought legal cases against ChatGPT owner OpenAI over copyright claims. But the NYT case is not “more of the same” because it throws interesting new arguments into the mix.

The legal action focuses on the value of the training data and a new question relating to reputational damage. It is a potent mix of trademark and copyright claims, one that may test the fair-use defenses typically relied upon.

It will, no doubt, be watched closely by media organizations looking to challenge the usual “let’s ask for forgiveness, not permission” approach to training data. Training data is used to improve the performance of AI systems and generally consists of real-world information, often drawn from the internet.

The lawsuit also presents a novel argument—not advanced by other, similar cases—that’s related to something called “hallucinations,” where AI systems generate false or misleading information but present it as fact. This argument could in fact be one of the most potent in the case.

The NYT case in particular raises three interesting takes on the usual approach. First, that due to its reputation for trustworthy news and information, NYT content has enhanced value and desirability as training data for use in AI.

Second, that due to the NYT’s paywall, the reproduction of articles on request is commercially damaging. Third, that ChatGPT hallucinations are causing reputational damage to the New York Times through, effectively, false attribution.

This is not just another generative AI copyright dispute. The first argument presented by the NYT is that the training data used by OpenAI is protected by copyright, and so they claim the training phase of ChatGPT infringed copyright. We have seen this type of argument run before in other disputes.

Fair Use?

The challenge for this type of attack is the fair-use shield. In the US, fair use is a doctrine in law that permits the use of copyrighted material under certain circumstances, such as in news reporting, academic work, and commentary.

OpenAI’s response so far has been very cautious, but a key tenet in a statement released by the company is that their use of online data does indeed fall under the principle of “fair use.”

Anticipating some of the difficulties such a fair-use defense could cause, the NYT has adopted a slightly different angle. In particular, it seeks to differentiate its data from standard data, pointing to what it claims is the accuracy, trustworthiness, and prestige of its reporting. It claims this creates a particularly desirable dataset.

It argues that as a reputable and trusted source, its articles have additional weight and reliability in training generative AI and are part of a data subset that is given additional weighting in that training.

It argues that by largely reproducing articles upon prompting, ChatGPT is able to deny the NYT, which is paywalled, visitors and revenue it would otherwise receive. This introduction of some aspect of commercial competition and commercial advantage seems intended to head off the usual fair-use defense common to these claims.

It will be interesting to see whether the assertion of special weighting in the training data has an impact. If it does, it sets a path for other media organizations to challenge the use of their reporting in the training data without permission.

The final element of the NYT’s claim presents a novel angle to the challenge. It suggests that damage is being done to the NYT brand through the material that ChatGPT produces. While almost presented as an afterthought in the complaint, it may yet be the claim that causes OpenAI the most difficulty.

This is the argument related to AI hallucinations. The NYT argues the harm is compounded because ChatGPT presents the fabricated information as having come from the NYT.

The newspaper further suggests that consumers may act based on the summary given by ChatGPT, thinking the information comes from the NYT and is to be trusted. The reputational damage is caused because the newspaper has no control over what ChatGPT produces.

This is an interesting challenge to conclude with. Hallucination is a recognized issue with AI-generated responses, and the NYT is arguing that the resulting reputational harm may not be easy to rectify.

The NYT claim opens a number of novel lines of attack, moving the focus from copyright itself onto how ChatGPT presents the copyrighted data to users and the value of that data to the newspaper. This is much trickier for OpenAI to defend.

This case will be watched closely by other media publishers, especially those behind paywalls, and with particular regard to how it interacts with the usual fair-use defense.

If the NYT dataset is recognized as having the “enhanced value” it claims to, it may pave the way for monetization of that dataset in training AI rather than the “forgiveness, not permission” approach prevalent today.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: AbsolutVision / Unsplash 

Scientists Say New Hybrid Beef Rice Could Cost Just a Dollar per Pound

Here’s a type of fusion food you don’t see every day: fluffy, steamed grains of rice, chock-full of beef cells.

It sounds Frankenstein. But the hybrid plant-animal concoction didn’t require any genetic engineering—just a hefty dose of creativity. Devised by Korean scientists, the avant-garde grains are like lab-grown meat with a dose of carbohydrates.

The hybrid rice includes grains grown with beef muscle cells and fatty tissue. Steamed together, the resulting bowl has a light pink hue and notes of cream, butter, coconut oil, and a rich beefy umami.

The rice also packs a nutritional punch, with more carbohydrates, protein, and fat than normal rice. It’s like eating rice with a small bite of beef brisket. Compared to lab-grown meat, the hybrid rice is relatively easy to grow, taking less than a week to make a small batch.

It is also surprisingly affordable. One analysis showed the market price of hybrid rice with full production would be roughly a dollar per pound. All ingredients are edible and meet food safety guidelines in Korea.

Rice is a staple food in much of the world. Protein, however, isn’t. Hybrid rice could supply a dose of much-needed protein without raising more livestock.

“Imagine obtaining all the nutrients we need from cell-cultured protein rice,” said study author Sohyeon Park at Yonsei University in a press release.

The study is the latest entry into a burgeoning field of “future foods”—with lab-grown meat being a headliner—that seek to cut down carbon dioxide emissions while meeting soaring global demand for nutritious food.

“There has been a surge of interest over the past five years in developing alternatives to conventional meat with lower environmental impacts,” said Dr. Neil Ward, an agri-food and climate specialist at the University of East Anglia who was not involved in the study. “This line of research holds promise for the development of healthier and more climate-friendly diets in future.”

Future Food

Many of us share a love for a juicy steak or a glistening burger.

But raising livestock puts enormous pressure on the environment. Their digestion and manure produce significant greenhouse gas emissions, contributing to climate change. They consume copious amounts of resources and land. With standards of living rising across many countries and an ever-increasing global population, demand for protein is rapidly growing.

How can we balance the need to feed a growing world with long-term sustainability? Here’s where “future foods” come in. Scientists have been cooking up all sorts of new-age recipes. Algae, cricket-derived proteins, and 3D-printed food are heading to a futuristic cookbook near you. Lab-grown chicken has already graced menus in upscale restaurants in Washington DC and San Francisco. Meat grown inside soy beans and other nuts has been approved in Singapore.

The problem with nut-based scaffolds, explained the team in their paper, is that they can trigger allergies. Rice, in contrast, has very few allergens. The grain grows rapidly and is a culinary staple for much of the world. While often viewed as a carbohydrate, rice also contains fats, proteins, and minerals such as calcium and magnesium.

“Rice already has a high nutrient level,” said Park. But better yet, it has a structure that can accommodate other cells—including those from animals.

Rice, Rice, Baby

The structure of a single grain of rice is like an urban highway system inside a dome. “Roads” crisscross the grain, intersecting at points but also leaving an abundance of empty space.

This structure provides lots of surface area and room for beef cells to grow, wrote the team. Like a 3D scaffold, the “roads” nudge cells in a certain direction, eventually populating most of the rice grain.

Animal cells and rice proteins don’t normally mix well. To get beef cells to stick to the rice scaffold, the team added a layer of glue made of fish gelatin, a neutral-tasting ingredient commonly used as a thickener in cooking in many Asian countries. The coating linked starchy molecules inside the rice grains to the beef cells and melted away after steaming the grains.

The study used muscle and fat cells. For seven days, the cells rested at the bottom of the rice, mingling with the grains. They thrived, growing twice as fast as they would in a petri dish.

“I didn’t expect the cells to grow so well in the rice,” said Park in the press release.

Rice can rapidly go soft and mushy inside liquids. But the fishy coating withstood the nutrient bath and supported the rice’s internal scaffolds, allowing the beef cells—either muscle or fat—to grow.

Beefy Rice

Future foods need to be tasty to catch on. This includes texture.

Like variations of pasta, different types of rice have a different bite. The hybrid rice expanded after cooking, but with more chew. When boiled or steamed, it was a bit harder and more brittle than normal rice, but with a nutty, slightly sweet and savory taste.

Compared to normal supermarket rice, the hybrid rice packed a nutritious punch. Its carbohydrate, protein, and fat levels all increased, with protein getting the biggest boost.

Eating 100 grams (3.5 ounces) of the hybrid rice is like eating the same amount of plain rice with a bite of lean beef, the authors wrote in the paper.

For all future foods, cost is the elephant in the room. The team did their homework. Their hybrid rice could have a production cycle of just three months, perhaps even shorter with optimized growing procedures. It’s also cost-effective. Rice is far more affordable than beef, and if commercialized, they estimate the price could be around a dollar a pound.

Although the scientists used beef cells in this study, a similar strategy could be used to grow chicken, shrimp, or other proteins inside rice.

Future foods offer a path toward sustainability (although some researchers have questioned the climate impact of lab-grown meat). The new study suggests engineered food can reduce the environmental impact of raising livestock. Even with lab procedures, the carbon footprint of growing hybrid rice is a fraction of that of livestock farming.

While beef-scented rice may not be for everyone, the team is already envisioning “microbeef sushi” using the beef-rice hybrid or producing the grain as a “complete meal.” Because the ingredients are food safe, hybrid rice may easily navigate food regulations on its way to a supermarket near you.

“Now I see a world of possibilities for this grain-based hybrid food. It could one day serve as food relief for famine, military ration, or even space food,” said Park.

Image Credit: Dr. Jinkee Hong / Yonsei University

These Glow-in-the-Dark Flowers Will Make Your Garden Look Like Avatar

The sci-fi dream that gardens and parks would one day glow like Pandora, the alien moon in Avatar, is decades old. Early attempts to splice genes into plants to make them glow date back to the 1980s, but the resulting plants emitted little light and required special plant food.

Then in 2020, scientists made a breakthrough. Adding genes from luminous mushrooms yielded brightly glowing specimens that needed no special care. The team has refined the approach—writing last month they’ve increased their plants’ luminescence as much as 100-fold—and spun out a startup called Light Bio to sell them.

Light Bio received USDA approval in September and this month announced the first continuously glowing plant, named the firefly petunia, is officially available for purchase in the US. The petunias look and grow like their ordinary cousins—green leaves, white flowers—but after sunset, they glow a gentle green. The company is selling the plants for $29 on its website and says a crop of 50,000 will ship in April.

“This is an incredible achievement for synthetic biology. Light Bio is bringing us leaps and bounds closer to our solarpunk dream of living in Avatar’s Pandora,” Jason Kelly, CEO and co-founder of Ginkgo Bioworks, a Light Bio partner, said in a statement.

Glow Up

In synthetic biology, glowing plants and animals have been a staple for years. Scientists will often insert a gene to make an organism glow as visual proof that some intended biological process has taken effect. Keith Wood, Light Bio cofounder and CEO, was a pioneer of the approach in plants. In 1986, he gave tobacco plants a firefly gene that produces luciferin, the molecule behind the bugs’ signature glow. Those plants glowed weakly, but needed special plant food to provide fuel for the chemical reaction. Later work tried genes from bioluminescent bacteria instead, but the plants were similarly dim.

Then in 2020, a team including Light Bio cofounders Karen Sarkisyan and Ilia Yampolsky turned to the luminous mushroom, Neonothopanus nambi. The mushroom runs a chemical reaction involving caffeic acid—a molecule also commonly found in plants—to produce luciferin and light. The scientists spliced the associated genes into tobacco plants and found the plants glowed too, no extra ingredients needed.

They later tried the genes in petunias, found the effect was even more pronounced, and began refining their work. In a paper published in Nature Methods in January, the team added genes from other mushrooms and employed directed evolution to further enhance the luminescence. After experimentation with a few collections of genes, they landed on a combination that worked in multiple species and significantly upped the brightness.

From here, they hope to further increase the luminescence by as much as 10-fold, add different colors to the lineup, and expand their work into different plant varieties.

Lab to Living Room

The plants are a scientific achievement, but the creation and approval of a commercial product is also noteworthy. Prior attempts to offer people glowing plants, including a popular 2013 Kickstarter, failed to materialize.

Last fall, the USDA gave Light Bio the go-ahead to sell their firefly petunias to the general public. The approval concluded the plants as described didn’t pose new risks to agriculture compared to naturally occurring petunias.

Jennifer Kuzma, codirector of the Genetic Engineering and Society Center at North Carolina State University, told Wired last year she would have liked the USDA to do a more thorough review. But scientists recently contacted by Nature did not voice major concerns. The plants are largely grown indoors or in gardens and aren’t considered invasive, lowering the risk the new genes would make their way into other species. Though, as Kuzma noted, that risk may depend on how many are grown and where they take root.

Beyond household appeal, the system at work here could also find its way into agricultural applications. Diego Orzáez, a plant biologist in Spain, is extending the luciferase system to other plants. He envisions such plants beginning to glow only when they’re in trouble, allowing farmers to take quick visual stock of crop health with drones or satellites.

Other new genetically modified plants are headed our way soon too. As of this month, gardeners can buy seeds for bioengineered purple tomatoes high in antioxidants. Another startup is developing a genetically engineered houseplant to filter harmful chemicals from the air. And Pairwise is using CRISPR to make softer kale, seedless berries, and pitless cherries.

“People’s reactions to genetically modified plants are complicated,” Steven Burgess, a plant biologist at the University of Illinois Urbana–Champaign, told Nature. That’s due, in part, to the association with controversial corporations and worry about what we put in our bodies. The new glow-in-the-dark petunias are neither the product of a big company—indeed, Sarkisyan said Light Bio doesn’t plan to be overly combative when it comes to people sharing plant cuttings—nor are they food. But they are compelling.

“They invite people to experience biotechnology from a position of wonder,” Drew Endy told Wired. Apart from conjuring popular sci-fi, perhaps such examples can introduce a wider audience to the possibilities and risks of synthetic biology, kickstart thoughtful conversations, and help people decide for themselves where to draw lines.

Image Credit: Light Bio

AI Is Everywhere—Including Countless Applications You’ve Likely Never Heard Of

Artificial intelligence is seemingly everywhere. Right now, generative AI in particular—tools like Midjourney, ChatGPT, Gemini (previously Bard), and others—is at the peak of hype.

But as an academic discipline, AI has been around for much longer than just the last couple of years. When it comes to real-world applications, many have stayed hidden or relatively unknown. These AI tools are much less glossy than fantasy-image generators—yet they are also ubiquitous.

As AI technologies continue to progress, we’ll only see an increase in AI use across industries. This includes healthcare and consumer tech, but also more concerning uses, such as warfare. Here’s a rundown of some of the wide-ranging AI applications you may be less familiar with.

AI in Healthcare

Various AI systems are already being used in the health field, both to improve patient outcomes and to advance health research.

One of the strengths of computer programs powered by artificial intelligence is their ability to sift through and analyze truly enormous data sets in a fraction of the time it would take a human—or even a team of humans—to accomplish.

For example, AI is helping researchers comb through vast genetic data libraries. By analyzing large data sets, geneticists can home in on genes that could contribute to various diseases, which in turn will help develop new diagnostic tests.

AI is also helping to speed up the search for medical treatments. Selecting and testing treatments for a particular disease can take ages, so leveraging AI’s ability to comb through data can be helpful here, too.

For example, United States-based non-profit Every Cure is using AI algorithms to search through medical databases to match up existing medications with illnesses they might potentially work for. This approach promises to save significant time and resources.

The Hidden AIs

Outside medical research, other fields not directly related to computer science are also benefiting from AI.

At CERN, home of the Large Hadron Collider, a recently developed advanced AI algorithm is helping physicists tackle some of the most challenging aspects of analyzing the particle data generated in their experiments.

Last year, astronomers used an AI algorithm for the first time to identify a “potentially hazardous” asteroid—a space rock that might one day collide with Earth. This algorithm will be a core part of the operations of the Vera C. Rubin Observatory currently under construction in Chile.

One major area of our lives that uses largely “hidden” AI is transportation. Millions of flights and train trips are coordinated by AI all over the world. These AI systems are meant to optimize schedules to reduce costs and maximize efficiency.

Artificial intelligence can also manage real-time road traffic by analyzing traffic patterns, volume, and other factors, and then adjusting traffic lights and signals accordingly. Navigation apps like Google Maps also use AI optimization algorithms to find the best route.
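
Production systems layer live traffic data and prediction on top, but at the core of any routing engine is classic shortest-path search. Here’s a minimal sketch of Dijkstra’s algorithm over a hypothetical road graph, with edge weights standing in for travel times:

```python
# Dijkstra's algorithm: the textbook core of route planning.
import heapq

roads = {  # hypothetical travel times in minutes between intersections
    "A": [("B", 5), ("C", 2)],
    "B": [("D", 4)],
    "C": [("B", 1), ("D", 7)],
    "D": [],
}

def fastest_route(start: str, goal: str):
    queue = [(0, start, [start])]  # (elapsed minutes, node, path so far)
    visited = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, cost in roads[node]:
            heapq.heappush(queue, (minutes + cost, nxt, path + [nxt]))
    return None  # goal unreachable

print(fastest_route("A", "D"))  # (7, ['A', 'C', 'B', 'D'])
```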

AI is also present in various everyday items. Robot vacuum cleaners use AI software to process all their sensor inputs and deftly navigate our homes.

The most cutting-edge cars use AI in their suspension systems so passengers can enjoy a smooth ride.

Of course, there is also no shortage of more quirky AI applications. A few years ago, UK-based brewery startup IntelligentX used AI to make custom beers for its customers. Other breweries are also using AI to help them optimize beer production.

And Meet the Ganimals is a “collaborative social experiment” from MIT Media Lab, which uses generative AI technologies to come up with new species that have never existed before.

AI Can Also Be Weaponized

On a less lighthearted note, AI also has many applications in defense. In the wrong hands, some of these uses can be terrifying.

For example, some experts have warned AI can aid the creation of bioweapons. This could happen through gene sequencing, helping non-experts easily produce risky pathogens such as novel viruses.

Where active warfare is taking place, military powers can design warfare scenarios and plans using AI. If a power uses such tools without applying ethical considerations or even deploys autonomous AI-powered weapons, it could have catastrophic consequences.

AI has been used in missile guidance systems to maximize the effectiveness of a military’s operations. It can also be used to detect covertly operating submarines.

In addition, AI can be used to predict and identify the activities and movements of terrorist groups, helping intelligence agencies come up with preventive measures. Because these AI systems have complex structures, they require significant processing power to deliver real-time insights.

Much has also been said about how generative AI is supercharging people’s abilities to produce fake news and disinformation. This has the potential to affect the democratic process and sway the outcomes of elections.

AI is present in our lives in so many ways, it is nearly impossible to keep track. Its myriad applications will affect us all.

This is why ethical and responsible use of AI, along with well-designed regulation, is more important than ever. This way we can reap the many benefits of AI while making sure we stay ahead of the risks.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Michael Dziedzic / Unsplash

An Antibiotic You Inhale Can Deliver Medication Deep Into the Lungs

We’ve all been more aware of lung health since Covid-19.

However, for people with asthma and chronic obstructive pulmonary disease (COPD), dealing with lung problems is a lifelong struggle. Those with COPD suffer from highly inflamed lung tissue that swells and obstructs airways, making it hard to breathe. The disease is common, with more than three million annual cases in the US alone.

Although manageable, there is no cure. One problem is that lungs with COPD pump out tons of viscous mucus, which forms a barrier preventing treatments from reaching lung cells. The slimy substance—when not coughed out—also attracts bacteria, further aggravating the condition.

A new study in Science Advances describes a potential solution. Scientists have developed a nanocarrier to shuttle antibiotics into the lungs. Like a biological spaceship, the carrier has “doors” that open and release antibiotics inside the mucus layer to fight infections.

The “doors” themselves are also deadly. Made from a small protein, they rip apart bacterial membranes and clean up their DNA to rid lung cells of chronic infection.

The team engineered an inhalable version of an antibiotic using the nanocarrier. In a mouse model of COPD, the treatment revived their lung cells in just three days. Their blood oxygen levels returned to normal, and previous signs of lung damage slowly healed.

“This immunoantibacterial strategy may shift the current paradigm of COPD management,” the team wrote in the article.

Breathe Me

Lungs are extremely delicate. Picture thin but flexible layers of cells separated into lobes to help coordinate oxygen flow into the body. Once air flows through the windpipe, it rapidly disperses among a complex network of branches, filling thousands of air sacs that supply the body with oxygen while ridding it of carbon dioxide.

These structures are easily damaged, and smoking is a common trigger. Cigarette smoke causes surrounding cells to pump out a slimy substance that obstructs the airway and coats air sacs, making it difficult for them to function normally.

In time, the mucus builds a sort of “glue” that attracts bacteria and condenses into a biofilm. The barrier further blocks oxygen exchange and changes the lung’s environment into one favorable for bacteria growth.

One way to stop the downward spiral is to obliterate the bacteria. Broad-spectrum antibiotics are the most widely used treatment. But because of the slimy protective layer, they can’t easily reach bacteria deep inside lung tissues. Even worse, long-term treatment increases the chance of antibiotic resistance, making it even more difficult to wipe out stubborn bacteria.

But the protective layer has a weakness: It’s just a little bit too sour. Literally.

Open-Door Policy

Like a lemon, the slimy layer is slightly more acidic than healthy lung tissue. This quirk gave the team an idea for an ideal antibiotic carrier that would only release its payload in an acidic environment.

The team made hollow nanoparticles out of silica—a flexible biomaterial—filled them with a common antibiotic, and added “doors” to release the drugs.

These openings are controlled by additional short protein sequences that work like “locks.” In normal airway and lung environments, they fold up at the door, essentially sequestering the antibiotics inside the bubble.

When the carriers are released into lungs with COPD, the local acidity changes the structure of the lock protein. The doors open and release antibiotics directly into the mucus and biofilm, breaking through the bacterial defenses and targeting the bugs on their home turf.
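
As a rough mental model of that gating logic (not the team’s actual chemistry, which depends on a peptide gradually refolding in acid rather than a single hard cutoff), you can think of each carrier as a payload plus a pH-triggered switch; the threshold below is invented for illustration.

```python
from dataclasses import dataclass

GATE_THRESHOLD_PH = 6.5  # invented cutoff for this toy model

@dataclass
class Nanocarrier:
    payload_mg: float
    doors_open: bool = False

    def respond_to_ph(self, ph: float) -> float:
        """Release the payload only when the surroundings are acidic enough."""
        if ph < GATE_THRESHOLD_PH:
            self.doors_open = True
            released, self.payload_mg = self.payload_mg, 0.0
            return released
        return 0.0  # doors stay locked in healthy, near-neutral tissue

carrier = Nanocarrier(payload_mg=1.0)
print(carrier.respond_to_ph(7.0))  # 0.0 -> sealed in a healthy airway
print(carrier.respond_to_ph(6.0))  # 1.0 -> payload released in acidic mucus
```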

In one test, the concoction penetrated a lab-grown biofilm in a petri dish. It was far more effective than a previous type of nanoparticle, largely because the carrier’s doors opened once inside the biofilm; in other nanoparticles, the antibiotics remained trapped.

The carriers could also dig deeper into infected areas. The carrier and the mucus are both negatively charged, and, like the matching poles of two magnets, they repel each other, pushing the carriers deeper into and through the mucus and biofilm layers.

Along the way, the acidity of the mucus slowly changes the carrier’s charge to positive, so that once past the biofilm, the “lock” mechanism opens and releases medication.

The team also tested the nanoparticle’s ability to obliterate bacteria. In a dish, they wiped out multiple common types of infectious bacteria and destroyed their biofilms. The treatment appeared relatively safe. Tests in human fetal lung cells in a dish found minimal signs of toxicity.

Surprisingly, the carrier itself could also destroy bacteria. Inside an acidic environment, its positive charge broke down bacterial membranes. Like popped balloons, the bugs released genetic material into their surroundings, which the carrier swept up.

Damping the Fire

Bacterial infections in the lungs attract overactive immune cells, which leads to swelling. Blood vessels surrounding air sacs also become permeable, making it easier for dangerous molecules to get through. These changes cause inflammation, making it hard to breathe.

In a mouse model of COPD, the inhalable nanoparticle treatment quieted the overactive immune system. Multiple types of immune cells returned to a healthy level of activation—allowing the mice to switch from a highly inflammatory profile to one that combats infections and inflammation.

Mice treated with the inhalable nanoparticle had about 98 percent less bacteria in their lungs, compared to those given the same antibiotic without the carrier.

Wiping out bacteria gave the mice a sigh of relief. They breathed easier. Their blood oxygen levels went up, and blood acidity—a sign of dangerously low oxygen—returned to normal.

Under the microscope, treated lungs regained their normal structure, with sturdier air sacs that slowly recovered from COPD damage. The treated mice also had less of the swelling from fluid buildup that’s commonly seen in lung injuries.

The results, while promising, are only for a smoking-related COPD model in mice. There’s still much we don’t know about the treatment’s long-term consequences.

Although there were no signs of side effects for now, it’s possible the nanoparticles could accumulate inside the lungs over time, eventually causing damage. And though the carrier itself damages bacterial membranes, the therapy mostly relies on the encapsulated antibiotic. With antibiotic resistance on the rise, some drugs are already losing their effectiveness against COPD-related infections.

Then there’s the chance of mechanical damage over time. Repeatedly inhaling silica-based nanoparticles could cause lung scarring in the long term. So, while nanoparticles could shift strategies for COPD management, it’s clear we need follow-up studies, the team wrote.

Image Credit: crystal light / Shutterstock.com

This Week’s Awesome Tech Stories From Around the Web (Through February 10)

COMPUTING

Sam Altman Seeks Trillions of Dollars to Reshape Business of Chips and AI
Keach Hagey | The Wall Street Journal
“The OpenAI chief executive officer is in talks with investors including the United Arab Emirates government to raise funds for a wildly ambitious tech initiative that would boost the world’s chip-building capacity, expand its ability to power AI, among other things, and cost several trillion dollars, according to people familiar with the matter. The project could require raising as much as $5 trillion to $7 trillion, one of the people said.”

AUTOMATION

AI Is Rewiring Coders’ Brains. Yours May Be Next
Will Knight | Wired
“GitHub’s owner, Microsoft, said in its latest quarterly earnings that there are now 1.3 million paid Copilot accounts—a 30 percent increase over the previous quarter—and noted that 50,000 different companies use the software. Dohmke says the latest usage data from Copilot shows that almost half of all the code produced by users is AI-generated. At the same time, he claims there is little sign that these AI programs can operate without human oversight.”

TECH

Google Prepares for a Future Where Search Isn’t King
Lauren Goode | Wired
“[Sundar] Pichai is…experimenting with a new vision for what Google offers—not replacing search, not yet, but building an alternative to see what sticks. ‘This is how we’ve always approached search, in the sense that as search evolved, as mobile came in and user interactions changed, we adapted to it,’ Pichai says, speaking with Wired ahead of the Gemini launch. ‘In some cases we’re leading users, as we are with multimodal AI. But I want to be flexible about the future, because otherwise we’ll get it wrong.'”

BIOTECH

Turbocharged CAR-T Cells Melt Tumors in Mice—Using a Trick From Cancer Cells
Asher Mullard | Nature
“The team treated mice carrying blood and solid cancers with several T-cell therapies boosted with CARD11–PIK3R3, and watched the animals’ tumors melt away. Researchers typically use around one million cells to treat these mice, says Choi, but even 20,000 of the cancer-mutation-boosted T cells were enough to wipe out tumors. ‘That’s an impressively small number of cells,’ says Nick Restifo, a cell-therapy researcher and chief scientist of the rejuvenation start-up company Marble Therapeutics in Boston, Massachusetts.”

COMPUTING

OpenAI Wants to Control Your Computer
Maxwell Zeff | Gizmodo
“OpenAI is reportedly developing ‘agent software,’ that will effectively take over your device and complete complex tasks on your behalf, according to The Information. OpenAI’s agent would work between multiple apps on your computer, performing clicks, cursor movements, and text typing. It’s really a new type of operating system, and it could change the way you interact with your computer altogether.”

TRANSPORTATION

The New Car Batteries That Could Power the Electric Vehicle Revolution
Nicola Jones | Nature
“Researchers are experimenting with different designs that could lower costs, extend vehicle ranges and offer other improvements. …Chinese manufacturers have announced budget cars for 2024 featuring batteries based not on the lithium that powers today’s best electric vehicles (EVs), but on cheap sodium—one of the most abundant elements in Earth’s crust. And a US laboratory has surprised the world with a dream cell that runs in part on air and could pack enough energy to power airplanes.”

SECURITY

I Stopped Using Passwords. It’s Great—and a Total Mess
Matt Burgess | Wired
“For the past month, I’ve been converting as many of my accounts as possible—around a dozen for now—to use passkeys and start the move away from the password for good. Spoiler: When passkeys work seamlessly, it’s a glimpse of a more secure future for millions, if not billions, of people, and a reinvention of how we sign in to websites and services. But getting there for every account across the internet is still likely to prove a minefield and take some time.”

ENERGY

Momentary Fusion Breakthroughs Face Hard Reality
Edd Gent | IEEE Spectrum
“The dream of fusion power inched closer to reality in December 2022, when researchers at Lawrence Livermore National Laboratory (LLNL) revealed that a fusion reaction had produced more energy than what was required to kick-start it. According to new research, the momentary fusion feat required exquisite choreography and extensive preparations, whose high degree of difficulty reveals a long road ahead before anyone dares hope a practicable power source could be at hand.”

ARTIFICIAL INTELLIGENCE

Meet ‘Smaug-72B’: The New King of Open-Source AI
Michael Nuñez | VentureBeat
“What’s most noteworthy about today’s release is that Smaug-72B outperforms GPT-3.5 and Mistral Medium, two of the most advanced proprietary large language models developed by OpenAI and Mistral, respectively, in several of the most popular benchmarks. While the model still falls short of the 90-100 point average indicative of human-level performance, its birth signals that open-source AI may soon rival Big Tech’s capabilities, which have long been shrouded in secrecy.”

ETHICS

AI-Generated Voices in Robocalls Can Deceive Voters. The FCC Just Made Them Illegal
Ali Swenson | Associated Press
“The [FCC] on Thursday outlawed robocalls that contain voices generated by artificial intelligence, a decision that sends a clear message that exploiting the technology to scam people and mislead voters won’t be tolerated. …The agency’s chairwoman, Jessica Rosenworcel, said bad actors have been using AI-generated voices in robocalls to misinform voters, impersonate celebrities, and extort family members. ‘It seems like something from the far-off future, but this threat is already here,’ Rosenworcel told The AP on Wednesday as the commission was considering the regulations.”

Image Credit: NASA Hubble Space Telescope / Unsplash

It Will Take Only a Single SpaceX Starship to Launch a Space Station

SpaceX’s forthcoming Starship rocket will make it possible to lift unprecedented amounts of material into orbit. One of its first customers will be a commercial space station, which will be launched fully assembled in a single mission.

Measuring 400 feet tall and capable of lifting 150 tons to low-Earth orbit, Starship will be the largest and most powerful rocket ever built. But with its first two test launches ending in “rapid unscheduled disassembly”—SpaceX’s euphemism for an explosion—the spacecraft is still a long way from commercial readiness.

That hasn’t stopped customers from signing up for launches. Now, a joint venture between Airbus and Voyager Space that’s building a private space station called Starlab has inked a contract with SpaceX to get it into orbit. The venture plans to put the impressive capabilities of the new rocket to full use by launching the entire 26-foot-diameter space station in one go.

“Starlab’s single-launch solution continues to demonstrate not only what is possible, but how the future of commercial space is happening now,” SpaceX’s Tom Ochinero said in a statement. “The SpaceX team is excited for Starship to launch Starlab to support humanity’s continued presence in low-Earth orbit on our way to making life multiplanetary.”

Starlab is one of several private space stations currently under development as NASA looks to find a replacement for the International Space Station, which is due to be retired in 2030. In 2021, the agency awarded $415 million in funding for new orbital facilities to Voyager Space, Northrop Grumman, and Jeff Bezos’ company Blue Origin. Axiom Space also has a contract with NASA to build a commercial module that will be attached to the ISS in 2026 and then be expanded to become an independent space station around the time its host is decommissioned.

Northrop Grumman and Voyager have since joined forces and brought Airbus on board to develop Starlab together. The space station will have only two modules: a service module that provides solar power and propulsion, and a habitat module with quarters for a crew of four and a laboratory. That compares to the 16 modules that make up the ISS. But at roughly twice the diameter of its predecessor’s modules, those two modules will still provide half the total volume of the ISS.
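
The geometry checks out as a back-of-envelope calculation: a roughly cylindrical module’s volume grows with the square of its diameter, so doubling the width roughly quadruples the volume per meter of length. The dimensions below are illustrative, not official specs.

```python
import math

# Back-of-envelope check, assuming roughly cylindrical modules.
# Illustrative figures: ISS modules are on the order of 4.3 meters
# across, while Starlab's habitat is about 8 meters (26 feet) wide.

def volume_per_meter(diameter_m: float) -> float:
    """Cross-sectional area = volume per meter of module length."""
    return math.pi * (diameter_m / 2) ** 2

ratio = volume_per_meter(8.0) / volume_per_meter(4.3)
print(f"One meter of Starlab holds ~{ratio:.1f}x one meter of ISS module")
# -> ~3.5x, since volume scales with the square of the diameter
```

On those rough numbers, each meter of Starlab packs about three and a half times the volume of a meter of ISS module, so two wide modules can plausibly stand in for around half of the ISS’s 16 narrower ones, assuming broadly comparable lengths.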

The station is designed to provide an orbital base for space agencies like NASA but also private customers and other researchers. The fact that Hilton is helping design the crew quarters suggests they will be catering to space tourists too.

Typically, space stations are launched in parts and assembled in space, but Starlab will instead be fully assembled on the ground. This not only means it will be habitable almost immediately after launch, but it also greatly simplifies the manufacturing process, Voyager CEO Dylan Taylor told TechCrunch recently.

“Let’s say you have a station that requires multiple launches, and then you’re taking the hardware and you’re assembling it [on orbit],” he said. “Not only is that very costly, but there’s a lot of execution risk around that as well. That’s what we were trying to avoid and we’re convinced that that’s the best way to go.”

As Starship is the only rocket big enough to carry such a large payload in one go, it’s not surprising Voyager has chosen SpaceX, even though the vehicle they’re supposed to fly is still under development. The companies didn’t give a timeline for the launch.

If they pull it off, it would be a major feat of space engineering. But it’s still unclear how economically viable this new generation of private space stations will be. Ars Technica points out that it cost NASA more than $100 billion to build the ISS and another $3 billion a year to operate it.

The whole point of NASA encouraging the development of private space stations is to slash that bill, so it’s unlikely to offer anywhere near that much cash. The commercial applications for space stations are fuzzy at best, so whether space tourists and researchers will provide enough money to make up the difference remains to be seen.

But spaceflight is much cheaper these days thanks to SpaceX driving down launch costs, and the ability to launch pre-assembled space stations could further slash the overall bill. So, Starlab may well prove the doubters wrong and usher in a new era of commercial space flight.

Image Credit: Voyager Space

Partially Synthetic Moss Paves the Way for Plants With Designer Genomes

Synthetic biology is already rewriting life.

In late 2023, scientists revealed yeast cells with half their genetic blueprint replaced by artificial DNA. It was a “watershed” moment in an 18-year-long project to design alternate versions of every yeast chromosome. Despite having seven and a half synthetic chromosomes, the cells reproduced and thrived.

A new study moves us up the evolutionary ladder to designer plants.

For a project called SynMoss, a team in China redesigned part of a single chromosome in a type of moss. The resulting part-synthetic plant grew normally and produced spores, making it one of the first living things with multiple cells to carry a partially artificial chromosome.

The custom changes in the plant’s chromosomes are relatively small compared to the synthetic yeast. But it’s a step towards completely redesigning genomes in higher-level organisms.

In an interview with Science, synthetic biologist Dr. Tom Ellis of Imperial College London said it’s a “wake-up call to people who think that synthetic genomes are only for microbes.”

Upgrading Life

Efforts to rewrite life aren’t just to satisfy scientific curiosity.

Tinkering with DNA can help us decipher evolutionary history and pinpoint critical stretches of DNA that keep chromosomes stable or cause disease. The experiments could also help us better understand DNA’s “dark matter.” Littered across the genome, mysterious sequences that don’t encode proteins have long baffled scientists: Are they useful or just remnants of evolution?

Synthetic organisms also make it easier to engineer living things. Bacteria and yeast, for example, are already used to brew beer and pump out life-saving medications such as insulin. By adding, switching, or deleting parts of the genome, it’s possible to give these cells new capabilities.

In one recent study, for example, researchers reprogrammed bacteria to synthesize proteins using amino acid building blocks not seen in nature. In another study, a team turned bacteria into plastic-chomping Terminators that recycle plastic waste into useful materials.

While impressive, bacterial cells are unlike ours: their genetic material floats freely inside the cell, making them potentially easier to rewire.

The Synthetic Yeast Project was a breakthrough. Unlike bacteria, yeast is a eukaryotic cell. Plants, animals, and humans all fall into this category. Our DNA is protected inside a nut-like bubble called a nucleus, making it more challenging for synthetic biologists to tweak.

And as far as eukaryotes go, plants are harder to manipulate than yeast—a single-cell organism—as they contain multiple cell types that coordinate growth and reproduction. Chromosomal changes can play out differently depending on how each cell functions and, in turn, affect the health of the plant.

“Genome synthesis in multicellular organisms remains uncharted territory,” the team wrote in their paper.

Slow and Steady

Rather than building a whole new genome from scratch, the team tinkered with the existing moss genome.

This green fuzz has been extensively studied in the lab. An early analysis of the moss genome found it has 35,000 potential genes—strikingly complex for a plant. All 26 of its chromosomes have been completely sequenced.

For this reason, the plant is a “broadly used model in evolutionary developmental and cell biological studies,” wrote the team.

Moss genes readily adapt to environmental changes, especially those that repair DNA damage from sunlight. Compared to other plants—such as thale cress, another model biologists favor—moss has the built-in ability to tolerate large DNA changes and regenerate faster. Both aspects are “essential” when rewriting the genome, explained the team.

Another perk? The moss can grow into a full plant from a single cell. This ability is a dream scenario for synthetic biologists because altering genes or chromosomes in just one cell can potentially change an entire organism.

Like our own, plant chromosomes look like an “X” with two crossed arms. For this study, the team decided to rewrite the shortest chromosome arm in the plant—chromosome 18. It was still a mammoth project. Previously, the largest replacement was only about 5,000 DNA letters; the new study needed to replace over 68,000 letters.

Replacing natural DNA sequences with “the redesigned large synthetic fragments presented a formidable technical challenge,” wrote the team.

They took a divide-and-conquer strategy. They first designed mid-sized chunks of synthetic DNA before combining them into a single DNA “mega-chunk” of the chromosome arm.

The newly designed chromosome had several notable changes. It was stripped of transposons, or “jumping genes.” These DNA blocks move around the genome, and scientists are still debating if they’re essential for normal biological functions or if they contribute to disease. The team also added DNA “tags” to the chromosome to mark it as synthetic and made changes to how it regulates the manufacturing of certain proteins.

Overall, the changes reduced the size of the chromosome by nearly 56 percent. After inserting the designer chromosome into moss cells, the team nurtured them into adult plants.

A Half-Synthetic Blossom

Even with a heavily edited genome, the synthetic moss was surprisingly normal. The plants readily grew into leafy bushes with multiple branches and eventually produced spores. All reproductive structures were like those found in the wild, suggesting the half-synthetic plants had a normal life cycle and could potentially reproduce.

The plants also maintained their resilience against highly salty environments—a useful adaptation also seen in their natural counterparts.

But the synthetic moss did have some unexpected epigenetic quirks. Epigenetics is the science of how cells turn genes on or off. The synthetic part of the chromosome had a different epigenetic profile compared to natural moss, with more activated genes than usual. This could potentially be harmful, according to the team.

The moss also offered potential insights into DNA’s “dark matter,” including transposons. Deleting these jumping genes didn’t seem to harm the partially synthetic plants, suggesting they might not be essential to their health.

More practically, the results could boost biotechnology efforts using moss to produce a wide range of therapeutic proteins, including ones that combat heart disease, heal wounds, or treat stroke. Moss is already used to synthesize medical drugs. A partially designer genome could alter its metabolism, boost its resilience against infections, and increase yield.

The team’s next step is to replace the entirety of chromosome 18’s short arm with synthetic sequences. They’re aiming to generate an entire synthetic moss genome within 10 years.

It’s an ambitious goal. The moss genome is 40 times bigger than the yeast genome, which took a global collaboration 18 years to rewrite just half of. But with increasingly efficient and cheaper DNA reading and synthesis technologies, the goal isn’t beyond reach.

Similar techniques could also inspire other projects to redesign chromosomes in organisms beyond bacteria and yeast, from plants to animals.

Image Credit: Pyrex / Wikimedia Commons

Scientists ‘Astonished’ Yet Another of Saturn’s Moons May Be an Ocean World

Liquid water is a crucial prerequisite for life as we know it. When astronomers first looked out into the solar system, it seemed Earth was a special case in this respect. They found enormous balls of gas, desert worlds, blast furnaces, and airless hellscapes. But evidence is growing that liquid water isn’t rare at all—it’s just extremely well-hidden.

The list of worlds with subsurface oceans in our solar system is getting longer by the year. Of course, many people are familiar with the most obvious cases: The icy moons Enceladus and Europa are literally bursting at the seams with water. But other less obvious candidates have joined their ranks, including Callisto, Ganymede, Titan, and even, perhaps, Pluto.

Now, scientists argue in a paper in Nature that we may have reason to add yet another long-shot to the list: Saturn’s “Death Star” moon, Mimas. Nicknamed for the giant impact crater occupying around a third of its diameter, Mimas has been part of the conversation for years. But a lack of clear evidence on its surface made scientists skeptical it could be hiding an interior ocean.

The paper, which contains fresh analysis of observations made by the Cassini probe, says changes in the moon’s orbit over time are best explained by the presence of a global ocean deep below its icy crust. The team believes the data also suggests the ocean is very young, explaining why it has yet to make its presence known on the surface.

“The major finding here is the discovery of habitability conditions on a solar system object which we would never, never expect to have liquid water,” Valery Lainey, first author and scientist at the Observatoire de Paris, told Space.com. “It’s really astonishing.”

The Solar System Is Sopping

How exactly do frozen moons on the outskirts of the solar system come to contain whole oceans of liquid water?

In short: Combine heat and a good amount of ice and you get oceans. We know there is an abundance of ice in the outer solar system, from moons to comets. But heat? Not so much. The further out you go, the more the sun fades into the starry background.

Interior ocean worlds depend on another source of heat—gravity. As they orbit Jupiter or Saturn, enormous gravitational shifts flex and warp their insides. The friction from this grinding, called tidal flexing, produces heat which melts ice to form salty oceans.

And the more we look, the more we find evidence of hidden oceans throughout the outer solar system. Some are thought to have more liquid water than Earth, and where there’s liquid water, there just might be life—at least, that’s what we want to find out.

Yet Another Ocean World?

Speculation that Mimas might be an ocean world isn’t new. A decade ago, small shifts in the moon’s orbit measured by Cassini suggested it either had a strangely pancake-shaped core or an interior ocean. Scientists thought the latter was a long shot because—unlike the cracked but largely crater-free surfaces of Enceladus and Europa—Mimas’s surface is pocked with craters, suggesting it has been largely undisturbed for eons.

The new study aimed for a more precise look at the data to better weigh the possibilities. According to modeling using more accurate calculations, the team found a pancake-shaped core is likely impossible. To fit observations, its ends would have to extend beyond the surface: “This is incompatible with observations,” they wrote.

So they looked to the interior ocean hypothesis and modeled a range of possibilities. The models not only fit Mimas’s orbit well, they also suggest the ocean likely begins 20 to 30 kilometers below the surface. The team believes the ocean would likely be relatively young, somewhere between a few million years old and 25 million years old. The combination of depth and youth could explain why the moon’s surface remains largely undisturbed.

But what accounts for this youth? The team suggests relatively recent gravitational encounters—perhaps with other moons or during the formation of Saturn’s ring system, which some scientists believe is also relatively young—may have changed the degree of tidal flexing inside Mimas. The associated heat only recently became great enough to melt ice into oceans.

Take Two

It’s a compelling case, but still unproven. Next steps would involve more measurements taken by a future mission. If these measurements match predictions made in the paper, scientists might confirm the ocean’s existence as well as its depth below the surface.

Studying a young, still-evolving interior ocean could give us clues about how older, more stable oceans formed in eons past. And the more liquid water we find in our own solar system, the more likely it’s common through the galaxy. If water worlds—either in the form of planets or moons—are a dime a dozen, what does that say about life?

This is, of course, still one of the biggest questions in science. But each year, thanks to clues gathered in our solar system and beyond, we’re stepping closer to an answer.

Image Credit: NASA/JPL/Space Science Institute

This AI Is Learning to Decode the ‘Language’ of Chickens

Have you ever wondered what chickens are talking about? Chickens are quite the communicators—their clucks, squawks, and purrs are not just random sounds but a complex language system. These sounds are their way of interacting with the world and expressing joy, fear, and social cues to one another.

Like humans, the “language” of chickens varies with age, environment, and surprisingly, domestication, giving us insights into their social structures and behaviors. Understanding these vocalizations can transform our approach to poultry farming, enhancing chicken welfare and quality of life.

At Dalhousie University, my colleagues and I are conducting research that uses artificial intelligence to decode the language of chickens. It’s a project that’s set to revolutionize our understanding of these feathered creatures and their communication methods, offering a window into their world that was previously closed to us.

Chicken Translator

The use of AI and machine learning in this endeavor is like having a universal translator for chicken speech. AI can analyze vast amounts of audio data. As our research, yet to be peer-reviewed, is documenting, our algorithms are learning to recognize patterns and nuances in chicken vocalizations. This isn’t a simple task—chickens have a range of sounds that vary in pitch, tone, and context.

But by using advanced data analysis techniques, we’re beginning to crack their code. This breakthrough in animal communication is not just a scientific achievement; it’s a step towards more humane and empathetic treatment of farm animals.

One of the most exciting aspects of this research is understanding the emotional content behind these sounds. Using natural language processing (NLP), a technology often used to decipher human languages, we’re learning to interpret the emotional states of chickens. Are they stressed? Are they content? By understanding their emotional state, we can make more informed decisions about their care and environment.
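
Our actual models are more involved than this, but a generic sketch of this kind of bioacoustic classification looks something like the code below: summarize each recorded clip as audio features, then train a standard classifier on clips labeled with presumed emotional states. The file names, labels, and model choice here are hypothetical stand-ins.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(path: str) -> np.ndarray:
    """Average MFCCs give a compact fingerprint of one vocalization."""
    audio, sample_rate = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=13)
    return mfcc.mean(axis=1)

# Hypothetical training clips with labels from human annotation.
clips = ["cluck_001.wav", "cluck_002.wav", "squawk_003.wav"]
labels = ["content", "content", "distressed"]

X = np.stack([clip_features(path) for path in clips])
model = RandomForestClassifier(n_estimators=100).fit(X, labels)

# Predict the presumed emotional state of a new recording.
print(model.predict([clip_features("new_recording.wav")]))
```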

Non-Verbal Chicken Communication

In addition to vocalizations, our research delves into non-verbal cues to gauge emotions in chickens. In a preprint (not yet peer-reviewed) paper, we examine whether chickens’ eye blinks and facial temperatures might be reliable indicators of their emotional states.

By using non-invasive methods like video and thermal imaging, we’ve observed changes in temperature around the eye and head regions, as well as variations in blinking behavior, which appear to be responses to stress. These preliminary findings are opening new avenues in understanding how chickens express their feelings, both behaviorally and physiologically, providing us with additional tools to assess their well-being.

Happier Fowl

This project isn’t just about academic curiosity; it has real-world implications. In the agricultural sector, understanding chicken vocalizations can lead to improved farming practices. Farmers can use this knowledge to create better living conditions, leading to healthier and happier chickens. This, in turn, can impact the quality of produce, animal health, and overall farm efficiency.

The insights gained from this research can also be applied to other areas of animal husbandry, potentially leading to breakthroughs in the way we interact with and care for a variety of farm animals.

But our research goes beyond just farming practices. It has the potential to influence policies on animal welfare and ethical treatment. As we grow to understand these animals better, we’re compelled to advocate for their well-being. This research is reshaping how we view our relationship with animals, emphasizing empathy and understanding.

Understanding animal communication and behavior can impact animal welfare policies. Image Credit: Unsplash/Zoe Schaeffer

Ethical AI

The ethical use of AI in this context sets a precedent for future technological applications in animal science. We’re demonstrating that technology can and should be used for the betterment of all living beings. It’s a responsibility that we take seriously, ensuring that our advancements in AI are aligned with ethical principles and the welfare of the subjects of our study.

The implications of our research extend to education and conservation efforts as well. By understanding the communication methods of chickens, we gain insights into avian communication in general, providing a unique perspective on the complexity of animal communication systems. This knowledge can be vital for conservationists working to protect bird species and their habitats.

As we continue to make strides in this field, we are opening doors to a new era in animal-human interaction. Our journey into decoding chicken language is more than just an academic pursuit: It’s a step towards a more empathetic and responsible world.

By leveraging AI, we’re not only unlocking the secrets of avian communication but also setting new standards for animal welfare and ethical technological use. It’s an exciting time, as we stand on the cusp of a new understanding between humans and the animal world, all starting with the chicken.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Ben Moreland / Unsplash 

A One-and-Done Injection to Slow Aging? New Study in Mice Opens the Possibility

A preventative anti-aging therapy seems like wishful thinking.

Yet a new study led by Dr. Corina Amor Vegas at Cold Spring Harbor Laboratory describes a treatment that brings the dream to life—at least for mice. Given a single injection in young adulthood, the animals aged more slowly than their peers.

By the equivalent of roughly 65 years of age in humans, the mice were slimmer, could better regulate blood sugar and insulin levels, and had lower inflammation and a more youthful metabolic profile. They even kept up their love for running, whereas untreated seniors turned into couch potatoes.

The shot is made up of CAR (chimeric antigen receptor) T cells. These cells are genetically engineered from the body’s T cells—a type of immune cell adept at hunting down particular targets in the body.

CAR T cells first shot to fame as a revolutionary therapy for previously untreatable blood cancers. They’re now close to tackling other medical problems, such as autoimmune disorders, asthma, liver and kidney diseases, and even HIV.

The new study took a page out of CAR T’s cancer-fighting playbook. But instead of targeting cancer cells, the researchers engineered the cells to hunt down and destroy senescent cells, a type of cell linked to age-related health problems. Often dubbed “zombie cells,” they accumulate with age and pump out a toxic chemical brew that damages surrounding tissues. Zombie cells have been in the crosshairs of longevity researchers and investors alike. Drugs that destroy these cells, called senolytics, are now a multi-billion-dollar industry.

The new treatment, called senolytic CAR T, also turned back the clock when given to elderly mice. As in humans, the risk of diabetes increases with age in mice. With zombie cells cleared from multiple organs, the treated mice could handle sugar rushes without a hitch. Their metabolism improved, and they began jumping around and running like much younger mice.

“If we give it to aged mice, they rejuvenate. If we give it to young mice, they age slower. No other therapy right now can do this,” said Amor Vegas in a press release.

The Walking Dead

Zombie cells aren’t always evil.

They start out as regular cells. As damage to their DNA and internal structures accumulates over time, the body “locks” the cells into a special state called senescence. When young, this process helps prevent cells from turning cancerous by limiting their ability to divide. Although still living, the cells can no longer perform their usual jobs. Instead, they release a complex cocktail of chemicals that alerts the body’s immune system—including T cells—to clear them out. Like spring cleaning, this helps keep the body functioning normally.

With age, however, zombie cells linger. They amp up inflammation, leading to age-related diseases such as cancer, tissue scarring, and blood vessel and heart conditions. Senolytics—drugs that destroy these cells—improve these conditions and increase life span in mice.

But like a pill of Advil, senolytics don’t last long inside the body. To keep zombie cells at bay, repeated doses are likely necessary.

A Perfect Match

Here’s where CAR T cells come in. Back in 2020, Amor Vegas and colleagues designed a “living” senolytic T cell that tracks down and kills zombie cells.

All cells are dotted with protein “beacons” that stick out from their surfaces. Different cell types have unique assortments of these proteins. The team found a protein “beacon” on zombie cells called uPAR. The protein normally occurs at low levels in most organs, but it ramps up in zombie cells, making it a perfect target for senolytic CAR T cells.

In a test, the therapy eliminated senescent cells in mouse models with liver and lung cancers. But surprisingly, the team also found that young mice receiving the treatment had better liver health and metabolism—both of which contribute to age-related diseases.

Can a similar treatment also extend health during aging?

A Living Anti-Aging Drug

The team first injected senolytic CAR T cells into elderly mice aged the equivalent of roughly 65 human years old. Within 20 days, they had lower numbers of zombie cells throughout their bodies, particularly in their livers, fatty tissues, and pancreases. Inflammation levels caused by zombie cells went down, and the mice’s immune profiles reversed to a more youthful state.

In both mice and humans, metabolism tends to go haywire with age. Our ability to handle sugars and insulin decreases, which can lead to diabetes.

With senolytic CAR T therapy, the elderly mice could regulate their blood sugar levels far better than non-treated peers. They also had lower baseline insulin levels after fasting, which rapidly increased when given a sugary treat—a sign of a healthy metabolism.

A potentially dangerous side effect of CAR T is an overzealous immune response. Although the team saw signs of the side effect in young animals at high doses, lowering the amount of the therapy was safe and effective in elderly mice.

Young and Beautiful

Chemical senolytics only last a few hours inside the body. Practically, this means they may need to be consistently taken to keep zombie cells at bay.

CAR T cells, on the other hand, can persist inside the body for over 10 years after an initial infusion. They also “train” the immune system to learn about a new threat—in this case, senescent cells.

“T cells have the ability to develop memory and persist in your body for really long periods, which is very different from a chemical drug,” said Amor Vegas. “With CAR T cells, you have the potential of getting this one treatment, and then that’s it.”

To test how long senolytic CAR T cells can persist in the body, the team infused them into young adult mice and monitored their health as they aged. The engineered cells were dormant until senescent cells began to build up, then they reactivated and readily wiped out the zombie cells.

With just a single shot, the mice aged gracefully. They had lower blood sugar levels, better insulin responses, and were more physically active well into old age.

But mice aren’t people. Their life spans are far shorter than ours. The effects of senolytic CAR T cells may not last as long in our bodies, potentially requiring multiple doses. The treatment can also be dangerous, sometimes triggering a violent immune response that damages organs. Then there’s the cost factor. CAR T therapies are out of reach for most people—a single dose is priced at hundreds of thousands of dollars for cancer treatments.

Despite these problems, the team is cautiously moving forward.

For chronic age-related diseases, that one-and-done potential is a life-changer. “Think about patients who need treatment multiple times per day versus you get an infusion, and then you’re good to go for multiple years,” said Amor Vegas.

Image Credit: Senescent cells (blue) in healthy pancreatic tissue samples from an old mouse treated with CAR T cells as a pup / Cold Spring Harbor Laboratory

This Week’s Awesome Tech Stories From Around the Web (Through February 3)

ARTIFICIAL INTELLIGENCE

I Tested a Next-Gen AI Assistant. It Will Blow You Away
Will Knight | Wired
“When the fruits of the recent generative AI boom get properly integrated into…legacy assistant bots [like Siri and Alexa], they will surely get much more interesting. ‘A year from now, I would expect the experience of using a computer to look very different,’ says Shah, who says he built vimGPT in only a few days. ‘Most apps will require less clicking and more chatting, with agents becoming an integral part of browsing the web.'”

BIOTECH

CRISPR Gene Therapy Seems to Cure Dangerous Inflammatory Condition
Clare Wilson | New Scientist
“Ten people who had the one-off gene treatment that is given directly into the body saw their number of ‘swelling attacks’ fall by 95 percent in the first six months as the therapy took effect. Since then, all but one have had no further episodes for at least a further year, while one person who had the lowest dose of the treatment had one mild attack. ‘This is potentially a cure,’ says Padmalal Gurugama at Cambridge University Hospitals in the UK, who worked on the new approach.”

VIRTUAL REALITY

Apple Vision Pro Review: Magic, Until It’s Not
Nilay Patel | The Verge
“The Vision Pro is an astounding product. It’s the sort of first-generation device only Apple can really make, from the incredible display and passthrough engineering, to the use of the whole ecosystem to make it so seamlessly useful, to even getting everyone to pretty much ignore the whole external battery situation. …But the shocking thing is that Apple may have inadvertently revealed that some of these core ideas are actually dead ends—that they can’t ever be executed well enough to become mainstream.”

ARTIFICIAL INTELLIGENCE

Allen Institute for AI Releases ‘Truly Open Source’ LLM to Drive ‘Critical Shift’ in AI Development
Sharon Goldman | VentureBeat
“While other models have included the model code and model weights, OLMo also provides the training code, training data and associated toolkits, as well as evaluation toolkits. In addition, OLMo was released under an open source initiative (OSI) approved license, with AI2 saying that ‘all code, weights, and intermediate checkpoints are released under the Apache 2.0 License.’ The news comes at a moment when open source/open science AI, which has been playing catch-up to closed, proprietary LLMs like OpenAI’s GPT-4 and Anthropic’s Claude, is making significant headway.”

ROBOTICS

This Robot Can Tidy a Room Without Any Help
Rhiannon Williams | MIT Technology Review
“While robots may easily complete tasks like [picking up and moving things] in a laboratory, getting them to work in an unfamiliar environment where there’s little data available is a real challenge. Now, a new system called OK-Robot could train robots to pick up and move objects in settings they haven’t encountered before. It’s an approach that might be able to plug the gap between rapidly improving AI models and actual robot capabilities, as it doesn’t require any additional costly, complex training.”

FUTURE

People Are Worried That AI Will Take Everyone’s Jobs. We’ve Been Here Before.
David Rotman | MIT Technology Review
“[Karl T. Compton’s 1938] essay concisely framed the debate over jobs and technical progress in a way that remains relevant, especially given today’s fears over the impact of artificial intelligence. …While today’s technologies certainly look very different from those of the 1930s, Compton’s article is a worthwhile reminder that worries over the future of jobs are not new and are best addressed by applying an understanding of economics, rather than conjuring up genies and monsters.”

HEALTH

Experimental Drug Cuts Off Pain at the Source, Company Says
Gina Kolata | The New York Times
“Vertex Pharmaceuticals of Boston announced [this week] that it had developed an experimental drug that relieves moderate to severe pain, blocking pain signals before they can get to the brain. It works only on peripheral nerves—those outside the brain and the spinal cord—making it unlike opioids. Vertex says its new drug is expected to avoid opioids’ potential to lead to addiction.”

SPACE

Starlab—With Half the Volume of the ISS—Will Fit Inside Starship’s Payload Bay
Eric Berger | Ars Technica
“‘We looked at multiple launches to get Starlab into orbit, and eventually gravitated toward single launch options,’ [Voyager Space CTO Marshall Smith] said. ‘It saves a lot of the cost of development. It saves a lot of the cost of integration. We can get it all built and checked out on the ground, and tested and launch it with payloads and other systems. One of the many lessons we learned from the International Space Station is that building and integrating in space is very expensive.’ With a single launch on a Starship, the Starlab module should be ready for human habitation almost immediately, Smith said.”

FUTURE

9 Retrofuturistic Predictions That Came True
Maxwell Zeff | Gizmodo
“Commentators and reporters annually try to predict where technology will go, but many fail to get it right year after year. Who gets it right? More often than not, the world resembles the pop culture of the past’s vision for the future. Looking to retrofuturism, an old version of the future, can often predict where our advanced society will go.”

TECH

Can This AI-Powered Search Engine Replace Google? It Has for Me.
Kevin Roose | The New York Times
“Intrigued by the hype, I recently spent several weeks using Perplexity as my default search engine on both desktop and mobile. …Hundreds of searches later, I can report that even though Perplexity isn’t perfect, it’s very good. And while I’m not ready to break up with Google entirely, I’m now more convinced that AI-powered search engines like Perplexity could loosen Google’s grip on the search market, or at least force it to play catch-up.”

Image Credit: Dulcey Lima / Unsplash

These Technologies Could Axe 85% of CO2 Emissions From Heavy Industry

Heavy industry is one of the most stubbornly difficult areas of the economy to decarbonize. But new research suggests emissions could be reduced by up to 85 percent globally using a mixture of tried-and-tested and upcoming technologies.

While much of the climate debate focuses on areas like electricity, vehicle emissions, and aviation, a huge fraction of carbon emissions comes from hidden industrial processes. In 2022, the sector—which includes things like chemicals, iron and steel, and cement—accounted for a quarter of the world’s emissions, according to the International Energy Agency.

While they are often lumped together, these industries are very different, and the sources of their emissions can be highly varied. That means there’s no silver bullet and explains why the sector has proven to be one of the most challenging to decarbonize.

This prompted researchers from the UK to carry out a comprehensive survey of technologies that could help get the sector’s emissions under control. They found that solutions like carbon capture and storage, switching to hydrogen or biomass fuels, or electrification of key industrial processes could cut out the bulk of the heavy industry carbon footprint.

“Our findings represent a major step forward in helping to design industrial decarbonization strategies and that is a really encouraging prospect when it comes to the future health of the planet,” Dr. Ahmed Gailani, from Leeds University, said in a press release.

The researchers analyzed sectors including iron and steel, chemicals, cement and lime, food and drink, pulp and paper, glass, aluminum, refining, and ceramics. They carried out an extensive survey of all the emissions-reducing technologies that had been proposed for each industry, both those that are well-established and emerging ones.

Across all sectors, they identified four key approaches that could help slash greenhouse gases—switching to low-carbon energy supplies like green hydrogen, renewable electricity, or biomass; using carbon capture and storage to mitigate emissions; modifying or replacing emissions-heavy industrial processes; and using less energy and raw materials to produce a product.

Electrification will likely be an important approach across a range of sectors, the authors found. In industries requiring moderate amounts of heat, natural gas boilers and ovens could be replaced with electric ones. Novel technologies like electric arc furnaces and electric steam crackers could help decarbonize the steel and chemicals industries, respectively, though these technologies are still immature.

Green hydrogen could also play a broad role, both as a fuel for heating and an ingredient in various industrial processes that currently rely on hydrogen derived from fossil fuels. Biomass similarly can be used for heating but could also provide more renewable feedstocks for plastic production.

Some industries, such as cement and chemicals, are particularly hard to tackle because carbon dioxide is produced directly by industrial processes rather than as a byproduct of energy needs. For these sectors, carbon capture and storage will likely be particularly important, say the authors.

In addition, they highlight a range of industry-specific alternative production routes that could make a major dent in emissions. Altogether, they estimate these technologies could slash the average emissions of heavy industry by up to 85 percent compared to the baseline.

It’s important to note that the research, which was reported in Joule, only analyzes the technical feasibility of these approaches. The team did not look into the economics or whether the necessary infrastructure was in place, which could have a big impact on how much of a difference they could really make.

“There are of course many other barriers to overcome,” said Gailani. “For example, if carbon capture and storage technologies are needed but the means to transport CO2 are not yet in place, this lack of infrastructure will delay the emissions reduction process. There is still a great amount of work to be done.”

Nonetheless, the research is the first comprehensive survey of what’s possible when it comes to decarbonizing industry. While bringing these ideas to fruition may take a lot of work, the study shows getting emissions from these sectors under control is entirely possible.

Image Credit: Marek Piwnicki / Unsplash

An AI Just Learned Language Through the Eyes and Ears of a Toddler

Sam was six months old when he first strapped a lightweight camera onto his forehead.

For the next year and a half, the camera captured snippets of his life. He crawled around the family’s pets, watched his parents cook, and cried on the front porch with grandma. All the while, the camera recorded everything he heard.

What sounds like a cute toddler home video is actually a daring concept: Can AI learn language like a child? The results could also reveal how children rapidly acquire language and concepts at an early age.

A new study in Science describes how researchers used Sam’s recordings to train an AI to understand language. With just a tiny portion of one child’s life experience over a year, the AI was able to grasp basic concepts—for example, a ball, a butterfly, or a bucket.

The AI, called Child’s View for Contrastive Learning (CVCL), roughly mimics how we learn as toddlers by matching sight to audio. It’s a very different approach than that taken by large language models like the ones behind ChatGPT or Bard. These models’ uncanny ability to craft essays, poetry, or even podcast scripts has thrilled the world. But they need to digest trillions of words from a wide variety of news articles, screenplays, and books to develop these skills.

Kids, by contrast, learn with far less input and rapidly generalize their learnings as they grow. Scientists have long wondered if AI can capture these abilities with everyday experiences alone.

“We show, for the first time, that a neural network trained on this developmentally realistic input from a single child can learn to link words to their visual counterparts,” study author Dr. Wai Keen Vong at NYU’s Center for Data Science said in a press release about the research.

Child’s Play

Children easily soak up words and their meanings from everyday experience.

At just six months old, they begin to connect words to what they’re seeing—for example, a round bouncy thing is a “ball.” By two years of age, they know roughly 300 words and their concepts.

Scientists have long debated how this happens. One theory says kids learn to match what they’re seeing to what they’re hearing. Another suggests language learning requires a broader experience of the world, such as social interaction and the ability to reason.

It’s hard to tease these ideas apart with traditional cognitive tests in toddlers. But we may get an answer by training an AI through the eyes and ears of a child.

M3GAN?

The new study tapped a rich video resource called SAYCam, which includes data collected from three kids between 6 and 32 months old using GoPro-like cameras strapped to their foreheads.

Twice every week, the cameras recorded around an hour of footage and audio as they nursed, crawled, and played. All audible dialogue was transcribed into “utterances”—words or sentences spoken before the speaker or conversation changes. The result is a wealth of multimedia data from the perspective of babies and toddlers.

For the new system, the team designed two neural networks with a “judge” to coordinate them. One translated first-person visuals into the whos and whats of a scene—is it a mom cooking? The other deciphered words and meanings from the audio recordings.

The two systems were then correlated in time so the AI learned to associate correct visuals with words. For example, the AI learned to match an image of a baby to the words “Look, there’s a baby” or an image of a yoga ball to “Wow, that is a big ball.” With training, it gradually learned to separate the concept of a yoga ball from a baby.

“This provides the model a clue as to which words should be associated with which objects,” said Vong.
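
The study's own code isn't reproduced in this article, but the core recipe—contrastive learning across a vision encoder and a language encoder—is a standard one. Below is a minimal, hypothetical PyTorch sketch; the projection layers, dimensions, and temperature are placeholder assumptions, not details from the paper.

```python
# A minimal contrastive-matching sketch (assumed details, not the
# authors' code): embed frames and utterances in a shared space and
# pull matched pairs together while pushing mismatched pairs apart.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveMatcher(nn.Module):
    def __init__(self, image_dim=512, text_dim=300, embed_dim=128):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, embed_dim)  # stand-in vision head
        self.text_proj = nn.Linear(text_dim, embed_dim)    # stand-in language head
        self.temperature = 0.07                            # assumed constant

    def forward(self, image_feats, text_feats):
        # Normalize so the dot product is cosine similarity.
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        # Similarity of every frame to every utterance in the batch;
        # true frame/utterance pairs sit on the diagonal.
        logits = img @ txt.t() / self.temperature
        targets = torch.arange(len(logits), device=logits.device)
        # Symmetric loss: frames pick their utterance and vice versa.
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2
```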

The team then trained the AI on videos from roughly a year and a half of Sam’s life. Together, it amounted to over 600,000 video frames, paired with 37,500 transcribed utterances. Although the numbers sound large, they’re roughly just one percent of Sam’s daily waking life and peanuts compared to the amount of data used to train large language models.

Baby AI on the Rise

To test the system, the team adapted a common cognitive test used to measure children’s language abilities. They showed the AI four new images—a cat, a crib, a ball, and a lawn—and asked which one was the ball.

Overall, the AI picked the correct image around 62 percent of the time. The performance nearly matched a state-of-the-art algorithm trained on 400 million image and text pairs from the web—orders of magnitude more data than that used to train the AI in the study. They found that linking video images with audio was crucial. When the team shuffled video frames and their associated utterances, the model completely broke down.
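
In code terms, that four-image test is just a nearest-neighbor lookup in the shared embedding space. Continuing the hypothetical sketch above:

```python
# Hypothetical four-alternative test: embed one word (e.g., "ball")
# and four candidate images, then pick the closest image.
def pick_image(matcher, word_feat, image_feats):
    txt = F.normalize(matcher.text_proj(word_feat), dim=-1)      # (D,)
    imgs = F.normalize(matcher.image_proj(image_feats), dim=-1)  # (4, D)
    return int((imgs @ txt).argmax())  # index of the best-matching image
```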

The AI could also “think” outside the box and generalize to new situations.

In another test, it was trained on Sam’s perspective of a picture book as his parent said, “It’s a duck and a butterfly.” Later, he held up a toy butterfly when asked, “Can you do the butterfly?” When challenged with multicolored butterfly images—ones the AI had never seen before—it recognized “butterfly” in three out of four examples with above 80 percent accuracy.

Not all word concepts scored the same. For instance, “spoon” was a struggle. But it’s worth pointing out that, like a tough reCAPTCHA, the training images were hard to decipher even for a human.

Growing Pains

The AI builds on recent advances in multimodal machine learning, which combines text, images, audio, or video to train a machine brain.

With input from just a single child’s experience, the algorithm was able to capture how words relate to each other and link words to images and concepts. It suggests that, for toddlers, hearing words and matching them to what they’re seeing helps build their vocabulary.

That’s not to say other brain processes, such as social cues and reasoning, don’t come into play. Adding these components to the algorithm could potentially improve it, the authors wrote.

The team plans to continue the experiment. For now, the “baby” AI only learns from still image frames and has a vocabulary made up mostly of nouns. Integrating video segments into the training could help the AI learn verbs because video includes movement.

Adding intonation to speech data could also help. Children learn early on that a mom’s “hmm” can have vastly different meanings depending on the tone.

But overall, combining AI and life experiences is a powerful new method to study both machine and human brains. It could help us develop new AI models that learn like children, and potentially reshape our understanding of how our brains learn language and concepts.

Image Credit: Wai Keen Vong

The First 3D Printer to Use Molten Metal in Space Is Headed to the ISS This Week

The Apollo 13 moon mission didn’t go as planned. After an explosion blew off part of the spacecraft, the astronauts spent a harrowing few days trying to get home. At one point, to keep the air breathable, the crew had to cobble together a converter for ill-fitting CO2 scrubbers with duct tape, space suit parts, and pages from a mission manual.

They didn’t make it to the moon, but Apollo 13 was a master class in hacking. It was also a grim reminder of just how alone astronauts are from the moment their spacecraft lifts off. There are no hardware stores in space (yet). So what fancy new tools will the next generation of space hackers use? The first 3D printer to make plastic parts arrived at the ISS a decade ago. This week, astronauts will take delivery of the first metal 3D printer. The machine should arrive at the ISS Thursday as part of the Cygnus NG-20 resupply mission.

The first 3D printer to print metal in space, pictured here, is headed to the ISS. Image Credit: ESA

Built by an Airbus-led team, the printer is about the size of a washing machine—small for metal 3D printers but big for space exploration—and uses high-powered lasers to liquefy metal alloys at temperatures of over 1,200 degrees Celsius (2,192 degrees Fahrenheit). Molten metal is deposited in layers to steadily build small (but hopefully useful) objects, like spare parts or tools.

Astronauts will install the 3D printer in the Columbus Laboratory on the ISS, where the team will conduct four test prints. They then plan to bring these objects home and compare their strength and integrity to prints completed under Earth gravity. They also hope the experiment will demonstrate that the process—which involves much higher temperatures than earlier 3D printers and produces harmful fumes—is safe.

“The metal 3D printer will bring new on-orbit manufacturing capabilities, including the possibility to produce load-bearing structural parts that are more resilient than a plastic equivalent,” Gwenaëlle Aridon, a lead engineer at Airbus said in a press release. “Astronauts will be able to directly manufacture tools such as wrenches or mounting interfaces that could connect several parts together. The flexibility and rapid availability of 3D printing will greatly improve astronauts’ autonomy.”

One of four test prints planned for the ISS mission. Image Credit: Airbus Space and Defence SAS

Taking nearly two days per print job, the machine is hardly a speed demon, and the printed objects will be rough around the edges. Following the first demonstration of partial-gravity 3D printing on the ISS, the development of technologies suitable for orbital manufacturing has been slow. But as the ISS nears the end of its life and private space station and other infrastructure projects ramp up, the technology could find more uses.

The need to manufacture items on-demand will only grow the further we travel from home and the longer we stay there. The ISS is relatively nearby—roughly 250 miles overhead—but astronauts exploring and building a more permanent presence on the moon or Mars will need to repair and replace anything that breaks on their mission.

Ambitiously, and even further out, metal 3D printing could contribute to ESA’s vision of a “circular space economy,” in which material from old satellites, spent rocket stages, and other infrastructure is recycled into new structures, tools, and parts as needed.

Duct tape will no doubt always have a place in every space hacker’s box of tools—but a few 3D printers to whip up plastic and metal parts on the fly certainly won’t hurt the cause.

Image Credit: NASA

How Much Life Has Ever Existed on Earth, and How Much Ever Will?

All organisms are made of living cells. While it is difficult to pinpoint exactly when the first cells came to exist, geologists’ best estimates suggest at least as early as 3.8 billion years ago. But how much life has inhabited this planet since the first cell on Earth? And how much life will ever exist on Earth?

In our new study, published in Current Biology, my colleagues from the Weizmann Institute of Science and Smith College and I took aim at these big questions.

Carbon on Earth

Every year, about 200 billion tons of carbon is taken up through what is known as primary production. During primary production, inorganic carbon—such as carbon dioxide in the atmosphere and bicarbonate in the ocean—is used for energy and to build the organic molecules life needs.

Today, the most notable contributor to this effort is oxygenic photosynthesis, where sunlight and water are key ingredients. However, deciphering past rates of primary production has been a challenging task. In lieu of a time machine, scientists like myself rely on clues left in ancient sedimentary rocks to reconstruct past environments.

In the case of primary production, the isotopic composition of oxygen in the form of sulfate in ancient salt deposits allows for such estimates to be made.

In our study, we compiled all previous estimates of ancient primary production derived through the method above, as well as many others. This productivity census allowed us to estimate that roughly 100 quintillion (or 100 billion billion) tons of carbon have been through primary production since the origin of life.

Big numbers like this are difficult to picture; 100 quintillion tons of carbon is about 100 times the amount of carbon contained within the Earth, a pretty impressive feat for Earth’s primary producers.
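
As a back-of-envelope check (my arithmetic, not a figure from the paper), today’s rate of primary production would need to run for about half a billion years to cycle that much carbon—consistent with average rates in the deep past having been well below today’s:

```python
# How long would today's primary production take to move 100
# quintillion tons of carbon? (Back-of-envelope only.)
total_fixed = 100e18  # tons of carbon ever cycled by primary production
annual_rate = 200e9   # tons of carbon fixed per year today
print(total_fixed / annual_rate)  # 5e8 -> ~500 million years at today's pace
```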

Primary Production

Today, primary production is mainly achieved by plants on land and marine micro-organisms such as algae and cyanobacteria. In the past, the proportion of these major contributors was very different; in the case of Earth’s earliest history, primary production was mainly conducted by an entirely different group of organisms that doesn’t rely on oxygenic photosynthesis to stay alive.

A combination of different techniques has been able to give a sense of when different primary producers were most active in Earth’s past. Examples of such techniques include identifying the oldest forests or using molecular fossils called biomarkers.

In our study, we used this information to explore which organisms have contributed the most to Earth’s historical primary production. We found that despite being late on the scene, land plants have likely contributed the most, though it is also very plausible that cyanobacteria hold that title.

Filamentous cyanobacteria from a tidal pond at Little Sippewissett salt marsh, Falmouth, Mass. Image Credit: Argonne National Laboratory, CC BY-NC-SA

Total Life

By determining how much primary production has ever occurred, and by identifying what organisms have been responsible for it, we were also able to estimate how much life has ever been on Earth.

Today, one may be able to approximate how many humans exist based on how much food is consumed. Similarly, we were able to calibrate a ratio of primary production to how many cells exist in the modern environment.

Despite the large variability in the number of cells per organism and the sizes of different cells, such complications become secondary since single-celled microbes dominate global cell populations. In the end, we were able to estimate that about 10^30 (a nonillion) cells exist today, and between 10^39 (a duodecillion) and 10^40 cells have ever existed on Earth.

How Much Life Will Earth Ever Have?

Save for the ability to move Earth into the orbit of a younger star, the lifetime of Earth’s biosphere is limited. This morbid fact is a consequence of our star’s life cycle. Since its birth, the sun has slowly been getting brighter over the past four and a half billion years as hydrogen has been converted to helium in its core.

Far in the future, about two billion years from now, all of the biogeochemical fail-safes that keep Earth habitable will be pushed past their limits. First, land plants will die off, and then eventually the oceans will boil, and the Earth will return to a largely lifeless rocky planet as it was in its infancy.

But until then, how much life will Earth house over its entire habitable lifetime? Projecting our current levels of primary productivity forward, we estimated that about 10^40 cells will ever occupy the Earth.

A planetary system 100 light-years away in the constellation Dorado is home to the first Earth-size habitable-zone planet, discovered by NASA’s Transiting Exoplanet Survey Satellite. Image Credit: NASA Goddard Space Flight Center

Earth as an Exoplanet

Only a few decades ago, exoplanets (planets orbiting other stars) were just a hypothesis. Now we can not only detect them but also describe many aspects of thousands of far-off worlds around distant stars.

But how does Earth compare to these bodies? In our new study, we have taken a bird’s-eye view of life on Earth and put forward Earth as a benchmark for comparing other planets.

What I find truly interesting, however, is what could have happened in Earth’s past to produce a radically different trajectory and therefore a radically different amount of life that has been able to call Earth home. For example, what if oxygenic photosynthesis never took hold, or what if endosymbiosis never happened?

Answers to such questions are what will drive my laboratory at Carleton University over the coming years.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Mihály Köles / Unsplash 

AI Can Design Totally New Proteins From Scratch—It’s Time to Talk Biosecurity

Two decades ago, engineering designer proteins was a dream.

Now, thanks to AI, custom proteins are a dime a dozen. Made-to-order proteins often have specific shapes or components that give them abilities new to nature. From longer-lasting drugs and protein-based vaccines, to greener biofuels and plastic-eating proteins, the field is rapidly becoming a transformative technology.

Custom protein design depends on deep learning techniques. With large language models—the AI behind OpenAI’s blockbuster ChatGPT—dreaming up millions of structures beyond human imagination, the library of bioactive designer proteins is set to rapidly expand.

“It’s hugely empowering,” Dr. Neil King at the University of Washington recently told Nature. “Things that were impossible a year and a half ago—now you just do it.”

Yet with great power comes great responsibility. As newly designed proteins increasingly gain traction for use in medicine and bioengineering, scientists are now wondering: What happens if these technologies are used for nefarious purposes?

A recent essay in Science highlights the need for biosecurity for designer proteins. Similar to ongoing conversations about AI safety, the authors say it’s time to consider biosecurity risks and policies so custom proteins don’t go rogue.

The essay is penned by two experts in the field. One, Dr. David Baker, the director of the Institute for Protein Design at the University of Washington, led the development of RoseTTAFold—an algorithm that cracked the half-century problem of decoding protein structure from its amino acid sequences alone. The other, Dr. George Church at Harvard Medical School, is a pioneer in genetic engineering and synthetic biology.

They suggest synthetic proteins need barcodes embedded into each new protein’s genetic sequence. If any of the designer proteins becomes a threat—say, potentially triggering a dangerous outbreak—its barcode would make it easy to trace back to its origin.

The system basically provides “an audit trail,” the duo write.

Worlds Collide

Designer proteins are inextricably tied to AI. So are potential biosecurity policies.

Over a decade ago, Baker’s lab used software to design and build a protein dubbed Top7. Proteins are made of building blocks called amino acids, each of which is encoded inside our DNA. Like beads on a string, amino acids are then twirled and wrinkled into specific 3D shapes, which often further mesh into sophisticated architectures that support the protein’s function.

Top7 couldn’t “talk” to natural cell components—it didn’t have any biological effects. But even then, the team concluded that designing new proteins makes it possible to explore “the large regions of the protein universe not yet observed in nature.”

Enter AI. Multiple strategies recently took off to design new proteins at supersonic speeds compared to traditional lab work.

One is structure-based AI similar to image-generating tools like DALL-E. Called diffusion models, these systems learn to remove noise that has been added to known protein structures during training, then generate new, biologically plausible structures by denoising random starting points.
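
As a rough conceptual illustration (not the API of any specific protein model), the generation side of a diffusion model can be sketched like this:

```python
# Conceptual diffusion sampling sketch: start from pure noise and let
# a trained denoiser gradually reveal a plausible structure.
# `denoiser` is a stand-in for a learned network.
import torch

def sample_structure(denoiser, shape, steps=50):
    x = torch.randn(shape)      # random noise in "structure space"
    for t in reversed(range(steps)):
        x = denoiser(x, t)      # each step strips away a little noise
    return x                    # a candidate protein representation
```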

Another strategy relies on large language models. Like ChatGPT, the algorithms rapidly find connections between protein “words” and distill these connections into a sort of biological grammar. The protein strands these models generate are likely to fold into structures the body can decipher. One example is ProtGPT2, which can engineer active proteins with shapes that could lead to new properties.

Digital to Physical

These AI protein-design programs are raising alarm bells. Proteins are the building blocks of life—changes could dramatically alter how cells respond to drugs, viruses, or other pathogens.

Last year, governments around the world announced plans to oversee AI safety. The technology wasn’t positioned as a threat. Instead, the legislators cautiously fleshed out policies that ensure research follows privacy laws and bolsters the economy, public health, and national defense. Leading the charge, the European Union agreed on the AI Act to limit the technology in certain domains.

Synthetic proteins weren’t directly called out in the regulations. That’s great news for making designer proteins, which could be kneecapped by overly restrictive regulation, write Baker and Church. However, new AI legislation is in the works, with the United Nations’ advisory body on AI set to share guidelines on international regulation in the middle of this year.

Because the AI systems used to make designer proteins are highly specialized, they may still fly under regulatory radars—if the field unites in a global effort to self-regulate.

At the 2023 AI Safety Summit, which did discuss AI-enabled protein design, experts agreed documenting each new protein’s underlying DNA is key. Like their natural counterparts, designer proteins are also built from genetic code. Logging all synthetic DNA sequences in a database could make it easier to spot red flags for potentially harmful designs—for example, if a new protein has structures similar to known pathogenic ones.

Biosecurity doesn’t squash data sharing. Collaboration is critical for science, but the authors acknowledge it’s still necessary to protect trade secrets. And like in AI, some designer proteins may be potentially useful but too dangerous to share openly.

One way around this conundrum is to add safety measures directly to the synthesis process itself. For example, the authors suggest adding a barcode—made of random DNA letters—to each new genetic sequence. A synthesis machine scans each submitted DNA sequence and begins building the protein only when it finds the code.

In other words, the original designers of the protein can choose who to share the synthesis with—or whether to share it at all—while still being able to describe their results in publications.

A barcode strategy that ties making new proteins to a synthesis machine would also amp up security and deter bad actors, making it difficult to recreate potentially dangerous products.
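
A toy sketch of that gating logic might look like the following—the barcode length, registry format, and substring matching are all assumptions for illustration, not a scheme from the essay:

```python
# Toy illustration of barcode-gated synthesis: a machine only builds
# sequences containing a registered barcode, creating an audit trail.
import random

BASES = "ACGT"

def make_barcode(length=24, seed=None):
    # A random run of DNA letters acting as a watermark.
    rng = random.Random(seed)
    return "".join(rng.choice(BASES) for _ in range(length))

def authorize(sequence, registry):
    # `registry` maps designers to their barcodes.
    for owner, barcode in registry.items():
        if barcode in sequence:
            return f"build authorized (traceable to {owner})"
    return "build refused: no registered barcode found"

registry = {"lab_a": make_barcode(seed=42)}
order = "ATGGCA" + registry["lab_a"] + "TTGACA"
print(authorize(order, registry))  # build authorized (traceable to lab_a)
```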

“If a new biological threat emerges anywhere in the world, the associated DNA sequences could be traced to their origins,” the authors wrote.

It will be a tough road. Designer protein safety will depend on global support from scientists, research institutions, and governments, the authors write. However, there have been previous successes. Global groups have established safety and sharing guidelines in other controversial fields, such as stem cell research, genetic engineering, brain implants, and AI. Although not always followed—CRISPR babies are a notorious example—for the most part these international guidelines have helped move cutting-edge research forward in a safe and equitable manner.

To Baker and Church, open discussions about biosecurity will not slow the field. Rather, they can rally different sectors and engage the public so custom protein design can further thrive.

Image Credit: University of Washington

This Week’s Awesome Tech Stories From Around the Web (Through January 27)

ARTIFICIAL INTELLIGENCE

New Theory Suggests Chatbots Can Understand Text
Anil Ananthaswamy | Quanta
“Artificial intelligence seems more powerful than ever, with chatbots like Bard and ChatGPT capable of producing uncannily humanlike text. But for all their talents, these bots still leave researchers wondering: Do such models actually understand what they are saying? ‘Clearly, some people believe they do,’ said the AI pioneer Geoff Hinton in a recent conversation with Andrew Ng, ‘and some people believe they are just stochastic parrots.’ …New research may have intimations of an answer.”

FUTURE

Etching AI Controls Into Silicon Could Keep Doomsday at Bay
Will Knight | Wired
“Even the cleverest, most cunning artificial intelligence algorithm will presumably have to obey the laws of silicon. Its capabilities will be constrained by the hardware that it’s running on. Some researchers are exploring ways to exploit that connection to limit the potential of AI systems to cause harm. The idea is to encode rules governing the training and deployment of advanced algorithms directly into the computer chips needed to run them.”

TECH

Google’s Hugging Face Deal Puts ‘Supercomputer’ Power Behind Open-Source AI
Emilia David | The Verge
“Google Cloud’s new partnership with AI model repository Hugging Face is letting developers build, train, and deploy AI models without needing to pay for a Google Cloud subscription. Now, outside developers using Hugging Face’s platform will have ‘cost-effective’ access to Google’s tensor processing units (TPU) and GPU supercomputers, which will include thousands of Nvidia’s in-demand and export-restricted H100s.”

INNOVATION

How Microsoft Catapulted to $3 Trillion on the Back of AI
Tom Dotan | The Wall Street Journal
“Microsoft on Thursday became the second company ever to end the trading day valued at more than $3 trillion, a milestone reflecting investor optimism that one of the oldest tech companies is leading an artificial-intelligence revolution. …One of [CEO Satya Nadella’s] biggest gambles in recent years has been partnering with an untested nonprofit startup—generative AI pioneer OpenAI—and quickly folding its technology into Microsoft’s bestselling products. That move made Microsoft a de facto leader in a burgeoning AI field many believe will retool the tech industry.”

SPACE

Hell Yeah, We’re Getting a Space-Based Gravitational Wave Observatory
Isaac Schultz | Gizmodo
“To put an interferometer in space would vastly reduce the noise encountered by ground-based instruments, and lengthening the arms of the observatory would allow scientists to collect data that is imperceptible on Earth. ‘Thanks to the huge distance traveled by the laser signals on LISA, and the superb stability of its instrumentation, we will probe gravitational waves of lower frequencies than is possible on Earth, uncovering events of a different scale, all the way back to the dawn of time,’ said Nora Lützgendorf, the lead project scientist for LISA, in an ESA release.”

ROBOTICS

General Purpose Humanoid Robots? Bill Gates Is a Believer
Brian Heater | TechCrunch
“The robotics industry loves a good, healthy debate. Of late, one of the most intense ones centers around humanoid robots. It’s been a big topic for decades, of course, but the recent proliferation of startups like 1X and Figure—along with projects from more established companies like Tesla—have put humanoids back in the spotlight. Humanoid robots can, however, now claim a big tech name among their ranks. Bill Gates this week issued a list of ‘cutting-edge robotics startups and labs that I’m excited about.’ Among the names are three companies focused on developing humanoids.”

CRYPTOCURRENCY

Is Cryptocurrency Like Stocks and Bonds? Courts Move Closer to an Answer.
Matthew Goldstein and David Yaffe-Bellany | The New York Times
“How the courts rule could determine whether the crypto industry can burrow deeper into the American financial system. If the SEC prevails, crypto supporters say, it will stifle the growth of a new and dynamic technology, pushing start-ups to move offshore. The government has countered that robust oversight is necessary to end the rampant fraud that cost investors billions of dollars when the crypto market imploded in 2022.”

ENERGY

Solid-State EV Batteries Now Face ‘Production Hell’
Charles J. Murray | IEEE Spectrum
“Producing battery packs that yield 800+ kilometers remains rough going. …’Solid-state is a great technology,’ noted Bob Galyen, owner of Galyen Energy LLC and former chief technology officer for the Chinese battery giant, Contemporary Amperex Technology Ltd (CATL). ‘But it’s going to be just like lithium-ion was in terms of the length of time it will take to hit the market. And lithium-ion took a long time to get there.'”

TECH

I Love My GPT, But I Can’t Find a Use for Anybody Else’s
Emilia David | The Verge
“Though I’ve come to depend on my GPT, it’s the only one I use. It’s not fully integrated into my workflow either, because GPTs live in the ChatGPT Plus tab on my browser instead of inside a program like Google Docs. And honestly, if I wasn’t already paying for ChatGPT Plus, I’d be happy to keep Googling alternative terms. I don’t think I’ll be giving up ‘What’s Another Word For’ any time soon, but unless another hot GPT idea strikes me, I’m still not sure what they’re good for—at least in my job.”

Image Credit: Jonny Caspari / Unsplash

These Engineered Muscle Cells Could Slash the Cost of Lab-Grown Meat

Lab-grown meat could present a kinder and potentially greener alternative to current livestock farming. New specially engineered meat cells could finally bring costs down to a practical level.

While the idea of growing meat in the lab rather than the field would have sounded like sci-fi a decade ago, today there are a bevy of startups vying to bring so-called “cultivated meat” to everyday shops and restaurants.

The big sell is that the technology will allow us to enjoy meat without having to worry about the murky ethics of industrial-scale animal agriculture. There are also more contentious claims that producing meat this way will significantly reduce its impact on the environment.

Both points are likely to appeal to increasingly conscientious consumers. The kicker is that producing meat in a lab currently costs far more than conventional farming, which means that so far these products have only appeared in high-end restaurants.

New research from Tufts University could help change that. The researchers have engineered cow muscle cells to produce one of cultivated meat’s most expensive ingredients by themselves, potentially slashing production costs.

“Products have already been awarded regulatory approval for consumption in the US and globally, although costs and availability remain limiting,” says David Kaplan, from Tufts, who led the research. “I think advances like this will bring us much closer to seeing affordable cultivated meat in our local supermarkets within the next few years.”

The ingredient in question is known as growth factor—a kind of signaling protein that stimulates cells to grow and differentiate into other cell types. When cells are grown outside the body, these proteins must be added artificially to the culture medium to get the cells to proliferate.

But growth factors are extremely expensive and must be sourced from specialist industrial suppliers that normally cater to researchers and the drug industry. The authors say these ingredients can account for as much as 90 percent of the cost of cultured meat production.

So, they decided to genetically engineer cow muscle cells—the key ingredient in cultivated beef—to produce growth factors themselves, removing the need to include them in the growth media. In a paper in Cell Reports Sustainability, they describe how they got the cells to produce fibroblast growth factor (FGF), one of the most critical of these signaling proteins and a significant contributor to the cost of the cultured meat medium used in the study.

Crucially, the researchers did this by editing native genes and dialing their expression up and down, rather than introducing foreign genetic material. That will be important for ultimate regulatory approval, says Andrew Stout, who helped lead the project, because rules are more stringent when genes are transplanted from one species to another.

The approach will still require some work before it’s ready for commercial use, however. The researchers report the engineered cells did grow in the absence of external FGF but at a slower rate. They expect to overcome this by tweaking the timing or levels of FGF production.

And although it’s one of the costliest, FGF isn’t the only growth factor required for lab-grown meat. Whether similar approaches could also cut other growth factors out of the ingredient list remains to be seen.

These products face barriers that go beyond cost as well. Most products so far have focused on things like burgers and chicken nuggets that are made of ground meat. That’s because the complex distribution of tissues like fat, bone, and sinew that you might find in a steak or a fillet of fish is incredibly tough to recreate in the lab.

But if approaches like this one can start to bring the cost of lab-grown meat down to competitive levels, consumers may be willing to trade a little bit of taste and texture for a clear conscience.

Image Credit: Screenroad / Unsplash

A Child Born Deaf Can Hear for the First Time Thanks to Pioneering Gene Therapy

When Aissam Dam had the strange device connected to his ear, he had no idea it was going to change his life.

An 11-year-old boy, Aissam was born deaf due to a single gene mutation. In October 2023, he became the first person in the US to receive a gene therapy that added a healthy version of the mutated gene into his inner ear. Within four weeks, he began to hear sounds.

Four months later, his perception of the world had broadened beyond imagination. For the first time, he heard the buzzing of traffic, learned the timbre of his father’s voice, and wondered at the snipping sound scissors made during a haircut.

Aissam is participating in an ongoing clinical trial testing a one-time gene therapy to restore hearing in kids like him. Due to a mutation in a gene called otoferlin, the children are born deaf and often require hearing aids from birth. The trial is a collaboration between the Children’s Hospital of Philadelphia and Akouos, a subsidiary of the pharmaceutical giant Eli Lilly.

“Gene therapy for hearing loss is something physicians and scientists around the world have been working toward for over 20 years,” said Dr. John Germiller at the Children’s Hospital of Philadelphia, who administered the drug to Aissam, in a press release. “These initial results show that it may restore hearing better than many thought possible.”

While previously tested in mice and non-human primates, the team didn’t know if the therapy would work for Aissam. Even if it did work, they were unsure how it would affect the life of a deaf young adult—essentially introducing him to an entirely new sensory world.

They didn’t have to worry. “There’s no sound I don’t like…they’re all good,” Aissam said to the New York Times.

A Broken Bridge

Hearing isn’t just about picking up sounds; it’s also about translating sound waves into electrical signals our brains can perceive and understand.

At the core of this process is the cochlea, a snail-like structure buried deep inside the inner ear that translates sound waves into electrical signals that are then sent to the brain.

The cochlea is a bit like a roll-up piano keyboard. The structure is lined with over 3,500 wiggly, finger-shaped hairs. Like individual piano keys, each hair cell is tuned to a note. The cells respond when they detect their preferred sound frequency, sending electrical pulses to the auditory parts of the brain. This allows us to perceive sounds, conversations, and music.

For Aissam and over 200,000 people worldwide, these hair cells are unable to communicate with the brain from birth due to a mutation in a gene called otoferlin. Otoferlin is a bridge. It enables the hair cells lining the cochlea to send chemical messages to nearby nerve fibers, activating signals to the brain. The mutated gene cuts the phone line, leading to deafness.

Hearing Helper

In the clinical trial, scientists hoped to restore the connection between inner-ear cells and the brain using a gene therapy to add a dose of otoferlin directly into the inner ear.

This was not straightforward. Otoferlin is a very large gene—too large to fit into a single one of the viral carriers commonly used to deliver genes. In the new trial, the team cleverly broke the gene into two chunks. Each chunk was inserted into a safe viral carrier and shuttled into the hair cells. Once inside the body, the inner-ear cells stitched the two parts back into a working otoferlin gene.

Developing therapies for the inner ear is delicate work. The organ uses a matrix of tissues and liquids to detect different notes and tones. Tweaks can easily alter our perception of sound.

Here, the team carefully engineered a device to inject the therapy into a small liquid-filled nook in the cochlea. From there, the liquid gene therapy could float down the entire length of the cochlea, bathing every inner hair in the treatment.

In mice, the treatment amped up otoferlin levels. In a month, the critters were able to hear with minimal side effects. Another test in non-human primates found similar effects. The therapy slightly altered liver and spleen functions, but its main effects were in the inner ear.

A major hiccup in treating the inner ear is pressure. You’ve likely experienced this—a quick ascent on a flight or a deep dive into the ocean makes the ears pop. Injecting liquids into the inner ear can similarly disrupt things. The team carefully scaled the dose of the treatment in mice and non-human primates and made a tiny vent so the therapy could reach the whole cochlea.

Assessing non-human primates a month after treatment, the team didn’t detect signs of the gene therapy in their blood, saliva, or nasal swab samples—confirming the treatment was tailored to the inner ear as hoped and, potentially, had minimal side effects.

A Path Forward

The trial is one of five gene therapy studies tackling inherited deafness.

In October last year, a team in China gave five children with otoferlin genetic defects a healthy version of the gene. In a few months, a six-year-old girl, Yiyi, was able to hear sounds at roughly the volume of a whisper, according to MIT Technology Review.

The gene therapy isn’t for everyone with hearing loss. Otoferlin mutations make up about three percent of cases of inherited deafness. Most children with the mutation don’t completely lose their hearing and are given cochlear implants to compensate at an early age. It’s still unclear if the treatment also helps improve their hearing. However, a similar strategy could potentially be used for others with genetic hearing disorders.

For Yiyi and Aissam, who never had cochlear implants, the gene therapy is a life-changer. Sounds were terrifying at first. Yiyi heard traffic noises as she slept at night for the first time, saying it’s “too noisy.” Aissam is still learning to incorporate the new experience into his everyday life—a bit like learning a new superpower. His favorite sounds? “People,” he said through sign language.

Image Credit: tung256 / Pixabay

Dreams May Have Played a Crucial Role in Our Evolutionary Success as a Species

Have you ever woken from a dream, emotionally laden with anxiety, fear, or a sense of unpreparedness? Typically, these kinds of dreams are associated with content like losing one’s voice, teeth falling out, or being chased by a threatening being.

But one question I’ve always been interested in is whether or not these kinds of dreams are experienced globally across many cultures. And if some features of dreaming are universal, could they have enhanced the likelihood of our ancestors surviving the evolutionary game of life?

My research focuses on the distinctive characteristics that make humans the most successful species on Earth. I’ve explored the question of human uniqueness by comparing Homo sapiens with various animals, including chimpanzees, gorillas, orangutans, lemurs, wolves, and dogs. Recently, I’ve been part of a team of collaborators that has focused our energies on working with small-scale societies known as hunter-gatherers.

We wanted to explore how the content and emotional function of dreams might vary across different cultural contexts. By comparing dreams from forager communities in Africa to those from Western societies, we wanted to understand how cultural and environmental factors shape the way people dream.

Comparative Dream Research

As part of this research, published in Nature Scientific Reports, my colleagues and I worked closely for several months with the BaYaka in the Democratic Republic of Congo and the Hadza in Tanzania to record their dreams. For Western dreamers, we recorded dream journals and detailed dream accounts, collected between 2014 and 2022, from people living in Switzerland, Belgium, and Canada.

The Hadza of Tanzania and the BaYaka of Congo fill a crucial, underexplored gap for dream research due to their distinct lifestyle. Their egalitarian culture, emphasizing equality and cooperation, is vital for survival, social cohesion, and well-being. These forager communities rely heavily on supportive relationships and communal sharing of resources.

Higher mortality rates due to disease, intergroup conflict, and challenging physical environments in these communities (without the kind of social safety nets common to post-industrial societies in the West) mean they rely on face-to-face relationships for survival in a way that is a distinct feature of forager life.

Dreaming Across Cultures

While studying these dreams, we began to notice a common theme: dreams play out much differently across different socio-cultural environments. We used a new software tool that maps dream content by connecting important psychosocial constructs and theories with words, phrases, and other linguistic constructions. That gave us an understanding of the kinds of dreams people were having. And we could model these statistically to test scientific hypotheses about the nature of dreams.

The dreams of the BaYaka and Hadza were rich in community-oriented content, reflecting the strong social bonds inherent in their societies. This was in stark contrast to the themes prevalent in dreams from Western societies, where negative emotions and anxiety were more common.

Interestingly, while dreams from these forager communities often began with threats reflecting the real dangers they face daily, they frequently concluded with resolutions involving social support. This pattern suggests that dreams might play a crucial role in emotional regulation, transforming threats into manageable situations and reducing anxiety.

Here is an example of a Hadza dream laden with emotionally threatening content:

“I dreamt I fell into a well that is near the Hukumako area by the Dtoga people. I was with two others and one of my friends helped me get out of the well.”

Notice that the resolution of the dream’s challenge was a social one. Now, contrast this with the nightmare-disorder-diagnosed dreamers from Europe. They had scarier, open-ended narratives with less positive dream resolutions. Specifically, we found they had higher levels of dream content with negative emotions compared to the “normal” controls. Conversely, the Hadza exhibited significantly fewer negative emotions in their dreams. These are the kinds of nightmares reported:

“My mom would call me on my phone and ask me to put it on speakerphone so my sister and cousin could hear. Crying she announced to us that my little brother was dead. I was screaming in sadness and crying in pain.”

“I was with my boyfriend, our relationship was perfect and I felt completely fulfilled. Then he decided to abandon me, which awoke in me a deep feeling of despair and anguish.”

The Functional Role of Dreams

Dreams are wonderfully varied. But what if one of the keys to humanity’s success as a species rests in our dreams? What if something was happening in our dreams that improved the survival and reproductive efforts of our Paleolithic ancestors?

A curious note from my comparative work: of all the primates alive, humans sleep the least, but we have the most REM. Why was REM—the state most often associated with dreams—so protected while evolution was whittling away our sleep? Perhaps something embedded in dreaming itself was prophylactic for our species?

Our research supports previous notions that dreams are not just random firings of a sleeping brain but may play a functional role in our emotional well-being and social cognition. They reflect the challenges and values of our waking life, offering insights into how we process emotions and threats. In forager societies, dreams often conclude with resolutions involving social support, suggesting that dreams might serve as a psychological mechanism for reinforcing social bonds and community values.

Why Dream?

The ultimate purpose of dreaming is still a subject of ongoing research and debate. Yet these themes seem to harbor within them universals that hint at some crucial survival function.

Some theories suggest that dreaming acts like a kind of virtual reality that serves to simulate threatening or social situations, helping individuals prepare for real-life challenges.

If this is indeed the case, then it’s possible that the dreams of our ancestors, who roamed the world in the distant Paleolithic era, played a crucial role in enhancing the cooperation that contributed to their survival.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Johannes Plenio / Unsplash

Scientists Coax Bacteria Into Making Exotic Proteins Not Found in Nature

Nature has a set recipe for making proteins.

Triplets of DNA letters translate into 20 molecules called amino acids. These basic building blocks are then variously strung together into the dizzying array of proteins that makes up all living things. Proteins form body tissues, revitalize them when damaged, and direct the intricate processes keeping our bodies’ inner workings running like well-oiled machines.

Studying the structure and activity of proteins can shed light on disease, propel drug development, and help us understand complex biological processes, such as those at work in the brain or aging. Proteins are becoming essential in non-biological contexts too—for example, in the manufacturing of climate-friendly biofuels.

Yet with only 20 molecular building blocks, evolution essentially put a limit on what proteins can do. So, what if we could expand nature’s vocabulary?

By engineering new amino acids not seen in nature and incorporating them into living cells, exotic proteins could do more. For example, adding synthetic amino acids to protein-based drugs—such as those for immunotherapy—could slightly tweak their structure so they last longer in the body and are more effective. Novel proteins could also open the door to new chemical reactions that chew up plastics, or to more easily degradable materials with different properties.

But there’s a problem. Exotic amino acids aren’t always compatible with a cell’s machinery.

A new study in Nature, led by synthetic biology expert Dr. Jason Chin at the Medical Research Council Laboratory of Molecular Biology in Cambridge, UK, brought the dream a bit closer. Using a newly developed molecular screen, they found and inserted four exotic amino acids into a protein inside bacteria cells. An industrial favorite for churning out insulin and other protein-based medications, the bacteria readily accepted the exotic building blocks as their own.

All the newly added components are different from the cell’s natural ones, meaning the additions didn’t interfere with the cell’s normal functions.

“It’s a big accomplishment to get these new categories of amino acids into proteins,” Dr. Chang Liu at the University of California, Irvine who was not part of the study, told Science.

A Synthetic Deadlock

Adding exotic amino acids into a living thing is a nightmare.

Picture the cell as a city, with multiple “districts” performing their own functions. The nucleus, shaped like the pit of an apricot, houses our genetic blueprint recorded in DNA. Outside the nucleus, protein-making factories called ribosomes churn away. Meanwhile, RNA messengers buzz between the two like high-speed trains shuttling genetic information to be made into proteins.

Like DNA, RNA has four molecular letters. Each three-letter combination forms a “word” encoding an amino acid. The ribosome reads each word and summons the associated amino acid to the factory using transfer RNA (tRNA) molecules to grab onto them.

The tRNA molecules are tailor-made to pick up particular amino acids with a kind of highly specific protein “glue.” Once shuttled into the ribosome, the amino acid is plucked off its carrier molecule and stitched into an amino acid string that curls into intricate protein shapes.
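
For illustration only, the ribosome’s lookup step can be mimicked with a table mapping RNA triplets to amino acids (a heavily truncated version of the real 64-codon table):

```python
# Translating RNA triplets into amino acids with a truncated codon
# table; the real table covers 64 codons, 20 amino acids, and stops.
CODON_TABLE = {
    "AUG": "Met",  # also the start codon
    "UUU": "Phe", "GGC": "Gly", "GCU": "Ala",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(rna):
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino = CODON_TABLE.get(rna[i:i+3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return "-".join(protein)

print(translate("AUGUUUGGCGCUUAA"))  # Met-Phe-Gly-Ala
```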

Clearly, evolution has established a sophisticated system for the manufacture of proteins. Not surprisingly, adding synthetic components isn’t straightforward.

Back in the 1980s, scientists found a way to attach synthetic amino acids to a carrier inside a test tube. More recently, they’ve incorporated unnatural amino acids into proteins inside bacteria cells by hijacking their own inner factories without affecting normal cell function.

Beyond bacteria, Chin and colleagues previously hacked tRNA and its corresponding “glue”—called tRNA synthetase—to add exotic amino acids to proteins in mouse brain cells.

Rewiring the cell’s protein building machinery, without breaking it, takes a delicate balance. The cell needs modified tRNA carriers to grab new amino acids and drag them to the ribosome. The ribosome then must recognize the synthetic amino acid as its own and stitch it into a functional protein. If either step stumbles, the engineered biological system fails.

Expanding the Genetic Code

The new study focused on the first step—engineering better carriers for exotic amino acids.

The team first mutated genes for the “glue” protein and generated millions of potential alternative versions. Each of these variants could potentially grab onto exotic building blocks.

To narrow the field, they turned to tRNA molecules, the carriers of amino acids. Each tRNA carrier was tagged with a bit of genetic code that attached to mutated “glue” proteins like a fishing hook. The effort found eight promising pairs out of millions of potential structures. Another screen zeroed in on a group of “glue” proteins that could grab onto multiple types of artificial protein building blocks—including those highly different from natural ones.

The team then inserted genes encoding these proteins into Escherichia coli bacteria cells, a favorite for testing synthetic biology recipes.

Overall, eight “glue” proteins successfully loaded exotic amino acids into the bacteria’s natural protein-making machinery. Many of the synthetic building blocks had strange backbone structures not generally compatible with natural ribosomes. But with the help of engineered tRNA and “glue” proteins, the ribosomes incorporated four exotic amino acids into new proteins.

The results “expand the chemical scope of the genetic code” for making new types of materials, the team explained in their paper.

A Whole New World

Scientists have already found hundreds of exotic amino acids. AI models such as AlphaFold or RoseTTAFold, and their variations, are likely to spawn even more. Finding carriers and “glue” proteins that match has always been a roadblock.

The new study establishes a method to speed up the search for new designer proteins with unusual properties. For now, the method can only incorporate four synthetic amino acids. But scientists are already envisioning uses for them.

Protein drugs made from these exotic amino acids are shaped differently than their natural counterparts, protecting them from decay inside the body. This means they last longer, lessening the need for multiple doses. A similar system could churn out new materials such as biodegradable plastics, which, like proteins, rely on stitching individual components together.

For now, the technology relies on the ribosome’s tolerance of exotic amino acids—which can be unpredictable. Next, the team wants to modify the ribosome itself to better tolerate strange amino acids and their carriers. They’re also looking to create protein-like materials made completely of synthetic amino acids, which could augment the function of living tissues.

“If you could encode the expanded set of building blocks in the same way that we can proteins, then we could turn cells into living factories for the encoded synthesis of polymers for everything from new drugs to materials,” said Chin in an earlier interview. “It’s a super-exciting field.”

Image Credit: National Institute of Allergy and Infectious Diseases, National Institutes of Health

IMF Says AI Will Upend Jobs and Boost Inequality. MIT CSAIL Says Not So Fast.

The impact that AI could have on the economy is a hot topic following rapid advances in the technology. But two recent reports present conflicting pictures of what this could mean for jobs.

Ever since a landmark 2013 study from Oxford University researchers predicted that 47 percent of US jobs were at risk of computerization, the prospect that rapidly improving AI could cause widespread unemployment has been front and center in debates around the technology.

Reports forecasting which tasks, which professions, and which countries are most at risk have been a dime a dozen. But two recent studies from prominent institutions that reach very different conclusions are worth noting.

Last week, researchers at the International Monetary Fund suggested that as many as 40 percent of jobs worldwide could be impacted by AI, and the technology will most likely worsen inequality. But today, a study from MIT CSAIL noted that just because AI can do a job doesn’t mean it makes economic sense, and therefore, the rollout is likely to be slower than many expect.

The IMF analysis follows a similar approach to many previous studies by examining the “AI exposure” of various jobs. This involves breaking jobs down into a bundle of tasks and assessing which ones could potentially be replaced by AI. The study goes a step further though, considering which jobs are likely to be shielded from AI’s effects. For instance, many of a judge’s tasks are likely to be automatable, but society is unlikely to be comfortable delegating this kind of job to AI.
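
In code, this kind of exposure scoring reduces to the share of a job’s tasks that are automatable. The sketch below is a toy illustration with invented tasks, not the IMF’s actual methodology:

```python
# Toy "AI exposure" score: a job is a bundle of tasks, each marked
# automatable or shielded; exposure is the automatable share.
def exposure(tasks):
    return sum(t["automatable"] for t in tasks) / len(tasks)

judge = [
    {"name": "review case law",     "automatable": True},
    {"name": "draft rulings",       "automatable": True},
    {"name": "preside over trials", "automatable": False},
    {"name": "sentence defendants", "automatable": False},
]
print(f"{exposure(judge):.0%}")  # 50% -> before social-acceptability shielding
```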

The study found that roughly 40 percent of jobs globally are exposed to AI. But the authors predict that advanced economies could see an even greater impact, with nearly 60 percent of jobs being upended by the technology. While around half of affected jobs are likely to see AI enhance the work of humans, the other half could see AI replacing tasks, leading to lower wages and reduced hiring.

In emerging markets and low-income countries, the figures are 40 percent and 26 percent, respectively. But while that could protect them from some of the destabilizing effects on the job market, it also means these economies are less able to reap the benefits of AI, potentially leading to increasing inequality at a global scale.

Similar dynamics are likely to play out within countries as well, according to the analysis, with some able to harness AI to boost their productivity and wages while others lose out. In particular, the researchers suggest that older workers are likely to struggle to adapt to the new AI-powered economy.

While the report provides a mixture of positive and negative news, in most of the scenarios considered AI seems likely to worsen inequality, the authors say. This means that policymakers need to start planning now for the potential impact, including by beefing up social safety nets and retraining programs.

The study from MIT CSAIL paints a different picture though. The authors take issue with the standard approach of measuring AI exposure, because they say it doesn’t take account of the economic or technical feasibility of replacing tasks carried out by humans with AI.

They point to the hypothetical example of a bakery considering whether to invest in computer vision technology to check ingredients for quantity and spoilage. While technically feasible, this task only accounts for roughly six percent of a baker’s duties. In a small bakery with five bakers earning a typical salary of $48,000, this could potentially save the company $14,000 per year, clearly far less than the cost of developing and deploying the technology.
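
The arithmetic behind that figure is easy to reproduce with the numbers as given in the example:

```python
# Reproducing the bakery example's arithmetic (numbers as stated).
bakers = 5
salary = 48_000  # dollars per year per baker
share = 0.06     # fraction of duties computer vision could take over
print(bakers * salary * share)  # 14400.0 -> the roughly $14,000 quoted
```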

That prompted them to take a more economically grounded approach to assessing AI’s potential impact on the job market. First, they carried out surveys with workers to understand what performance would be required of an AI system. They then modeled the cost of building a system that could live up to those metrics, before using this to work out whether automation would be attractive in that scenario.

They focused on computer vision, as cost models are more developed for this branch of AI. They found that the large upfront cost of deploying AI meant that only 23 percent of work supposedly “exposed” to AI would actually make sense to automate. While that’s not insignificant, they say it would translate to a much slower rollout of the technology than others have predicted, suggesting that job displacement will be gradual and easier to deal with.

Obviously, most of the focus these days is on the job-destroying potential of large language models rather than computer vision systems. But despite their more general nature, the researchers say these models will still need to be fine-tuned for specific jobs (at some expense), and so they expect the economics to be comparable.

Ultimately, who is right is hard to say right now. But it seems prudent to prepare for the worst while simultaneously trying to better understand what the true impact of this disruptive technology could be.

Image Credit: Mohamed Nohassi / Unsplash

This Week’s Awesome Tech Stories From Around the Web (Through January 20)

ARTIFICIAL INTELLIGENCE

Mark Zuckerberg’s New Goal Is Creating Artificial General Intelligence
Alex Heath | The Verge
“Fueling the generative AI craze is a belief that the tech industry is on a path to achieving superhuman, god-like intelligence. OpenAI’s stated mission is to create this artificial general intelligence, or AGI. Demis Hassabis, the leader of Google’s AI efforts, has the same goal. Now, Meta CEO Mark Zuckerberg is entering the race.”

ROBOTICS

Why Everyone’s Excited About Household Robots Again
Melissa Heikkilä | MIT Technology Review
“Robotics is at an inflection point, says Chelsea Finn, an assistant professor at Stanford University, who was an advisor for the [Mobile ALOHA] project. In the past, researchers have been constrained by the amount of data they can train robots on. Now there is a lot more data available, and work like Mobile ALOHA shows that with neural networks and more data, robots can learn complex tasks fairly quickly and easily, she says.”

ENERGY

Global Emissions Could Peak Sooner Than You Think
Hannah Ritchie | Wired
“Every November, the Global Carbon Project publishes the year’s global CO2 emissions. It’s never good news. At a time when the world needs to be reducing emissions, the numbers continue to climb. However, while emissions have been moving in the wrong direction, many of the underpinning economic forces that drive them have been going the right way. This could well be the year when these various forces push hard enough to finally tip the balance.”

BIOTECH

Meet ReTro, the First Cloned Rhesus Monkey to Reach Adulthood
Miryam Naddaf | Nature Magazine
“For the first time, a cloned rhesus monkey (Macaca mulatta) has lived into adulthood—surviving for more than two years so far. The feat, described [this week] in Nature Communications, marks the first successful cloning of the species. It was achieved using a slightly different approach from the conventional technique that was used to clone Dolly the sheep and other mammals, including long-tailed macaques (Macaca fascicularis), the first primates to be cloned.”

VIRTUAL REALITY

I Literally Spoke With Nvidia’s AI-Powered Video Game NPCs
Sean Hollister | The Verge
“What if you could just… speak…to video game characters? Ask your own questions, with your own voice, instead of picking from preset phrases? Last May, Nvidia and its partner Convai showed off a fairly unconvincing canned demo of such a system—but this January, I got to try a fully interactive version for myself at CES 2024. I walked away convinced we’ll inevitably see something like this in future games.”

FUTURE

What Does Ukraine’s Million-Drone Army Mean for the Future of War?
David Hambling | New Scientist
“Ukraine’s president Volodymyr Zelensky has promised that in 2024 the country’s military will have a million drones. His nation already deploys hundreds of thousands of small drones, but this is a major change—a transition to a military with more drones than soldiers. What does that mean for the future of war?”

SPACE

Japan Reaches the Moon, but the Fate of Its Precision Lander Is Uncertain
Jonathan O’Callaghan | Scientific American
“…JAXA officials revealed that although SLIM is in contact with mission controllers and accurately responding to commands, the lander’s solar panels are not generating power, and much of the gathered data onboard the spacecraft have yet to be returned to Earth. The mission is consequently operating on batteries, which have the capacity to power its operations for several hours. After SLIM drains its batteries, its operations will cease—but the spacecraft may reawaken if its solar power supply can be restored.”

TRANSPORTATION

NASA Unveils X-59 Plane to Test Supersonic Flight Over US Cities
Matthew Sparkes | New Scientist
“‘Concorde’s sound would have been like thunder right overhead or a balloon popping right next to you, whereas our sound will be more of a thump or a rumble, more consistent with distant thunder or your neighbor’s car door down the street being closed,’ says Bahm. ‘We think that it’ll more blend into the background of everyday life than the Concorde did.'”

AUTOMATION

NASA’s Robotic, Self-Assembling Structures Could Be the Next Phase of Space Construction
Devin Coldewey | TechCrunch
“Bad news if you want to move to the moon or Mars: housing is a little hard to come by. Fortunately, NASA (as always) is thinking ahead, and has just shown off a self-assembling robotic structure that might just be a crucial part of moving off-planet. …The basic idea of the self-building structure is in a clever synergy between the building material—cuboctahedral frames they call voxels—and the two types of robots that assemble them.”

Image Credit: ZENG YILI / Unsplash

Mac at 40: Apple’s Love Affair With User Experience Sparked a Tech Revolution


Technology innovation requires solving hard technical problems, right? Well, yes. And no. As the Apple Macintosh turns 40, what began as Apple prioritizing the squishy concept of “user experience” in its 1984 flagship product has been clearly vindicated by the blockbuster products that followed.

It turns out that designing for usability, efficiency, accessibility, elegance, and delight pays off. Apple’s market capitalization is now over $2.8 trillion, and its brand is every bit as associated with the term “design” as the best New York or Milan fashion houses are. Apple turned technology into fashion, and it did it through user experience.

It began with the Macintosh.

When Apple announced the Macintosh personal computer with a Super Bowl XVIII television ad on Jan. 22, 1984, it more resembled a movie premiere than a technology release. The commercial was, in fact, directed by filmmaker Ridley Scott. That’s because founder Steve Jobs knew he was not selling just computing power, storage, or a desktop publishing solution. Rather, Jobs was selling a product for human beings to use, one to be taken into their homes and integrated into their lives.

Apple’s 1984 Super Bowl commercial is as iconic as the product it introduced.

This was not about computing anymore. IBM, Commodore, and Tandy did computers. As a human-computer interaction scholar, I believe that the first Macintosh was about humans feeling comfortable with a new extension of themselves, not as computer hobbyists but as everyday people. All that “computer stuff”—circuits and wires and separate motherboards and monitors—was neatly packaged and hidden away within one sleek integrated box.

You weren’t supposed to dig into that box, and you didn’t need to dig into that box—not with the Macintosh. The everyday user wouldn’t think about the contents of that box any more than they thought about the stitching in their clothes. Instead, they would focus on how that box made them feel.

Beyond the Mouse and Desktop Metaphor

As computers go, was the Macintosh innovative? Sure. But not for any particular computing breakthrough. The Macintosh was not the first computer to have a graphical user interface or employ the desktop metaphor: icons, files, folders, windows, and so on. The Macintosh was not the first personal computer meant for home, office, or educational use. It was not the first computer to use a mouse. It was not even the first computer from Apple to be or have any of these things. The Apple Lisa, released a year before, had them all.

It was not any one technical thing that the Macintosh did first. But the Macintosh brought together numerous advances that were about giving people an accessory—not for geeks or techno-hobbyists, but for home office moms and soccer dads and eighth grade students who used it to write documents, edit spreadsheets, make drawings, and play games. The Macintosh revolutionized the personal computing industry and everything that was to follow because of its emphasis on providing a satisfying, simplified user experience.

Where computers typically had complex input sequences in the form of typed commands (Unix, MS-DOS) or multi-button mice (Xerox STAR, Commodore 64), the Macintosh used a desktop metaphor in which the computer screen presented a representation of a physical desk surface. Users could click directly on files and folders on the desktop to open them. It also had a one-button mouse that allowed users to click, double click, and drag and drop icons without typing commands.

The Xerox Alto had first exhibited the concept of icons, invented in David Canfield Smith’s 1975 PhD dissertation. The 1981 Xerox Star and 1983 Apple Lisa had used desktop metaphors. But these systems had been slow to operate and still cumbersome in many aspects of their interaction design.

The Macintosh simplified the interaction techniques required to operate a computer and improved performance to reasonable speeds. Complex keyboard commands and dedicated keys were replaced with point-and-click operations, pull-down menus, draggable windows and icons, and systemwide undo, cut, copy, and paste. Unlike with the Lisa, the Macintosh could run only one program at a time, but this simplified the user experience.

Apple cofounder Steve Jobs introduced the Macintosh in 1984.

The Macintosh also provided a user interface toolbox for application developers, enabling applications to have a standard look and feel by using common interface widgets such as buttons, menus, fonts, dialog boxes, and windows. With the Macintosh, the learning curve for users was flattened, allowing people to feel proficient in short order. Computing, like clothing, was now for everyone.

A Good Experience

Although I hesitate to use the clichés “natural” or “intuitive” when it comes to fabricated worlds on a screen—nobody is born knowing what a desktop window, pull-down menu, or double click is—the Macintosh was the first personal computer to make user experience the driver of technical achievement. It was indeed simple to operate, especially compared with command-line computers at the time.

Whereas prior systems prioritized technical capability, the Macintosh was intended for nonspecialist users—at work, school, or in the home—to experience a kind of out-of-the-box usability that today is the hallmark of not only most Apple products but an entire industry’s worth of consumer electronics, smart devices, and computers of every kind.

According to Market Growth Reports, the market for user experience tools and services was worth $548.91 million in 2023 and is expected to reach $1.36 billion by 2029. User experience companies provide software and services to support usability testing, user research, voice-of-the-customer initiatives, and user interface design, among many other user experience activities.

Rarely today do consumer products succeed in the market based on functionality alone. Consumers expect a good user experience and will pay a premium for it. The Macintosh started that obsession and demonstrated its centrality.

It is ironic that the Macintosh technology being commemorated in January 2024 was never really about technology at all. It was always about people. This is inspiration for those looking to make the next technology breakthrough, and a warning to those who would dismiss the user experience as only of secondary concern in technological innovation.

Author disclosure statement: I have had two PhD students receive Apple PhD AI/ML Fellowships. This funding does not support me personally, but supports two of the PhD students that I have advised. They obtained these fellowships through competitive submissions to Apple based on an open solicitation.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: The original Macintosh computer may seem quaint today, but the way users interacted with it triggered a revolution 40 years ago. Mark Mathosian/Flickr, CC BY-NC-SA

Why What We Decide to Name New Technologies Is So Crucial


Back in 2017, my editor published an article titled “The Next Great Computer Interface Is Emerging—But It Doesn’t Have a Name Yet.” Seven years later, which may as well be a hundred in technology years, that headline hasn’t aged a day.

Last week, UploadVR broke the news that Apple won’t allow developers for their upcoming Vision Pro headset to refer to applications as VR, AR, MR, or XR. For the past decade, the industry has variously used terms like virtual reality (VR), augmented reality (AR), mixed reality (MR), and extended reality (XR) to describe technologies that include things like VR headsets. Apple, however, is making it clear that developers should refer to their apps as “spatial” or use the term “spatial computing.” They’re also asking developers not to refer to the device as a headset (whoops). Apple calls it a “spatial computer,” and VR mode is simply “fully immersive.”

It remains to be seen whether Apple will strictly enforce these rules, but the news sparked a colorful range of reactions from industry insiders. Some amusingly questioned what an app like VRChat, one of the most popular platforms in the industry with millions of monthly active users, should do. Others waded into the intersection of philosophy of language and branding to probe Apple’s broader marketing strategy.

Those who have worked in this area are certainly aware of the longstanding absurdity of relying on an inconsistent patchwork of terms.

While no one company has successfully forced linguistic consensus yet, this is certainly not the first time a company has set out to define this category in the minds of consumers.

In 2017, as Google first started selling VR devices, they attempted to steer the industry toward the term “immersive computing.” Around the same time Microsoft took aim at branding supremacy by fixating on the label “mixed reality.” And everyone will remember that Facebook changed the company’s name in an effort to define the broader industry as “the metaverse.”

The term spatial computing is certainly not an Apple invention. It’s thought to have been first introduced in the modern sense by MIT’s Simon Greenwold in his 2003 thesis, and it has been in use for much of the past decade. Like many others, I’ve long found the term to be the most useful at capturing the main contribution of these technologies—that they make use of three-dimensional space to develop interfaces that are more intuitive for our nervous systems.

A winding etymological journey for a technology is also not unique to computer interfaces. All new technologies cycle through ever-evolving labels that often start by relating them to familiar concepts. The word “movie” began life as “moving picture” to describe how a collection of still images seemed to “move,” like flipping through a picture book. In the early 1900s, the shorter slang term movie appeared in comic strips and quickly caught on with the public. Before the term “computer” referred to machines, it described a person whose job was to perform mathematical calculations. And the first automobiles were introduced to the public as “horseless carriages,” which should remind us of today’s use of the term “driverless car.”

Scholars of neuroscience, linguistics, and psychology will be especially familiar with the ways in which language—and the use of words—can impact how we relate to the world. When a person hears a word, a rich network of interconnected ideas, images, and associations is activated in their mind. In that sense, words can be thought of as bundles of concepts and a shortcut to making sense of the world.

The challenge with labeling emerging technologies is that they can be so new to our experience that our brains haven’t yet constructed a fixed set of bundled concepts to relate them to.

The word “car,” for example, brings to mind attributes like “four wheels,” “steering wheel,” and “machine used to move people around.” Over time, bundles of associations like these become rooted in the mind as permanent networks of relationships that help us quickly process our environment. But this can also create blind spots, leading us to overlook disruptions when the environment changes. Referring to autonomous driving technology as “driverless cars” might lead someone to overlook a “driverless car” small enough to carry packages on a sidewalk. It’s the same technology, but not one most people would refer to as a car.

This might sound like useless contemplation on the role of semantics, but the words we use have real implications on the business of emerging technologies. In 1980, AT&T hired the consultancy McKinsey to predict how many people would be using mobile phones by the year 2000. Their analysis estimated no more than 900,000 devices by the turn of the century, and because of the advice, AT&T exited the hardware business. Twenty years later, they recognized how unhelpful that advice had been as 900,000 phones were being sold every three days in North America alone.

While in no way defending their work, I hold the opinion that in some ways McKinsey wasn’t wrong. Both AT&T and McKinsey may have been misled by the bundle of concepts the word “mobile phone” would have elicited in the year 1980. At that time, devices were large, as heavy as ten pounds or more, cost thousands of dollars, and had a painfully short battery life. There certainly wasn’t a large market for those phones. A better project for AT&T and McKinsey might have been to explore what the term “mobile phone” would even refer to in 20 years—devices that turned out to be practical, compact, and affordable.

A more recent example might be the term “metaverse.” A business operations person focused on digital twins has a very different bundle of associations in their mind when hearing the word metaverse than a marketing person focused on brand activations in virtual worlds like Roblox. I’ve worked with plenty of confused senior leaders who have been pitched very different kinds of projects carrying the label “metaverse,” leading to uncertainty about what the term really means.

As for our as-yet-unnamed 3D computing interfaces, it’s still unclear what label will conquer the minds of mainstream consumers. During an interview with Matt Miesnieks, a serial entrepreneur and VC, about his company 6D.ai—which was later sold to Niantic—I asked what we might end up calling this stuff. Six years after that discussion, I’m reminded of his response.

“Probably whatever Apple decides to call it.”

Image Credit: James Yarema / Unsplash

Google DeepMind’s New AI Matches Gold Medal Performance in Math Olympics


After cracking a previously “unsolvable” mathematics problem last year, AI is back to tackle geometry.

Developed by Google DeepMind, a new algorithm, AlphaGeometry, can crush problems from past International Mathematical Olympiads—a top-level competition for high schoolers—and matches the performance of previous gold medalists.

When challenged with 30 difficult geometry problems, the AI successfully solved 25 within the standard allotted time, beating previous state-of-the-art algorithms by 15 answers.

While often considered the bane of high school math class, geometry is embedded in our everyday life. Art, astronomy, interior design, and architecture all rely on geometry. So do navigation, maps, and route planning. At its core, geometry is a way to describe space, shapes, and distances using logical reasoning.

In a way, solving geometry problems is a bit like playing chess. Given some rules—called theorems and proofs—there’s a limited number of valid moves at each step, but finding the one that makes sense relies on flexible reasoning that conforms to stringent mathematical rules.

In other words, tackling geometry requires both creativity and structure. While humans develop these mental acrobatic skills through years of practice, AI has always struggled.

AlphaGeometry cleverly combines both features into a single system. It has two main components: a rule-bound logical model that attempts to find an answer and a large language model that generates out-of-the-box ideas. If the AI fails to find a solution based on logical reasoning alone, the language model kicks in to provide new angles. The result is an AI with both creativity and reasoning skills that can explain its solution.

The system is DeepMind’s latest foray into solving mathematical problems with machine intelligence. But their eyes are on a larger prize. AlphaGeometry is built for logical reasoning in complex environments—such as our chaotic everyday world. Beyond mathematics, future iterations could potentially help scientists find solutions in other complicated systems, such as deciphering brain connections or unraveling genetic webs that lead to disease.

“We’re making a big jump, a big breakthrough in terms of the result,” study author Dr. Trieu Trinh told the New York Times.

Double Team

A quick geometry question: Picture a triangle with two sides equal in length. How do you prove the bottom two angles are exactly the same?

This is one of the first challenges AlphaGeometry faced. To solve it, you need to fully grasp the rules of geometry, but you also need the creativity to inch toward the answer.
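
For the curious, here is the classic proof sketch for that problem. Note that it hinges on exactly the kind of auxiliary construction—a new line drawn from the apex—that, as described below, AlphaGeometry's language model learns to suggest.

```latex
\textbf{Claim.} In $\triangle ABC$ with $AB = AC$, the base angles are equal:
$\angle ABC = \angle ACB$.

\textbf{Proof sketch.} Let $M$ be the midpoint of $BC$ and draw segment $AM$.
\begin{align*}
AB &= AC && \text{(given)} \\
BM &= MC && \text{($M$ is the midpoint of $BC$)} \\
AM &= AM && \text{(shared side)}
\end{align*}
By side-side-side congruence, $\triangle ABM \cong \triangle ACM$, so
$\angle ABM = \angle ACM$, which is to say $\angle ABC = \angle ACB$. $\blacksquare$
```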

“Proving theorems showcases the mastery of logical reasoning…signifying a remarkable problem-solving skill,” the team wrote in research published today in Nature.

Here’s where AlphaGeometry’s architecture excels. Dubbed a neuro-symbolic system, it first tackles a problem with its symbolic deduction engine. Imagine these algorithms as a grade-A student who strictly studies math textbooks and follows rules. They’re guided by logic and can easily lay out every step leading to a solution—like explaining a line of reasoning in a math test.

These systems are old school but incredibly powerful, in that they don’t have the “black box” problem that haunts many modern deep learning algorithms.

Deep learning has reshaped our world. But due to how these algorithms work, they often can’t explain their output. This just won’t do when it comes to math, which relies on stringent logical reasoning that can be written down.

Symbolic deduction engines counteract the black box problem in that they’re rational and explainable. But faced with complex problems, they’re slow and struggle to flexibly adapt.

Here’s where large language models come in. The driving force behind ChatGPT, these algorithms are excellent at finding patterns in complicated data and generating new solutions, if there’s enough training data. But they often lack the ability to explain themselves, making it necessary to double check their results.

AlphaGeometry combines the best of both worlds.

When faced with a geometry problem, the symbolic deduction engine gives it a go first. Take the triangle problem. The algorithm “understands” the premise of the question, in that it needs to prove the bottom two angles are the same. The language model then suggests drawing a new line from the top of the triangle straight down to the bottom to help solve the problem. Each new element that moves the AI towards the solution is dubbed a “construct.”

The symbolic deduction engine takes the advice and writes down the logic behind its reasoning. If the construct doesn’t work, the two systems go through multiple rounds of deliberation until AlphaGeometry reaches the solution.
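
As a sketch of how such a loop might look in code, here is a minimal, runnable toy in Python. The string-based facts, the two hard-coded congruence rules, and the stand-in “language model” (which always suggests the classic median construction) are all illustrative assumptions—DeepMind's actual engine and model are vastly more capable.

```python
# Toy version of the neuro-symbolic loop: a forward-chaining deduction engine
# alternates with a "language model" that proposes auxiliary constructs.

def deduce_closure(facts, rules):
    """Forward-chain: apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Facts are plain strings; each rule maps a set of premises to one conclusion.
RULES = [
    # SSS congruence for the two half-triangles created by the median AM.
    ({"AB = AC", "BM = MC", "AM = AM"}, "triangle ABM congruent to triangle ACM"),
    # Congruent triangles have equal corresponding angles.
    ({"triangle ABM congruent to triangle ACM"}, "angle ABC = angle ACB"),
]

def llm_propose_construct(facts, goal):
    """Stand-in for the language model: suggest an auxiliary construction.
    Hard-coded here to propose the median from the apex, the classic move."""
    return {"M is the midpoint of BC", "BM = MC", "AM = AM"}

def solve(premises, goal, rules, max_rounds=5):
    facts = set(premises)
    for _ in range(max_rounds):
        facts = deduce_closure(facts, rules)         # fast, rule-bound step
        if goal in facts:
            return facts                             # goal derived: proved
        facts |= llm_propose_construct(facts, goal)  # creative step
    return None

print("proved!" if solve({"AB = AC"}, "angle ABC = angle ACB", RULES) else "stuck")
```

Because every derived fact comes from an explicit rule, a system built this way can print its full chain of reasoning—the kind of verifiable output praised below.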

The whole setup is “akin to the idea of ‘thinking, fast and slow,’” wrote the team on DeepMind’s blog. “One system provides fast, ‘intuitive’ ideas, and the other, more deliberate, rational decision-making.”

We Are the Champions

Unlike text or audio, where training data is plentiful, examples of geometry problems are scarce, which made it difficult to train AlphaGeometry.

As a workaround, the team generated their own dataset featuring 100 million synthetic examples of random geometric shapes and mapped relationships between points and lines—similar to how you solve geometry in math class, but at a far larger scale.

From there, the AI grasped rules of geometry and learned to work backwards from the solution to figure out if it needed to add any constructs. This cycle allowed the AI to learn from scratch without any human input.
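
Here's a rough sketch of that data-generation cycle, reusing the deduce_closure helper from the previous snippet. The random sampling of starting facts is a stand-in for the paper's sampling of random geometric diagrams, which happens at vastly larger scale.

```python
import random

def random_premises(fact_pool, k=3):
    """Sample a random starting configuration from a pool of candidate facts
    -- a stand-in for sampling random geometric diagrams."""
    return set(random.sample(fact_pool, k))

def generate_training_pairs(fact_pool, rules, n_examples=1_000):
    """Run deduction forward on random premises and harvest every
    (premises -> conclusion) pair as one synthetic proof problem.
    The real pipeline additionally traces each proof backwards to find
    which auxiliary facts (constructs) it needed -- the signal used to
    train the language model."""
    examples = []
    for _ in range(n_examples):
        premises = random_premises(fact_pool)
        derived = deduce_closure(premises, rules)  # from the sketch above
        for conclusion in derived - premises:
            examples.append((frozenset(premises), conclusion))
    return examples
```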

Putting the AI to the test, the team challenged it with 30 Olympiad problems from over a decade of previous competitions. The generated results were evaluated by a previous Olympiad gold medalist, Evan Chen, to ensure their quality.

In all, the AI matched the performance of past gold medalists, completing 25 problems within the time limit. The previous state-of-the-art result was 10 correct answers.

“AlphaGeometry’s output is impressive because it’s both verifiable and clean,” Chen said. “It uses classical geometry rules with angles and similar triangles just as students do.”

Beyond Math

AlphaGeometry is DeepMind’s latest foray into mathematics. In 2021, their AI cracked mathematical puzzles that had stumped humans for decades. More recently, they used large language models to reason through STEM problems at the college level and cracked a previously “unsolvable” math problem based on a card game with the algorithm FunSearch.

For now, AlphaGeometry is tailored to geometry, and with caveats. Much of geometry is visual, but the system can’t “see” the drawings—an ability that could expedite problem solving. Adding images, perhaps with Google’s Gemini AI, launched late last year, may bolster its geometric smarts.

A similar strategy could also expand AlphaGeometry’s reach to a wide range of scientific domains that require stringent reasoning with a touch of creativity. (Let’s be real—it’s all of them.)

“Given the wider potential of training AI systems from scratch with large-scale synthetic data, this approach could shape how the AI systems of the future discover new knowledge, in math and beyond,” wrote the team.

Image Credit: Joel Filipe / Unsplash 

Psychedelics Rapidly Fight Depression—a New Study Offers a First Hint at Why


Depression is like waking up to a rainy, dreary morning, every single day. Activities that previously lightened the mood lose their joy. Instead, every social interaction and memory is filtered through a negative lens.

This aspect of depression, called negative affective bias, leads to sadness and rumination—where haunting thoughts tumble around endlessly in the brain. Scientists have long sought to help people out of these ruts and back into a positive mindset by rewiring neural connections.

Traditional antidepressants, such as Prozac, cause these changes, but they take weeks or even months to work. In contrast, psychedelics rapidly trigger antidepressant effects with just one shot, and the effects last for months when the drug is administered in a controlled environment and combined with therapy.

Why? A new study suggests these drugs reduce negative affective bias by shaking up the brain networks that regulate emotion.

In rats with low mood, doses of several psychedelics boosted their “outlook on life.” Based on several behavioral tests, ketamine—a party drug known for its dissociative high—and the hallucinogen scopolamine shifted the rodents’ emotional state to neutral.

Psilocybin, the active ingredient in magic mushrooms, further turned the emotional dial towards positivity. Rather than Debbie Downers, these rats adopted a sunny mindset with an openness to further learning, replacing negative thoughts with positive ones.

The study also gave insight into why psychedelics seem to work so fast.

Within a day, ketamine rewired brain circuits that shifted the emotional tone of memories, but not their content. The changes persisted long after the drugs left the body, possibly explaining why a single shot could have lasting antidepressant effects. When the team tested both high and low doses of the psychedelics, the lower doses were especially effective at reversing negative cognitive bias—hinting it may be possible to lower psychedelic doses and still retain the therapeutic effect.

The results could “explain why the effects of a single treatment in human patients can be long-lasting, days (ketamine) to months (psilocybin),” said lead author Emma Robinson in a press release.

A Brainy Road Trip

Psychedelics are experiencing a renaissance. Once maligned as hippie drugs, they’re increasingly being taken seriously by scientists and regulators as potential mental health therapies for depression, post-traumatic stress disorder, and anxiety.

Ketamine paved the way. Often used as anesthesia for farm animals or as a party drug, ketamine caught the attention of neuroscientists for its intriguing action in the brain—especially the hippocampus, which supports memories and emotions.

Our brain cells constantly reshuffle their connections. Called “neural plasticity,” changes in neural networks allow the brain to learn new things and encode memories. When healthy, neurons expand their branches, each dotted with multiple synapses linking to neighbors. In depression, these connections erode, making it more difficult to rewire the brain when faced with new learning or environments.

The hippocampus also gives birth to new neurons in rodents and, arguably, in humans. Like adding transistors to a computer chip, these baby neurons reshape information processing in the brain.

Ketamine spurs both these processes. An earlier study in mice found the drug increases the birth of baby neurons to lower depression. It also rapidly changed neural connections inside established hippocampal networks, making them more plastic. These studies in rodents, along with human clinical trials, prompted the US Food and Drug Administration (FDA) to greenlight a version of the drug in 2019 for people with depression who had tried other antidepressant medications but didn’t respond to them.

While psilocybin and other mind-altering drugs are gaining steam as fast-acting antidepressants, we’re still in the dark on how they work in the brain. The new study followed ketamine’s journey and dug deeper by testing it and other hallucinogens in a furry little critter.

Rat Race

The team started with a group of depressed rats.

Rats aren’t humans. But they are highly intelligent, social creatures that experience a wide range of emotions. They’re empathic towards friends, “laugh” in glee when tickled, and feel low after facing the equivalent of rodent mean girls. Also, scientists can examine their neural networks before and after psychedelic treatments and hunt for changes in their neural connections.

Instead of tackling all aspects of depression, the new study focused on one facet: negative affective bias, which paints life in sad sepia tones. Rats can’t express their emotional states, so a few years back, the same team established a way to measure how they’re “viewing” the world by observing them digging for rewards.

In one trial, the rodents were allowed to dig through different materials—some led to a tasty treat, others not. Eventually, the critters learned which materials they favored and how to choose between the two best options. It’s a bit like learning which door to open to get your midnight snack—freezer for ice cream or fridge for cake.

To induce negativity, the team injected them with two chemicals known to reduce mood. Some animals subsequently also had a dose of psilocybin, ketamine, or scopolamine, whereas others got salt water as a control.

When faced with their two favorites, the depressed rats given salt water didn’t seem to care. Despite knowing digging would lead to a treat, they were sluggish when going for their preferred material. It’s like trying to get out of bed when depressed, even though you know you have to eat.

This is “consistent with a negatively biased memory,” the team wrote.

In contrast, depressed rats given a shot of psychedelics acted as they normally would. They went after their favorite pick without a thought. They did experience a “high,” shaking their fur like a wet dog—a common sign of the drugs’ effects.

Psychedelics can tamper with memory. To make sure that wasn’t the case here, the team redid the test but without triggering any emotional bias. Rats treated with a low dose of psychedelics shifted their mood towards positivity, without notable side effects. However, higher doses of ketamine inhibited their ability to learn, suggesting there may be an overall effect on memory, rather than mood itself.

Psilocybin stood out amongst the group. When given before a test, the drug shifted the animals’ choices past neutral towards happier outcomes. Even when depressed, they eagerly dug through their favorite materials, knowing it would lead to a reward. Conventional antidepressants can shift negative bias back to neutral, but they don’t change existing memories. Psilocybin seems to be able to “paint over” darker memories—at least in rats.

In a final test, the team directly injected ketamine into the frontal parts of depressed rats’ brains. This region connects extensively with the brain’s memory and emotional centers. The treatment also shifted the rodents’ negative mood towards a neutral one.

To be very clear: The negative bias in the study was induced by chemicals and is not an exact replica of human emotions. It’s also hard to gauge a rat’s emotional state. But the study gave insight into how brain networks change with psychedelics, which could help researchers develop drugs that mimic these chemicals without the high.

“One thing we are now trying to understand is whether these dissociative or hallucinogenic effects involve the same or different underlying mechanisms and whether it might be possible to have rapid-acting antidepressants without these other effects,” said the team.

Image Credit: Diane Serik / Unsplash