How Artificial Immune Systems May Be the Future of Cybersecurity


2015 was a year of jaw-dropping hacks.

From CIA director John Brennan’s private email to Sony Pictures, from the IRS to CVS, from Target to the notorious Ashley Madison, millions of people suffered from cybersecurity breakdowns across industries. According to the Ponemon Institute, the average cost of damages from data breaches in the US hit a staggering $6.5 million this year, up $600,000 from 2014.

Untallied are the personal costs to the hackers’ victims: the stress associated with leaked phone numbers, credit card information, social security numbers and tax information, and the time spent getting their lives back on track.

The sophistication and scope of cyber threats are expected to further escalate, yet our defenses remain rudimentary, even medieval. Overwhelmingly, the current strategy is to define the threats, and then build strong defensive walls focused on keeping nefarious agents, viruses or programs out.

Once hackers tunnel through, however, our information is ripe for the picking. Without any means of tracking hackers as they plow through our systems, current defenses are incapable of sounding alarms until it’s too late.

What’s more, security walls are useless against hacks that arise from within, such as those initiated by disgruntled employees or through social engineering. After all, how do you find something when you don’t know what you’re looking for?

Yet according to cybersecurity company Darktrace, we are far from fighting a losing war. All we need is to look to biology for a little inspiration.

Biological warfare

The battle between virus and host has played out inside our bodies for millions of years. Through evolution, nature has crafted us into highly sophisticated forts that block off outside invaders and viciously attack inside threats.

These are epic battles with multiple fronts. The skin, a highly sophisticated barrier, wards off most external insults aiming to penetrate in. Similar to a digital firewall, it’s tough, adaptive, and constantly renewed to reinforce its strength.

Yet all walls crumble.

In cybersecurity, a lost wall most likely means a lost battle. Biowarfare paints a different picture altogether.

Once nefarious agents break through, our internal defense — the immune system — kicks into high gear. In a way, our bodies are highly functional police states: the immune system constantly monitors our internal environment, ensuring that its billions of molecular citizens smoothly carry out their respective roles. It learns and memorizes what’s normal, so when something strange happens, however sophisticated or novel, it knows to react.

The similarity between cyber and biological warfare is tough to ignore: in both cases, we deal with evolving adversaries that grow in complexity and gradually vary their means of attack. But because the immune system discriminates between “self” and “other,” it is so powerful that most of the time we aren’t even consciously aware that we’re under siege.

The biological immune system obviously works. So why not extend the metaphor a bit further and build a cyber immune system to protect our digital selves?


Since the early 1980s, computer scientists have toyed around with the idea of cyberimmunity. But at that time, AI still wasn’t up to the task — no algorithms could adaptively learn complex patterns and extrapolate to new ones.

With recent leaps forward in AI and deep learning, that’s set to change. Using these algorithms, scientists are starting to replicate the two main features of an adaptive immune system — learning and memory.

“Our system is self-learning, understanding what normal looks like and detecting emerging anomalies in real-time,” explains Darktrace in a promotional video.

Here’s how it works: the algorithms automatically model every device, user and network within an enterprise, allowing the system to build a full understanding of how information normally flows. From that model, the program generates a “threat visualization interface” that topographically maps out the largest threats, letting cybersecurity analysts focus on the most serious or in-progress ones.
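In spirit, the “learn what’s normal, then flag deviations” step can be sketched in a few lines. This toy example (plain Python, with an invented device name and a made-up traffic metric, not Darktrace’s actual algorithm) learns a per-device baseline and flags observations that stray too far from it:

```python
from collections import defaultdict
from statistics import mean, stdev

class BaselineMonitor:
    """Toy anomaly detector: learns each device's normal traffic volume,
    then scores new observations by how far they deviate from that baseline.
    Illustrative only -- not Darktrace's actual algorithm."""

    def __init__(self, threshold=3.0):
        self.history = defaultdict(list)  # device -> observed bytes/hour
        self.threshold = threshold        # z-score above which we flag

    def observe(self, device, bytes_per_hour):
        self.history[device].append(bytes_per_hour)

    def is_anomalous(self, device, bytes_per_hour):
        samples = self.history[device]
        if len(samples) < 10:             # still learning what "normal" is
            return False
        mu, sigma = mean(samples), stdev(samples)
        if sigma == 0:
            return bytes_per_hour != mu
        return abs(bytes_per_hour - mu) / sigma > self.threshold

monitor = BaselineMonitor()
for _ in range(50):                       # quiet learning period
    monitor.observe("laptop-42", 100)
monitor.observe("laptop-42", 105)         # a little ordinary variation
print(monitor.is_anomalous("laptop-42", 102))     # False: ordinary traffic
print(monitor.is_anomalous("laptop-42", 10_000))  # True: exfiltration-sized spike
```

A real product would model far richer features (connection graphs, timing, protocols), but the shape is the same: accumulate a baseline, then score new behavior against it.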

Like the immune system, Darktrace deals with a lot of noise from a system’s various components. The body handles this with a threshold response. When an injury reaches a certain level of severity, for example, the immune system activates cascades of molecular signals that recruit the cavalry — specialized immune cells such as the aptly named “killer T cell” — to the site of injury and clean up any potential infection.

A cyberimmune system works a little bit differently. To learn what’s normal, it silently sits in the background and monitors things for a few weeks before it’s ready to detect strange happenings. Rather than flagging all suspicious activity, which could lead to overwhelming false-positives, it churns out advice based on probabilities, continuously updating its results in the light of changing evidence.
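The “advice based on probabilities, continuously updated in the light of changing evidence” idea is essentially Bayesian updating. A minimal sketch (the likelihood numbers are invented for illustration, not taken from any real system):

```python
def update_threat_probability(prior, likelihood_if_threat, likelihood_if_benign):
    """Bayes' rule: revise the probability that a device is compromised
    each time a new piece of evidence (e.g. an odd login) is observed.
    Hypothetical helper -- a stand-in for whatever model a real product uses."""
    numerator = likelihood_if_threat * prior
    evidence = numerator + likelihood_if_benign * (1 - prior)
    return numerator / evidence

p = 0.01  # start out assuming compromise is unlikely
for _ in range(3):  # three anomalies in a row, each 10x likelier if compromised
    p = update_threat_probability(p, 0.5, 0.05)
print(p)  # confidence climbs above 0.9 rather than firing a hard alert at once
```

A single weak signal barely moves the estimate, so it never becomes a false-positive alert on its own; a run of consistent evidence drives the probability up until it is worth an analyst’s attention.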

The system can also automatically cut off infiltrating agents from sensitive information, setting up a “honey pot” scenario where it traps the hacker and observes how they behave — what information they’re after, how they work, and maybe even where they came from.
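A honey-pot redirect of this kind can be sketched as a simple gateway rule: once a host is flagged, it is served decoy data while everything it touches is logged. All names, paths and payloads below are invented for illustration:

```python
# Toy quarantine rule: flagged hosts are cut off from real data and
# silently routed to a decoy ("honey pot") so their behavior can be observed.
DECOY_RESPONSES = {"/payroll.db": b"<fake payroll data>"}

class Gateway:
    def __init__(self):
        self.flagged = set()
        self.honeypot_log = []

    def flag(self, host):
        self.flagged.add(host)

    def fetch(self, host, path, real_store):
        if host in self.flagged:
            self.honeypot_log.append((host, path))  # record what the intruder is after
            return DECOY_RESPONSES.get(path, b"")   # never serve real data
        return real_store[path]

gw = Gateway()
store = {"/payroll.db": b"<real payroll data>"}
gw.flag("10.0.0.99")                                # anomaly detector flagged this host
data = gw.fetch("10.0.0.99", "/payroll.db", store)
print(data)             # the decoy, not the real file
print(gw.honeypot_log)  # which host asked for what
```

The log is what makes the trap useful: it shows what information the intruder is after and how they work, without exposing anything sensitive.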

So far, according to The Long+Short, Darktrace works pretty well at picking out suspicious activity, including password compromises, anomalous internal file transfers and infections with ransomware.

That said, the system isn’t perfect.

And some of that is due to inherent faults of the biological immune system that it was based on. Autoimmunity is an obvious one — in some cases, the infectious agent is so similar to components of our own body that the immune system loses its ability to distinguish between self and other. Instead, as it delivers its brutal attacks, it inadvertently also damages our own organs.

Along the same lines, could cyber autoimmunity ever become an issue?

There are already cases of anti-virus software identifying core computer code as malware and shutting it down. As hackers become increasingly sophisticated in their attack strategies, it may be possible to change bits of the network so that they look suspicious and are blocked off by the cyberimmune algorithms. Like HIV, which seeks out and shuts down our immune system, hackers may even opt to directly attack cyberimmunity rather than circumvent it.

The results could be just as deadly.

In the end, security will always be a cat-and-mouse game, and nothing is 100% safe. But having an automated learning system that continuously finds and quarantines new threats definitely gives us the upper hand. It’s likely Darktrace is simply a step towards future, more sophisticated biomimetic cybersecurity systems.

Shelly Fan

Shelly Xuelai Fan is a neuroscientist at the University of California, San Francisco, where she studies ways to make old brains young again. In addition to research, she's also an avid science writer with an insatiable obsession with biotech, AI and all things neuro. She spends her spare time kayaking, bike camping and getting lost in the woods.

Discussion — 2 Responses

  • CWP December 28, 2015 on 3:56 pm

    As a computational immunologist- this article was right up my alley! Thanks for the great perspective.

  • jct405 December 29, 2015 on 8:10 am

    Dear Ms. Fan, first, the bad guys are as capable of designing effective AI as the good guys. Merely comparing predictive analytics (already flourishing in information security) to human biological systems adds little to the discussion. Second, my own opinion, for what it might be worth, is that the Ponemon Institute study is not a good study. It is more a PR piece that emphasizes scary (possibly valid but sloppy) conclusions which, when closely inspected, do not seem to be supported by the data collected.

    Cybercrime is a threat to be sure. The biggest threat to you, your readers and to me as individuals lies in the fact that lenders and credit card issuers are not required to contact us before issuing credit against us. The solution is to contact the rating agencies and “freeze” your credit report, thus requiring lenders to contact you directly to authenticate a credit application before issuing credit. This requires a single, five-minute phone call to one of three credit reporting agencies. By law, the credit reporting agency must freeze your credit report, call the other two to do the same and report to you that they have complied with the law.

    All the above is better explained here along with the numbers to call:

    “Freezing” your credit report absolutely does not blemish your credit history. It merely requires lenders to contact you directly to authenticate the initial credit application. Which is only reasonable, right? Except, be careful, the credit rating agency you contact may attempt to convince you that somehow you are going to weaken your credit by “freezing” your credit report. Total bunk. Your credit report is your credit report. Your “number” does not change merely because you demand that lenders contact you first to authenticate loan or credit card applications.

    Also, be wary of the credit agency representative you reach. These folks are employed in call centers and their objective is to first appear as though they are doing you a great service (freezing your account is not a service but a legal requirement). When I froze my credit reports recently the credit rating agency representative tried to sell me a $20 per month credit monitoring service. They would even monitor the “dark web” and alert me when my personal data was discovered “out there in the wild.” My take on the situation is that the credit rating agencies have probably already sold my data to bad guys posing as legitimate lenders. The point now is to make sure the bad guys cannot use it. Freeze your credit report.

    Remember, credit reports are a business. Lending is a business. Credit agencies make money by selling your credit report to lenders. Lenders make money by issuing credit to credit-worthy credit reports. Bad guys make it easy for them by getting the ball rolling. Cybercrime is big business. Way bigger than bootlegging during prohibition. Organized crime syndicates operating out of the reach of our laws employ thousands of well-educated “security researchers” to code malware (AI included), collect personal data and marry enough personal data to enough freely-available credit applications to put us all at risk.

    The real question for all of us is why do we elect political representatives who do not watch our backs as they should? Why do we have to call and freeze our credit reports in the first place? Why not make it the default that lenders have to authenticate credit applications by calling us before issuing credit and expecting us to pay it back with interest?