With just a week to go before Christmas, he might have been hoping for a quick getaway at the end of a long shift. That was when the airport security officer first noticed the drones: two cross-shaped objects with flashing lights, buzzing around in the sky.

In most places, this might be a mild curiosity. But on the airfield at London Gatwick, the second-busiest airport in the UK, it was the cause of days of chaos. As more than fifty drone sightings were reported over the following days, flights were grounded for thirty-six hours. Hundreds of thousands of passengers were affected, facing delays that ran into days and, in many cases, canceled holidays.

The estimated cost to airlines, the airport, and passengers is likely to run into tens of millions of pounds. As I write this, no one knows who was behind what appears to be a coordinated attack, or even how many drones were involved. Another, more recent sighting of drones at London’s biggest airport, Heathrow, suggests that the perpetrators may still be at large—or they may have inspired copycats.

The overwhelming sentiments from those affected and from the public were incredulity and fear. Drones have been around for a while: the first commercial drone permits were issued in 2006, and by 2016, over 2.4 million personal drones had been sold in the US. This was hardly some unforeseeable use of rare, futuristic technology to wreak havoc; one of the passengers caught up in the mayhem had to swallow the bitter irony of an advertisement for drones on his plane ticket.

How was it possible that the UK appeared to have no countermeasures? How could one of the world’s busiest airports be brought to its knees by, quite possibly, one person using a toy that you can buy online? And, now that the precedent has been set, how many more airports will be targeted?

For some, the exasperation had a hint of “I-told-you-so.” Campaigners against autonomous weapons had released a nightmarish video, Slaughterbots, in which swarms of explosive-laden autonomous micro-drones go on the rampage.

It was easy enough to point out the video’s hyperbole, as former DoD employee and autonomous weapons expert Paul Scharre did in an article for IEEE Spectrum:

“In the video, we’re told the drones can defeat any countermeasure. This isn’t fiction; it’s farce. Every military technology has a countermeasure, and countermeasures against small drones aren’t even hypothetical. The US government is actively working on ways to shoot down, jam, fry, hack, ensnare, or otherwise defeat small drones.”

Scharre goes on to argue that “[For occasional, small-scale attacks], people are unlikely to absorb the inconveniences of building robust defenses, just like people don’t wear body armor to protect against the unlikely event of being caught in a mass shooting. But if an enemy country built hundreds of thousands of drones to wipe out a city, you bet there’d be a run on chicken wire. Any weapon that can be defeated by a net isn’t a weapon of mass destruction.”

It’s probably true that Slaughterbots exaggerates the scale of the problem for dramatic effect. As it was, campaigners were unable to convince the UN to ban further development of autonomous weapons, with the US and Britain amongst those who blocked the consensus. Even if it were enacted, a ban does nothing to prevent commercial or civilian drones from being modified for use as weapons, as in the unsuccessful assassination attempt on Venezuelan President Maduro, which used ordinary drones carrying explosives.

Many of the same capacities that would make drones more useful or fun—such as longer flight times and battery lives, higher speeds, autonomous flying, and the ability to lock on to individuals—would also make them more effective weapons. How can we regulate this kind of dual-use progress?

There were parallels between the video’s nightmarish future and the Gatwick fiasco. In both cases, attackers used cheap drones to wreak havoc wildly out of proportion to their cost—a textbook asymmetric attack. Civilian and military authorities appeared completely unprepared, and their countermeasures failed. Identifying the perpetrators? “It could have been anyone.”

In reality, countermeasures exist.

Some reporting suggested that the Israeli “Drone Dome,” which operates by jamming the radio signals used to communicate with the drone, allowed Gatwick to reopen—but this would not work against fully autonomous drones. Signal-jammers might be a tad easier to deploy than lasers mounted on trucks, but could equally just trigger an arms race as drone technology develops.

There are complicated laws governing the electromagnetic spectrum, and the circumstances under which one can jam signals. While some drones automatically return to base when the control signal is cut, others might behave unpredictably, dropping out of the sky. This provides no defense against a kamikaze drone.

UK authorities were reluctant to attempt to shoot down the drones, due to the risk from stray bullets and the difficulty of locating a target. Following the fiasco, guards at Heathrow will be armed with bazookas that shoot nets—again, a countermeasure that works fine provided you can locate the drone. The Dutch police have even trained eagles, which can see the fast-spinning blades far better than humans can, to take drones out of the sky.

So-called “geofencing” is supposed to create no-fly zones for drones around restricted areas, provided the drone comes from a legitimate manufacturer and is not constructed from a kit. But, as Anna Jackman notes in The Conversation, “Numerous reports have shown how preventative defences built into drones such as geo-fencing or altitude restrictions can be tampered with, overridden, or even simply switched off.” (It seems likely, in the wake of this crisis, that Britain’s Parliament will introduce legislation to make geofencing mandatory for drone manufacturers—when they’re a bit less distracted.)
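At its simplest, geofencing is just a software check in the drone’s firmware: compare the GPS position against a database of restricted zones and refuse to fly inside them. Here is a minimal sketch in Python, assuming a single circular zone; the coordinates, radius, and function names are illustrative, not taken from any real drone’s firmware (real implementations use polygonal zones and altitude limits):

```python
import math

# Hypothetical no-fly zone: (latitude, longitude, radius in km).
# Coordinates approximate Gatwick; the 5 km radius is illustrative.
GATWICK_NO_FLY = (51.1537, -0.1821, 5.0)

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def inside_no_fly_zone(lat, lon, zone=GATWICK_NO_FLY):
    """True if the given position falls within the zone's radius."""
    zone_lat, zone_lon, radius_km = zone
    return haversine_km(lat, lon, zone_lat, zone_lon) <= radius_km
```

The sketch also makes Jackman’s point concrete: because a check like this lives in software on the drone itself, anyone with the skills to modify the firmware can simply delete it.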

It may well be that, after the novelty wears off, defending airports against this kind of attack will become routine. Both Gatwick and Heathrow are now investing millions in military-grade anti-drone technology, and there’s clearly no shortage of options. Perhaps these technologies will prove effective, and drone disruptions or attacks on airports, power plants, and other critical infrastructure will regularly be thwarted.

But the Gatwick crisis illustrates that countermeasures to new technologies are not always available where they’re needed, and they’re not always easy to deploy. New technologies often allow a larger number of people to wield a greater amount of power. In the case of drones, the warning signs have been there for a decade, millions of drones have been sold, and yet one of the largest airports in the world was woefully unprepared for a single drone.

We urgently need a reasonable and informed debate about the consequences, and possible malicious uses, of the new technologies that we are developing so that we can prepare ourselves for the next “unprecedented” technological risk. At Gatwick, thousands of people were severely inconvenienced, and many paid a hefty financial price—but no one died. Next time, we might not be so lucky.

Image Credit: Lukas Gojda / Shutterstock.com

Thomas Hornigold is a physics student at the University of Oxford. When he's not geeking out about the Universe, he hosts a podcast, Physical Attraction, which explains physics - one chat-up line at a time.
