
Hackers Are Automating Cyberattacks With AI. Defenders Are Using It to Fight Back.

Which side has the advantage will depend less on raw model capabilities and more on who adapts fastest.

Edd Gent
Mar 09, 2026
[Image: A half open laptop with a colorful display in a darkened room. Credit: Joshua Woroniecki on Unsplash]

Cybersecurity is an endless game of cat and mouse as attackers and defenders refine their tools. Generative AI systems are now joining the fray on both sides of the battlefield.

Though cybersecurity experts and model developers have been warning about potential AI-powered cyberattacks for years, there has been limited evidence hackers were widely exploiting the technology. But that is starting to change.

Growing evidence shows hackers now routinely use the technology to turbocharge their search for vulnerabilities, develop new code exploits, and scale phishing campaigns. At the same time, AI firms are building defensive security measures directly into foundation models to keep pace with attackers.

As cybersecurity becomes more automated, corporations will be forced to rapidly adapt as they grapple with the security of their products and systems in the age of AI.

A recent report by Amazon security researchers highlighted the growing sophistication of hackers’ AI use. The researchers wrote that this January and February, Russian-speaking attackers used multiple commercially available generative AI services to plan, manage, and conduct cyberattacks on organizations in over 55 countries whose firewalls were misconfigured.

The attack targeted more than 600 systems protected by FortiGate firewalls. It worked by scanning for internet-exposed login pages—essentially front doors leading into private company networks—and attempting to access them with commonly reused security credentials. Once inside, the attackers extracted credential databases and targeted backup infrastructure, activity suggesting they may have been planning a ransomware attack.
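The weakness the campaign exploited—commonly reused passwords on internet-facing logins—is one defenders can audit for directly. Below is a minimal, purely illustrative Python sketch of such a check; the password list and config format are hypothetical, not taken from the Amazon report or any real product.

```python
# Illustrative defensive audit: flag services whose configured passwords
# appear on a list of commonly reused credentials. The list and the config
# structure here are made up for demonstration purposes.
COMMON_PASSWORDS = {"admin", "password", "123456", "changeme", "letmein"}

def audit_credentials(configs):
    """Return the names of services using a commonly reused password."""
    return [
        c["service"]
        for c in configs
        if c["password"].lower() in COMMON_PASSWORDS
    ]

services = [
    {"service": "vpn-gateway", "password": "changeme"},     # weak, flagged
    {"service": "backup-admin", "password": "X9!vT#48qLr2"},  # strong, passes
]
print(audit_credentials(services))  # ['vpn-gateway']
```

Real-world versions of this idea check hashes against large breach corpora rather than a hardcoded list, but even a simple audit like this closes the exact door the reported attackers were knocking on.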

The researchers report the attack was largely unsuccessful but nonetheless highlighted how much AI can lower the barrier to large-scale attacks. Despite being relative amateurs, the group “achieved an operational scale that would have previously required a significantly larger and more skilled team,” they wrote.

In the most vivid demonstration of AI’s hacking potential, a research prototype known as PromptLock, created by a New York University researcher, used large language models to mount an entirely autonomous ransomware attack.

The malware used AI to generate custom code in real time, scour the target system for sensitive data, and write personalized ransom notes based on what it found. While the tool was only a proof of concept, it highlighted the mounting threat of fully automated malware attacks.

A recent report from security firm CrowdStrike found that AI is also making attackers significantly more nimble. Average breakout times—the window between when an attacker first breaches a network and when they move into other systems—fell to just 29 minutes in 2025, 65 percent faster than in 2024.


In November, Anthropic also claimed it had detected a Chinese state-linked group using the company’s Claude Code assistant to conduct a large-scale espionage campaign. The group used jailbreaks—prompts designed to bypass a model’s safety settings—to trick Claude into carrying out the attacks. They also broke the campaign into smaller subtasks that looked more innocent.

The company claimed the hackers used the tool to automate between 80 and 90 percent of the attack. “The sheer amount of work performed by the AI would have taken vast amounts of time for a human team,” the company’s researchers wrote in a blog post. “At the peak of its attack, the AI made thousands of requests, often multiple per second—an attack speed that would have been, for human hackers, simply impossible to match.”

But while AI is reshaping the offensive cybersecurity landscape, defenders are deploying the tools too. In February, Anthropic released Claude Code Security, which can scan systems for vulnerabilities and propose fixes automatically. The tool can’t carry out real-time security tasks like detecting and stopping live intrusions, but the news nonetheless sent stocks in traditional cybersecurity firms plummeting, according to Reuters.

Cybersecurity vendors are also embedding AI into their defensive platforms. CrowdStrike recently launched two new AI agents, one designed to analyze malware and suggest how to defend against it and another that actively combs through systems for emerging threats. Similarly, Darktrace has introduced new AI tools designed to automate the detection of suspicious network activity.

But perhaps one of the most promising applications for the technology is using it like a hacker to proactively probe defenses. Aikido Security recently released a new tool that uses agents to simulate cyberattacks on each new piece of software a company creates—a practice known as penetration testing—and automatically identify and fix vulnerabilities.

This could be a powerful tool for defenders, Andreessen Horowitz partner Malika Aubakirova wrote in a blog post. Traditional penetration testing is a labor-intensive process relying on highly skilled experts in short supply. Both factors seriously constrain where and how such testing can be applied.

Whether AI ends up advantaging attackers or defenders will likely depend less on raw model capabilities and more on who adapts fastest. So, it seems the unending game of cat and mouse that’s characterized cybersecurity for decades will continue much the same.

Edd is a freelance science and technology writer based in Bangalore, India. His main areas of interest are engineering, computing, and biology, with a particular focus on the intersections between the three.

