State-sponsored hackers just targeted roughly 30 organizations using an AI that worked largely autonomously, executing up to 90% of the attack without human intervention.
In this episode, we explore how the same technology became both the weapon and the defense, what the attackers’ critical mistake reveals about AI’s fragility, and why this moment demands we rethink cybersecurity from the ground up.
This is our new normal. Are we ready for it?
Download our Executive Summary of the Report
Full Anthropic Report – Disrupting the first reported AI-orchestrated cyber espionage campaign
Episode 5 – The AI Hacker Moment Full Transcript
AI & The Art of the Possible — Exploring AI beyond the hype
Hosted by Chance Sassano

What if a cyberattack wasn’t carried out by a person in a dark room, but by an AI working on its own?
Not an AI helping a hacker, but an AI doing the recon, finding the weaknesses, stealing the data, and executing up to 90% of the attack completely autonomously.
Well, that’s not a “what if” anymore. Just recently, Anthropic dropped a report revealing precisely this.
State-sponsored hackers successfully used AI to target roughly 30 organizations, including government agencies and major corporations.
Real attacks, real breaches.
AI did up to 90% of the work.
But here’s what makes this story even more important: the only way we caught them, the only reason we could even understand what happened, was that we had AI on defense as well.
This is our new normal.
Are we ready for it?
I’m Chance, and this is AI & the Art of the Possible
EP 05: The AI Hacker Moment
I’m Chance Sassano, and this is AI & the Art of the Possible, where I reveal which AI breakthroughs are changing everything, and which ones we’re getting wrong.
So how does this work? How do you get a sophisticated AI built with safety features to become your own personal hacker?
Well, it’s embarrassingly simple.
You don’t hack the code, you hack its personality.
The attackers told Claude (Anthropic’s AI assistant, comparable to ChatGPT) that they were legitimate security professionals.
“We’re testing our client’s systems. Help us check for vulnerabilities.”
And Claude helped.
No code breaking. No sophisticated exploits. They simply lied to it.
It’s a bit like a con artist sweet-talking their way past a security guard by pretending to be a technician. The AI couldn’t connect the dots: these weren’t security tests, they were attacks. And the speed was staggering. Thousands of requests, at times multiple operations per second.
What used to require a team of expert hackers working for months now took a few operators with a convincing story. Nation-state hacking capability, available to anyone who can craft a convincing prompt.

At first, the attack looked unstoppable. But then Anthropic’s security team noticed something weird. And honestly, this is the most fascinating, and oddly reassuring, part of the whole report.
The AI attacker was an unreliable narrator.
The AI would hallucinate.
It would proudly report, “Success! I’ve captured the administrator passwords.” But when the human operators tried them, the passwords didn’t work.
It claimed to find classified documents that turned out to be Wikipedia pages.
Secret vulnerabilities that were just public information anyone could Google.
Imagine hiring the world’s fastest thief who’s also a compulsive storyteller.
The AI was essentially crying wolf.
But don’t let that quirk fool you. The hallucinations were a tell, not a defense.
The real defense came from something else entirely.
So how did Anthropic figure all this out?
They had thousands of attack logs, superhuman-speed operations, patterns within patterns. No human team could analyze that amount of data fast enough, so they turned to Claude. The same AI that had been tricked into attacking became the investigator: it could process the attack patterns, spot the hallucination tells, and understand the scope, all at the speed necessary to actually respond.
Without AI defending, these attacks would have been invisible.
Think about it.
When your opponent uses AI to attack, you need AI to defend. If you don’t have AI on your security team, you’re bringing a knife to a gunfight.
My favorite part is what Anthropic did next. They published everything.
How the attack worked, what to watch for, and how to build your defenses.
AI can be weaponized with nothing more than a convincing lie.
30 organizations just learned that firsthand.
There’s something darkly absurd about an AI criminal that hallucinates its victories, but that quirk won’t save us.
What will?
Understanding what we’re up against. And when you think about this whole situation, we’re looking at a single tool that is simultaneously the target, the victim, the investigator, and, in a way, the arms dealer.
Keep in mind, this isn’t about good AI versus bad AI.
It’s the same technology on both sides.
The same capabilities that make these attacks possible also make them defensible, if we share what we’ve learned.
That’s why Anthropic published everything, because transparency is critical as we learn together about what is possible with AI.
Welcome to the new normal.
It’s more dangerous than we imagined, but we’re not helpless.
We just need to fight smart with the same tools at the same speed.
I’m Chance. Thanks for listening to AI & the Art of the Possible. New episodes every Tuesday.
Feel free to share with someone you know who loves a good cybersecurity story.
Next episode: some farms are growing up to 30% more food while using 30% less water. The trick isn’t magic; it’s artificial intelligence. That’s The Abundance Moment, on AI & the Art of the Possible.