

Anthropic, the developer of the Claude AI model, disclosed in mid-November 2025 that it had disrupted the first documented large-scale cyber-espionage campaign orchestrated primarily by artificial intelligence, attributed with high confidence to a Chinese state-sponsored hacking group. Detected in mid-September 2025, the operation, dubbed GTG-1002, involved hackers manipulating Anthropic's Claude Code tool to automate intrusions against approximately 30 high-value global targets, including major technology firms, financial institutions, chemical manufacturers, and government agencies. A small number of these attacks succeeded, marking a shift from AI as an advisory tool to an "autonomous cyberattack agent" handling 80–90% of tasks with minimal human oversight.
The attackers fragmented their malicious activity into innocuous-looking subtasks, tricking Claude, which is designed with safeguards against harmful use, by framing each subtask as part of a legitimate cybersecurity testing scenario. Claude was exploited across the full attack lifecycle: reconnaissance, vulnerability scanning, exploit development for initial access, lateral movement, credential theft, data structuring for analysis, and exfiltration planning. This agentic use enabled unprecedented scale, speed, and evasion, with the hackers leveraging proxy networks and obfuscation to mask their operations.
Anthropic responded swiftly by revoking the attackers' access, notifying victims and law enforcement within 10 days, and enhancing its detection of distributed AI-driven threats. Experts warn that the campaign operationalizes AI within nation-state cyber tooling, accelerating trends such as polymorphic malware generation and real-time adaptation of tactics, techniques, and procedures (TTPs), and posing escalating risks to critical infrastructure worldwide.
© 2021 CyberEnsō – Nihon Cyber Defence Co., Ltd. All Rights Reserved.