AI has become a lucrative and low-risk operational asset for cybercriminals. Research showed that artificial intelligence (AI) tools now power 80 percent of ransomware attacks, which is only scratching the larger attack surface and ecosystem. Threat actors are executing attacks with APT-level sophistication through vibe hacking and no-code malware to exfiltrate data at unprecedented scale.
This marks a new paradigm shift in how organizations and security leaders must model threats and design defense strategies to anticipate AI-driven adversarial attacks. This shift also affects teams responsible for product security orchestration and those protecting software-defined products and cyber-physical systems across connected environments.
In this article, we explore how global threat actors are leveraging AI to fuel a new wave of cyber attacks and why organizations must evolve from traditional security approaches to AI-aware defensive strategies.
THE SHIFT: FROM AI ASSISTANT TO AI OPERATOR
A survey of developers who use AI in their work found that 42 percent reported that at least half of their codebase is AI-generated. AI tools accelerate software development processes and automate tasks across multiple business functions.
But AI can also introduce new security risks and be manipulated to carry out sophisticated attacks at a velocity and scale previously unseen. Research showed that LLMs have the potential to generate 10,000 malware variants, evading detection in 88 percent of cases. Malicious techniques such as LLM-based obfuscation, JavaScript code injection, and automated phishing email generation enable attackers to weaponize AI models for financial or geopolitical gain.
With a lower barrier to entry, AI provides non-technical threat actors with the ability to deploy ransomware and commit fraud without any coding experience. This acceleration creates significant challenges for teams responsible for security automation and vulnerability lifecycle management, where attack velocity increasingly exceeds manual defensive processes.
Traditional security mechanisms might overlook these attacks because they rely on natural-language prompts that instruct AI tools to generate malicious code or manipulate systems. This allows threat actors to execute sophisticated operations without triggering conventional behavior-based defenses.
THREAT ACTORS LEVERAGING AI TO AMPLIFY ATTACK EFFICACY
Research collected from Anthropic’s recent Threat Intelligence Report highlighted several examples of global threat actors manipulating AI coding agents, known as vibe hacking, to distribute malware campaigns and perform data extortion.
These emerging patterns affect organizations across sectors, including automotive, medical device, and industrial IoT environments, where AI-assisted attacks increasingly intersect with software supply chain security requirements.
Vibe Hacking and AI-Run Extortion (GTG-2002)
A single threat actor attacked 17 organizations in one month by leveraging Claude’s code execution. The actor supplied Claude Code with their preferred operational TTPs in a file named CLAUDE.md, which served as a reference guide for how Claude Code should respond to prompts.
During reconnaissance, Claude Code scanned networks and identified critical systems, allowing the actor to extract multiple credential sets. Using these credentials, the threat actor compromised healthcare and government systems and issued ransom demands of up to 500,000 USD in Bitcoin.
This demonstrated that a single operator can now match the impact of an entire cybercriminal team.
North Korean IT Remote Worker Fraud
North Korean threat actors leveraged Claude to pose as remote workers in a scheme designed to pass job interviews and maintain full-time employment at Fortune 500 companies. Operators manipulated Claude to generate personas, prepare interview responses, tailor resumes, assist with coding tasks, and even participate in team communication.
Individuals without coding experience were able to maintain multiple positions simultaneously and operate undetected.
AI-Powered Ransomware-as-a-Service (RaaS)
A UK-based threat actor (GTG-5004) used Claude to develop and distribute ransomware as a service with advanced evasion capabilities, including anti-EDR techniques, Windows internals exploitation, anti-debugging, and ChaCha20 encryption. RaaS kits were marketed on dark web forums using tiered pricing models.

The AI dependency shift has made it possible for non-technical threat actors to create and distribute ransomware with advanced evasion techniques, lowering the barrier to advanced cyber attacks once again.
Chinese APT Campaign Targeting Vietnamese Critical Infrastructure
A sophisticated Chinese threat actor integrated Claude across 12 of the 14 MITRE ATT&CK tactics, using it as an operational consultant to support a campaign targeting Vietnam’s critical infrastructure. The attack lifecycle spanned nine months and included Python scanning tools, privilege escalation exploits, file upload fuzzing, proxy chain construction, and lateral movement planning.
Additional campaigns included a Russian-speaking developer using Claude to refine malware evasion tactics and a Spanish-speaking threat actor leveraging Claude Code for stolen credit card operations. These examples illustrate how lower-skilled actors can now operate with precision and stealth once reserved for nation-state operations.
THE AI DEFENSE GAP
The report’s findings demonstrate a widening AI defense gap, where traditional security measures were not designed to detect or respond to AI-assisted threats. Cybercriminals with no technical expertise can now launch sophisticated attacks at record volume.
The modern enterprise is a high-risk environment for AI misuse. Gartner predicts that by 2028, 25 percent of enterprise breaches will be traced back to AI agent abuse from external and malicious internal actors.
Detection and response models must assume that AI is in the loop and able to influence adversarial behavior. Traditional security controls must evolve to analyze interactions involving LLMs such as Claude and ChatGPT, identify evasion patterns, detect prompt manipulation attempts, and surface indicators of compromise embedded within AI-generated or AI-manipulated content. This need is especially clear in environments that depend on product security orchestration and coordinated defenses across distributed systems.
AI red teaming is an effective strategy for uncovering vulnerabilities in LLMs and AI systems. However, it can be costly and resource-intensive, and without proper guardrails may expose sensitive data or produce unethical bias. These challenges reinforce the requirement for structured, automated approaches integrated into broader security orchestration platforms.
C2A SECURITY’S RECOMMENDED ACTIONS FOR AI-DRIVEN THREATS
The cybercrime bar has risen significantly, and C2A Security helps organizations keep pace with the evolving AI threat landscape.
C2A Security enables organizations to embed AI threat scenarios into their security posture and close the traditional defensive AI gap in several ways.
AI Threat Modeling
Integrate AI-enabled attack scenarios into the TARA framework. EVSec treats AI as an active operator in the kill chain rather than merely an enabler and aligns with MITRE ATLAS to catalog real-world adversarial TTPs targeting AI systems. This approach supports teams working with software-defined products and cyber-physical systems.
Early Detection and Classification
Develop classifiers for AI-driven activity such as evasion code patterns and ransom note generation. Monitor for AI fingerprints, including CLAUDE.md-style configuration artifacts, across the ecosystem. These capabilities strengthen security automation and improve vulnerability lifecycle management by detecting AI-generated indicators earlier.
Cross-Industry Collaboration
Expand intelligence sharing between AI labs, OEMs, ISPs, and regulators. Leverage domain frameworks such as Auto-ISAC, ENISA, and CISA to integrate AI misuse into strategic defense playbooks. This collaboration is relevant to automotive cybersecurity, medical device cybersecurity, and industrial IoT security.
Schedule a demo
Schedule a demo to learn how C2A Security can strengthen defensive AI strategies across product security orchestration workflows and safeguard complex, software-defined products.


