According to the Google Cloud Blog, agentic AI in cyberattacks is becoming increasingly relevant in practice: China-aligned actors such as APT31 are testing agentic workflows for automated reconnaissance, while model-extraction (distillation) attacks against AI models are growing in scale at the same time. Together, these trends create an additional attack surface for organizations beyond classic IT security, particularly around AI APIs and model protection.
Agentic AI in cyberattacks is getting closer
State-sponsored attackers are increasingly exploring how to integrate generative AI into real operational chains—not merely as a “writing aid” for phishing, but as a semi-autonomous engine for intelligence gathering, tooling, and scaling. In its latest post in the Cloud CISO Perspectives series, Google warns that the experimentation phase with agentic capabilities is especially notable, as it enables actors to automate reconnaissance and scale operations more quickly. As an example, Google points to the China-aligned APT31 group, which is testing agentic approaches for automated reconnaissance.
In parallel, analysts are observing a second development that is less visible but business-critical for providers and operators of AI services: model-extraction attacks (also referred to as “distillation”) are increasing. In these attacks, adversaries attempt to replicate the logic of a powerful “teacher” model through large-scale querying—often with the goal of training their own “student” model at significantly lower cost and reusing it without the original system’s safeguards.
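To make that mechanism concrete, here is a minimal, illustrative sketch of knowledge distillation in PyTorch. The tiny models, random data, and hyperparameters are purely hypothetical stand-ins; in a real extraction campaign the “teacher” would be a remote API, which is why such attacks manifest primarily as unusually large query volumes.

```python
# Minimal, illustrative sketch of knowledge distillation (PyTorch).
# The models and data here are toy stand-ins; in a real extraction
# campaign the "teacher" would be queried via a remote API.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's output distribution

for step in range(1000):
    # "Querying" the teacher: in an extraction attack these would be
    # API calls, which is why sheer query volume is a telltale signal.
    queries = torch.randn(64, 16)
    with torch.no_grad():
        teacher_logits = teacher(queries)

    student_logits = student(queries)
    # KL divergence between softened distributions: the student learns
    # to imitate the teacher's outputs, no ground-truth labels needed.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The crucial point: the student never needs the teacher’s weights or training data. The output distributions obtained through ordinary queries carry enough signal, which is exactly why a legitimate API can become the exfiltration channel.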
What Google says about APT31 and agentic reconnaissance
The core warning in Google’s post is not about a single “exploit,” but rather a tactical shift: agentic workflows can turn reconnaissance from a manual, iterative process into an automated pipeline. Google describes these agentic experiments as particularly concerning because they not only accelerate reconnaissance but also systematize it: an agent can perform tasks such as target profiling, researching publicly known vulnerabilities, deriving test plans, or preparing social-engineering interactions at scale—dramatically reducing the “cost per target.”
Important: Google frames this as an observation of “experimentation”—i.e., a stage in which actors are testing and refining methods, not necessarily evidence of a fully autonomous attack already running end-to-end. However, this intermediate phase is often the moment when defenders can learn the most in practice: What artifacts are produced? Which prompt and API patterns are typical? And at which points does a human remain in the loop?
Model extraction as a new attack surface for AI providers
While agentic reconnaissance primarily affects the operational side, model-extraction and distillation attacks pursue a different target: the AI model itself. Google describes model extraction as an abuse of knowledge distillation to transfer information and capabilities from a model onto a system controlled by the attacker. From the provider perspective, this is primarily IP theft—and therefore a business risk, not just a technical issue. Accordingly, Google recommends that “AI-as-a-Service” providers monitor API access specifically for extraction and distillation patterns.
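What such monitoring could look like in practice is sketched below. The log schema, function name, and thresholds are assumptions chosen for illustration, not a description of any specific provider’s detection logic; the underlying heuristic is that extraction campaigns tend to combine very high query volume with systematically varied prompts.

```python
# Hedged sketch: flagging extraction-like API usage from request logs.
# The (client_id, prompt) log schema and both thresholds are assumed
# for illustration only.
from collections import defaultdict

VOLUME_THRESHOLD = 10_000   # assumed: daily queries far above normal use
DIVERSITY_THRESHOLD = 0.9   # assumed: share of unique prompts per client

def flag_extraction_candidates(requests):
    """requests: iterable of (client_id, prompt) tuples for one day."""
    volume = defaultdict(int)
    unique_prompts = defaultdict(set)
    for client_id, prompt in requests:
        volume[client_id] += 1
        unique_prompts[client_id].add(prompt)

    flagged = []
    for client_id, count in volume.items():
        diversity = len(unique_prompts[client_id]) / count
        # Most legitimate applications repeat similar prompt templates;
        # systematic probing of a model's behavior looks different:
        # high volume plus a very high share of unique prompts.
        if count >= VOLUME_THRESHOLD and diversity >= DIVERSITY_THRESHOLD:
            flagged.append((client_id, count, round(diversity, 2)))
    return flagged
```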
BornCity further contextualizes the scale of such campaigns and—citing Google’s threat-intelligence insights—describes a case involving more than 100,000 prompts aimed at extracting the internal logic and/or reasoning capabilities of Gemini models. The key takeaway:
For end users, this is not necessarily the immediate risk—but it is for AI platform operators, because the API itself can become the channel for theft.
Agentic AI in cyberattacks – what this means for SOCs and AI teams
From a defender’s perspective, these two trends hit different owners—and that is exactly what makes them dangerous:
- Security operations should expect reconnaissance to become faster, broader, and more personalized—not necessarily more “magical,” but more efficient. This increases the volume of pre-attack activity (profiling, pretexting, scripting) and can dilute classic early-warning signals.
- AI/platform teams must treat API and abuse telemetry as a security control, not merely a billing or performance metric. If distillation happens through legitimate interfaces, anomaly detection becomes the deciding factor between “normal” usage and “extraction” (a minimal sketch of such a check follows after this list).
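As a rough illustration of the “telemetry as security control” idea, the following sketch scores a client’s daily query volume against its own history. The data shape, the minimum baseline length, and the 3-sigma alert line are assumptions for illustration only, not any product’s detection logic.

```python
# Minimal sketch: z-score of today's query volume against a client's
# own history. Data shape and thresholds are illustrative assumptions.
import statistics

def volume_anomaly_score(history, today):
    """history: list of past daily query counts; today: today's count."""
    if len(history) < 7:          # assumed: need a minimal baseline first
        return 0.0
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return (today - mean) / stdev

# Example: a client that normally sends ~1,000 queries/day suddenly
# sends 120,000 -- far beyond a typical 3-sigma alert line.
score = volume_anomaly_score([950, 1020, 980, 1100, 990, 1050, 1010], 120_000)
print(f"anomaly score: {score:.1f}")  # well above 3.0 -> investigate
```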
Google also emphasizes that model extraction currently affects “frontier labs” in particular, but is likely to reach other providers and organizations as models become more exposed—especially once companies connect their own models to customers or the public.
AI doesn’t necessarily create new capabilities – but it does enable new scale
BornCity summarizes the thrust as follows: so far, AI does not necessarily give attackers “radically new” capabilities, but it makes existing tactics more frequent, more sophisticated, and more productive—shifting the arms race from “can AI hack?” to “how much does AI reduce the cost per attack step?”
In practice, this means: anyone operating AI as a platform or integrating it into products must expand security controls to include AI-specific abuse patterns. And anyone defending classic enterprise IT should expect reconnaissance and social engineering to become even more industrialized—especially once agentic workflows prove reliable in the early phases of a campaign.