{"id":2110,"date":"2026-02-19T13:24:43","date_gmt":"2026-02-19T12:24:43","guid":{"rendered":"https:\/\/ilja-schlak.de\/?p=2110"},"modified":"2026-02-19T13:24:43","modified_gmt":"2026-02-19T12:24:43","slug":"agentic-ai-cyberattacks-apt31-model-extraction","status":"publish","type":"post","link":"https:\/\/ilja-schlak.de\/en\/agentic-ai-cyberattacks-apt31-model-extraction\/","title":{"rendered":"Agentic AI in Cyberattacks: APT31 Recon Automation and Rising Model Extraction"},"content":{"rendered":"<p>According to the <a href=\"https:\/\/cloud.google.com\/blog\/products\/identity-security\/cloud-ciso-perspectives-new-ai-threats-report-distillation-experimentation-integration\" rel=\"nofollow noopener\" target=\"_blank\">Google Cloud Blog<\/a>, <strong>agentic AI in cyberattacks<\/strong> is becoming increasingly relevant in practice: China-aligned actors such as APT31 are testing agentic workflows for automated reconnaissance, while model-extraction\/distillation attacks against AI models are simultaneously growing in scale\u2014creating an additional attack surface for organizations beyond classic IT security, particularly around AI APIs and model protection.<\/p>\n<h2>Agentic AI in cyberattacks is getting closer<\/h2>\n<p>State-sponsored attackers are increasingly exploring how to integrate generative AI into real operational chains\u2014not merely as a \u201cwriting aid\u201d for phishing, but as a semi-autonomous engine for intelligence gathering, tooling, and scaling. In its latest post in the Cloud CISO Perspectives series, Google warns that the experimentation phase with agentic capabilities is especially notable, as it enables actors to automate reconnaissance and scale operations more quickly. 
As an example, Google points to the China-aligned <a href=\"https:\/\/attack.mitre.org\/groups\/G0128\/\" rel=\"nofollow noopener\" target=\"_blank\">APT31 group<\/a>, which is testing agentic approaches for automated reconnaissance.<\/p>\n<p>In parallel, analysts are observing a second development that is less visible but business-critical for providers and operators of AI services: model-extraction attacks (also referred to as \u201cdistillation\u201d) are increasing. In these attacks, adversaries attempt to replicate the logic of a powerful \u201cteacher\u201d model through large-scale querying\u2014often with the goal of training their own \u201cstudent\u201d model at significantly lower cost and reusing it without the original system\u2019s safeguards.<\/p>\n<h3>What Google says about APT31 and agentic reconnaissance<\/h3>\n<p>The core warning in Google\u2019s post is not about a single \u201cexploit,\u201d but rather a tactical shift: agentic workflows can turn reconnaissance from a manual, iterative process into an automated pipeline. Google describes these agentic experiments as particularly concerning because they not only accelerate reconnaissance but also <strong>systematize<\/strong> it: an agent can perform tasks such as target profiling, researching publicly known vulnerabilities, deriving test plans, or preparing social-engineering interactions at scale\u2014dramatically reducing the \u201ccost per target.\u201d<\/p>\n<p>Important: Google frames this as an observation of \u201cexperimentation\u201d\u2014i.e., a stage in which actors are testing and refining methods, not necessarily evidence of a fully autonomous attack already running end-to-end. However, this intermediate phase is often the moment when defenders can learn the most in practice: What artifacts are produced? Which prompt and API patterns are typical? 
And at which points does a human remain in the loop?<\/p>\n<h3>Model extraction as a new attack surface for AI providers<\/h3>\n<p>While agentic reconnaissance primarily affects the operational side, model-extraction and distillation attacks pursue a different target: the AI model itself. Google describes model extraction as an abuse of knowledge distillation to transfer information and capabilities from a model onto a system controlled by the attacker. From the provider perspective, this is primarily IP theft\u2014and therefore a business risk, not just a technical issue. Accordingly, Google recommends that \u201cAI-as-a-Service\u201d providers monitor API access specifically for extraction and distillation patterns.<\/p>\n<p><a href=\"https:\/\/borncity.com\/news\/google-warnt-vor-neuen-ki-cyberangriffen\/\" rel=\"nofollow noopener\" target=\"_blank\">BornCity<\/a> further contextualizes the scale of such campaigns and\u2014citing Google\u2019s threat-intelligence insights\u2014describes a case involving <strong>more than 100,000 prompts<\/strong> aimed at extracting the internal logic and\/or reasoning capabilities of Gemini models. The key takeaway:<\/p>\n<blockquote><p>For end users, this is not necessarily the immediate risk\u2014but it is for AI platform operators, because the API itself can become the channel for theft.<\/p><\/blockquote>\n<h3>Agentic AI in cyberattacks &#8211; what this means for SOCs and AI teams<\/h3>\n<p>From a defender\u2019s perspective, these two trends hit different owners\u2014and that is exactly what makes them dangerous:<\/p>\n<ul>\n<li>Security operations should expect reconnaissance to become faster, broader, and more personalized\u2014not necessarily more \u201cmagical,\u201d but more efficient. 
This increases the volume of pre-attack activity (profiling, pretexting, scripting) and can dilute classic early-warning signals.<\/li>\n<li>AI\/platform teams must treat API and abuse telemetry as a security control, not merely a billing or performance metric. If distillation happens through legitimate interfaces, anomaly detection becomes the deciding factor between \u201cnormal\u201d usage and \u201cextraction.\u201d<\/li>\n<\/ul>\n<p>Google also emphasizes that model extraction currently affects \u201cfrontier labs\u201d in particular, but is likely to reach other providers and organizations as models become more exposed\u2014especially once companies connect their own models to customers or the public.<\/p>\n<h3>AI doesn\u2019t necessarily create new capabilities &#8211; but it does enable new scale<\/h3>\n<p>BornCity summarizes the thrust as follows: so far, AI does not necessarily give attackers \u201cradically new\u201d capabilities, but it makes existing tactics more frequent, more sophisticated, and more productive\u2014shifting the arms race from \u201ccan AI hack?\u201d to \u201chow much does AI reduce the cost per attack step?\u201d<\/p>\n<p>In practice, this means: anyone operating AI as a platform or integrating it into products must expand security controls to include AI-specific abuse patterns. 
And anyone defending classic enterprise IT should expect reconnaissance and social engineering to become even more industrialized\u2014especially once agentic workflows prove reliable in the early phases of a campaign.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>According to the Google Cloud Blog, agentic AI in cyberattacks is becoming increasingly relevant in practice: China-aligned actors such as APT31 are testing agentic workflows for automated reconnaissance, while model-extraction\/distillation attacks against AI models are simultaneously growing in scale\u2014creating an additional attack surface for organizations beyond classic IT security, particularly around AI APIs and model&#8230;<\/p>\n","protected":false},"author":1,"featured_media":2111,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[120],"tags":[],"class_list":["post-2110","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news"],"_links":{"self":[{"href":"https:\/\/ilja-schlak.de\/en\/wp-json\/wp\/v2\/posts\/2110","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ilja-schlak.de\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ilja-schlak.de\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ilja-schlak.de\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ilja-schlak.de\/en\/wp-json\/wp\/v2\/comments?post=2110"}],"version-history":[{"count":1,"href":"https:\/\/ilja-schlak.de\/en\/wp-json\/wp\/v2\/posts\/2110\/revisions"}],"predecessor-version":[{"id":2112,"href":"https:\/\/ilja-schlak.de\/en\/wp-json\/wp\/v2\/posts\/2110\/revisions\/2112"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/ilja-schlak.de\/en\/wp-json\/wp\/v2\/media\/2111"}],"wp:attachment":[{"href":"https:\/\/ilja-schlak.de\/en\/wp-json\/wp\/v2\/media?parent=2110"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ilja-schlak.de\/en\/wp-json\/wp\/v2\/categories?post=2110"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ilja-schlak.de\/en\/wp-json\/wp\/v2\/tags?post=2110"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}