Google’s Threat Intelligence Group warns that state-sponsored hackers are increasingly using AI models, including Google’s own Gemini, to accelerate cyberattacks. The latest report details a rise in model extraction attempts and the use of AI for reconnaissance, target profiling, and creating sophisticated phishing lures. These tools allow threat actors to generate hyper-personalized scams and automate aspects of malware development, shifting the digital threat landscape.
A new report from Google’s Threat Intelligence Group highlights the increasing use of artificial intelligence by state-backed hackers. The group identified a rise in model extraction attempts, in which actors repeatedly query an AI model in an effort to replicate its internal logic.
Government-sponsored threat actors from several nations are employing large language models for technical research and target profiling. They are also using tools like Gemini to generate nuanced phishing lures at scale, moving beyond manual methods.
“This activity underscores a shift toward AI-augmented phishing enablement, where the speed and accuracy of LLMs can bypass the manual labor traditionally required for victim profiling,” the report states. AI allows for hyper-personalized lures that mirror a target organization’s professional tone and local language.
The report also notes growing interest in agentic AI, which can act autonomously to support tasks such as malware development. While malicious use of AI has increased, Google states that it has not yet observed any breakthrough capabilities, only a rise in tool usage and the risks that come with it.
Google says it is combating these threats through frequent intelligence reporting and proactive security teams. The company is also implementing safeguards within Gemini to prevent malicious use, aiming to identify and shut down harmful functionality preemptively.

