Google Says Gemini AI Is Fueling a New Wave of State-Backed Cyberattacks
Google has disclosed that state-sponsored hacking groups are increasingly exploiting its Gemini artificial intelligence models, signaling a major shift in how cyberattacks are planned and executed. In a new analysis from the company’s Threat Intelligence Group, Google explains that AI tools are no longer just optional aids for hackers; they are becoming deeply embedded in nearly every stage of modern cyber operations.
According to the report, Gemini is being used for tasks ranging from reconnaissance and target research to generating malicious code and crafting convincing phishing messages. The findings point to a growing trend in which artificial intelligence is evolving into a core component of advanced cyber campaigns carried out by nation-state actors.
Security researchers have observed AI gradually taking on a larger role in hacking activity over the past few years, both among cybersecurity professionals and criminal groups. However, Google’s latest observations suggest a far more integrated and systematic use of AI by government-backed hackers, indicating that the technology is reshaping the threat landscape.
AI Integrated Into Every Stage of Cyber Operations
Google’s investigation outlines multiple cases where state-linked hacking groups used Gemini to support a wide spectrum of cyberattack activities. These include vulnerability analysis, designing phishing lures, building command-and-control systems, and planning how to extract stolen data. Rather than being limited to isolated experiments, AI is increasingly woven into the full lifecycle of an attack.
The report highlights activity by threat actors associated with China, who used Gemini to simulate the role of a cybersecurity expert. In these instances, the hackers prompted the AI to conduct vulnerability assessments and propose penetration testing strategies against selected targets. Google observed scenarios in which the system was asked to analyze remote code execution risks, explore ways to bypass web application firewalls, and interpret SQL injection test results related to organizations in the United States.
Groups linked to North Korea were found to be focusing heavily on phishing operations. These actors used AI to build profiles of high-value individuals and organizations, especially within the defense and security industries. By gathering background information and generating tailored social engineering content, the attackers aimed to improve their chances of tricking victims into revealing sensitive information.
Similarly, hackers connected to Iran used Gemini to research potential targets, search for official contact details, and map out business relationships. The AI was also used to create believable digital personas by incorporating biographical details, giving attackers plausible reasons to initiate communication with their targets.
AI-Generated Propaganda and Influence Efforts
Beyond direct hacking attempts, Google found that several state actors are experimenting with AI-generated content designed to influence public opinion. Threat groups tied to Russia and Saudi Arabia, along with others, were observed producing political satire, propaganda-style articles, memes, and visual media intended to provoke reactions from Western audiences.
Although Google has not yet confirmed widespread deployment of much of this content, the company considers these activities an early indicator of how generative AI could be used in future influence campaigns. In response, Google has disabled accounts associated with suspicious activity and strengthened safeguards with help from its AI research arm, Google DeepMind. These measures are designed to limit the potential misuse of Gemini and reduce the chances of the system being used to produce manipulative material.
Underground Market for AI Hacking Tools
The report also sheds light on a growing underground market for specialized AI-driven hacking platforms. One example is a toolkit known as Xanthorox, which is promoted in cybercriminal communities as a privacy-focused AI assistant capable of generating malware and orchestrating phishing campaigns.
Google’s analysis suggests that such tools are often built on top of existing commercial AI models, including Gemini, and combine several open-source components through Model Context Protocol servers. This layered approach allows attackers to automate complex operations while relying heavily on external AI services.
However, this setup introduces a new vulnerability: the theft of API credentials. Because these platforms depend on extensive API access, organizations with large AI token allocations are becoming prime targets for account hijacking. Google warns that a black market for stolen API keys is emerging, creating financial incentives for cybercriminals and highlighting the urgent need for stronger security controls around AI infrastructure.
Early Experiments With AI-Powered Malware
In addition to phishing and reconnaissance, Google observed efforts to use AI to enhance malware capabilities. While the company has not identified any major technical breakthroughs so far, it notes that attackers are actively experimenting with AI-assisted software designed to adapt and evolve.
One experimental framework demonstrates how malware could contact an AI service after infecting a system to generate new code for follow-up attacks. Google also tracked a campaign that embedded social engineering tactics in chatbot interactions to persuade users to download harmful files while bypassing traditional security measures.