Google has reported a significant rise in malicious AI prompt injection attacks over the past few months, with a notable 32% increase between November 2025 and February 2026. These attacks, which manipulate AI systems like Gemini and ChatGPT, involve indirect prompt injections that can lead to data exfiltration and attempts to delete files from user machines. Although the sophistication of these attacks remains relatively low, researchers warn that the upward trend indicates a maturing threat that may soon grow in both scale and complexity. Recent analyses highlighted various types of attempts, from harmless pranks to serious security risks, underscoring the evolving nature of AI vulnerabilities in the current cybersecurity landscape.

Gemini: Gemini is Google’s family of multimodal large language models optimized for reasoning, developer tools, and interactive applications like Workspace and Search. The research used Gemini for verifying prompt injection patterns and notes it as a target for indirect attacks via malicious web content.
Google: Google is a multinational technology company that develops AI models and conducts security research on emerging threats. In April 2026, its security team analyzed indirect prompt injection attempts on public websites using Common Crawl data and Gemini reviews, identifying an increase in malicious attacks focused on data exfiltration and destruction despite low sophistication.
ChatGPT: ChatGPT is OpenAI’s generative AI chatbot designed for conversational interactions, idea generation, and complex query handling. It is referenced as one of the gen-AI tools susceptible to bypassing security through specially crafted prompts in developer resources and public web content.
Copilot: Microsoft Copilot is an AI-powered assistant embedded in Microsoft 365 and other platforms to enhance productivity through task automation and content generation. Cybersecurity analyses cite Copilot as vulnerable to indirect prompt injection exploits hidden in external data sources like websites and emails.

`json
{
“Attack Types”: “Malicious indirect prompt injections include instructions for data exfiltration to attacker-controlled emails and attempts to trigger file deletions on user machines.”,
“Future Outlook”: “Security experts warn that prompt injection threats are maturing, with expectations of increased scale and sophistication in attacks against generative AI systems.”,
“Threat Evolution”: “Google’s recent analysis indicates prompt injection attempts on the web encompass pranks, anti-crawling measures, SEO manipulation, and genuine security risks.”
}
`