Google warned on May 12, 2026, that hackers had attempted to use artificial intelligence tools to plan a large-scale zero-day cyberattack capable of bypassing two-factor authentication systems.
The company’s Threat Intelligence Group (GTIG) said it has “high confidence” that cybercriminals used advanced AI models to identify and exploit an undisclosed software vulnerability before developers became aware of it.
Google officials said the company’s proactive discovery may have stopped what could have become a “mass vulnerability exploitation operation.”
AI Security Threats Raise Industry Alarm
The report highlights growing fears that AI-powered hacking tools are accelerating cyber threats against businesses, government agencies, and critical digital infrastructure worldwide.
Google clarified that its own Gemini AI model was not involved. However, investigators found evidence that hackers were using publicly available AI systems, including OpenClaw, to discover software flaws, develop malware, and automate cyberattacks.
Security analysts say AI-driven vulnerability discovery could dramatically reduce the time hackers need to launch sophisticated attacks.
OpenAI and Anthropic Also Tighten Security
The findings come as major AI companies increase restrictions on powerful cybersecurity-focused models.
Last week, OpenAI announced it would limit access to GPT-5.5-Cyber to vetted security teams, while Anthropic had previously delayed the release of its Mythos model over concerns that criminals could use it to exploit older software vulnerabilities.
According to Google, cyber groups linked to China and North Korea showed “significant interest” in using AI for vulnerability discovery and cyber operations, signaling a rapidly evolving global cybersecurity threat landscape.