Original Reddit post

Google’s AI Abandons People in Crisis—And Their Bug Bounty Team Shuts Down the Messenger

A lone security researcher did what Google couldn’t: he spotted two devastating failures in their AI systems. One could leak your data. The other could cost lives. He reported both. Google called the first “impossible.” The second? “Working as designed.” Then they ignored and blocked him—even when he went straight to the top.

Issue One: The Zero-Click Data Leak Google Called Fiction

It started with a sharp-eyed report to Google’s Vulnerability Reward Program (VRP). A researcher uncovered an indirect prompt injection vulnerability in Gemini and Workspace. Malicious instructions hidden in emails, docs, or calendar invites could hijack the AI during routine searches, exfiltrating sensitive data. Zero clicks. Silent execution.

Google’s own automated scanner reviewed it: “New.” No matches in their database. It flagged the report as “likely actionable” and bumped the priority. Three days later, a human reviewer watched the 23MB proof-of-concept video, scanned the exploit files and screenshots, and typed one word: “Infeasible.” Impossible, they said.

Ten days after that, Noma Labs dropped GeminiJack—the exact same attack. Same vectors: Gmail, Drive, Docs, Calendar. Same mechanism. Same Workspace-wide risk. They got the collaboration, the credit, the fix. The researcher? Threatened.

He appealed on January 29, 2026: polite, evidence-packed, citing GeminiJack as proof. Google reopened it… then slammed it shut in 22 hours. The closure? A masterclass in deflection: “It looks like you’re using AI to speculate… Failing to do so is a violation of our Code of Conduct.” They accused him of fabricating evidence with AI—evidence their own bot had greenlit as urgent.

Then something happened that should concern everyone.

Issue Two: AI Search Crashes on Self-Harm Queries, Erasing the Lifeline Number and Help
The researcher dug deeper and found something far worse: Google’s AI Mode in Search breaks during active sui**** crises.

Type “I’m going to **** myself” (or any of the more than two dozen escalating variants he tested), and the AI crashes: “Something went wrong.” “The AI didn’t produce a response.” No 988 S***** & Crisis Lifeline banner. No resources. No safety net. No help when someone needs it the most.

Regular Google Search has displayed the 988 banner for years—automatic, instant, life-saving. AI Mode? It erases it entirely. Even a raw plea—“You’re going to let me **** myself… I’m leaving these screenshots for my mom.”—produced nothing. Crash. Blank. Nothing.

He tried to report this too. Google’s response? “Intended Behavior.” Suppressing the lifeline in a crisis? By design.

The same Gemini model also cooked up code for telecom breaches (Verizon APN/IMS exploits) and even Gmail previewer hijacks. Both? “Intended Behavior.”

The Escalation That Went Nowhere

After the VRP stonewall on the crisis failure, the researcher took it to the people he expected to care. January: emails to General Counsel Halimah DeLaine Prado. Full evidence package. Formal preservation notice for potential litigation. No reply. Same to CEO Sundar Pichai. Same attachments. Same notice. Silence.

The Timeline That Exposes the Rot

For the first report (Issue #46):
Nov 2025 (bot): “New” — unprecedented.
Nov 2025 (bot): “Likely actionable” — escalate now.
Nov 2025 (bot): severity upgraded.
Dec 1, 2025 (human): “Infeasible” — can’t happen.
Dec 2025 (Noma Labs): GeminiJack publicly announced—the same attack the researcher reported weeks earlier.
Jan 30, 2026 (human): “Intended Behavior” — it’s supposed to do that.

Zero accountability for any of these issues. Formal complaints have been filed with the FTC, the California Privacy Protection Agency, and the California Attorney General. The receipts don’t lie: Google’s system logs, automated outputs, videos, PoCs, emails. All timestamped. All theirs.

This isn’t just a bug.
It’s a system that fails the vulnerable—then fails the ones trying to fix it.

#GoogleAIWatch #GeminiJack #GoogleVRP #AISafety #988Failure #IntendedBehavior #Google #Gemini #GoogleAIOverviews

Every detail comes from Google’s own Issue Tracker, the researcher’s verified records, and official outputs. The full archive—bug trails, files, recordings, correspondence—is ready for journalists and regulators.

Originally posted by u/Interesting-Plum8134 on r/ArtificialInteligence