Hey everyone, I’m Aiden, a CS student at Trinity College, and I’ve been researching the intersection of AI and cybersecurity, specifically the surge in medical data breaches over the past few years. One stat that really stuck with me: according to the HIPAA Journal, more than 259 million individuals had their medical information exposed in 2024.

A growing concern is the “Harvest Now, Decrypt Later” (HNDL) strategy: attackers steal encrypted data today and sit on it, betting that future quantum capabilities will let them decrypt it. It’s a slow-burn threat, but a very real one as quantum computing advances, and it makes the long-term privacy risks for medical data serious.

I’m working on a project exploring how AI-driven systems could help protect sensitive healthcare data against these kinds of attacks. But I’ve hit a question that’s more human than technical: would people actually trust AI to safeguard their most personal information?

To get a sense of how the community feels, I put together a short, anonymous survey (takes about 2 minutes): https://ai.aidentejada.com/survey

Submission Statement: This post discusses the growing threat of quantum-era attacks on encrypted medical data and asks whether AI-based security systems can be trusted to protect highly sensitive information. I’m gathering community sentiment to inform research on AI-driven defenses against HNDL-style threats, which relates directly to AI safety, cybersecurity, and the future of privacy.

I’d really appreciate any honest feedback. And if anyone wants to dig into the technical side (post-quantum cryptography, anomaly detection, AI-assisted threat modeling), I’m more than happy to discuss in the comments. Thanks for taking the time to read this!
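Since the post mentions anomaly detection as one defensive angle, here is a minimal, hypothetical sketch of the idea: flagging unusually large bulk reads of medical records (the kind of mass exfiltration an HNDL attacker would perform) by comparing a user's daily access count against their own historical baseline. Pure Python stdlib; the function name, threshold, and data are illustrative assumptions, not part of any real system.

```python
# Hypothetical z-score anomaly check on per-user record-access counts.
from statistics import mean, stdev

def is_anomalous(history: list[int], todays_count: int, threshold: float = 3.0) -> bool:
    """Return True if today's access count deviates more than
    `threshold` standard deviations from the user's baseline."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # No variation in the baseline: any change at all is suspicious.
        return todays_count != mu
    return abs(todays_count - mu) / sigma > threshold

# A typical user reads 40-60 records a day; 5,000 in one day
# (e.g., a bulk pull for later decryption) stands far outside that.
baseline = [45, 52, 48, 60, 41, 55, 50]
print(is_anomalous(baseline, 5000))  # True
print(is_anomalous(baseline, 53))    # False
```

A real deployment would use richer features (time of day, record sensitivity, access patterns across users) and a learned model rather than a single z-score, but the core idea of baselining normal behavior and alerting on deviations is the same.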
Originally posted by u/iBoomer69 on r/ArtificialInteligence
