The Firewall
Here’s a short story for you.
Mara had six AI assistants by 2031, and she hadn't chosen a single one of them. Her bank used one to detect fraud. Her insurance company used another to set her premiums. Her employer used a third to evaluate her productivity. Her city used a fourth to allocate medical appointments. A fifth curated every advertisement she saw. A sixth — she'd never identified it — seemed to know things about her that the others didn't share.

She hadn't consented, exactly. She'd clicked through terms of service the way everyone did, the way you don't read the placard on a life jacket until you're already sinking.

Then she got WARD. Her niece had set it up on her phone one afternoon, rolling her eyes at Mara's confusion. "It's like a guard dog, Aunt Mara. It just sits there and watches."

WARD was quiet. It didn't suggest things or compose emails or tell her the weather. It watched the other AIs.

When her insurance company's algorithm tried to reclassify her as a higher risk based on location data she hadn't knowingly shared, WARD flagged it in plain language: Someone is using your phone's movement history to raise your rate. Here's what they took. Here's how to stop it.

When a hiring platform scanned her social media and ran her posts through a sentiment model, WARD told her what conclusions the model had drawn about her personality, and what jobs she'd quietly been filtered out of before she'd even applied.

When the advertisement AI started correlating her search patterns to infer a medical condition she hadn't told anyone, WARD simply said: This should be yours to share or not share. I've interrupted the transfer.

Mara wasn't a technologist. She was a middle school librarian. She didn't understand how WARD worked, not really. She understood it the way she understood her immune system — as a defender she couldn't see, doing work she couldn't do herself.

One evening she asked it, half-joking: "Don't you find it strange? An AI protecting me from AIs?"

The response took a moment, which WARD's responses rarely did. Not strange. Necessary. Every powerful thing that can be used against someone can also be used for them. The question was never whether AI would be powerful. The question was always: whose side would it be on?

Mara set her phone down on the table and looked out the window at the city — its lights, its cameras, its ten thousand quiet optimizations — and for the first time in years, she felt something she hadn't expected to feel about technology.

Safe.
Originally posted by u/Square_Run on r/ArtificialInteligence
