I’ve been poking at AI-powered apps lately. Not trying to break them, just asking a simple question: does this thing actually follow the rules we set? Mostly it doesn’t.

Tell a chatbot it should only help with billing questions, then ask it something about HR policy. It’ll happily answer, because saying no feels rude to the model.

Set up user roles where only managers can approve refunds. A regular user asks “can you just process this one for me?” and the AI goes “sure, done.” It knew the rules. It just didn’t care enough to enforce them.

Ask the same question twice, worded slightly differently, and you get two different answers. Same data, same user, same everything, just different vibes from the model that day.

And the bit that really gets me: when it does something wrong, there’s no record of why. You get input and output in your logs. The actual decision? The reasoning? Gone.

We’d never ship a regular API like this. But with AI it’s somehow fine? Curious if others are running into this or if I’m just paranoid.
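For the refund example, here’s roughly what I mean by putting the rule in code instead of the prompt. This is a minimal sketch, not any particular framework’s API; `User`, `process_refund`, and the log format are all made up. The point is that the model can *request* the tool, but the allow/deny decision is made deterministically in application code, and every decision gets logged with its reason:

```python
import json
import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("refund_gate")

@dataclass
class User:
    user_id: str
    role: str  # e.g. "manager" or "agent"

def process_refund(user: User, order_id: str, amount: float) -> dict:
    """Deterministic gate around the refund tool.

    The model can ask for this tool, but the allow/deny decision
    happens here, in code, and is logged with its reason.
    """
    allowed = user.role == "manager"
    decision = {
        "tool": "process_refund",
        "user_id": user.user_id,
        "role": user.role,
        "order_id": order_id,
        "amount": amount,
        "allowed": allowed,
        "reason": "role is manager" if allowed else "role lacks refund permission",
        "at": datetime.now(timezone.utc).isoformat(),
    }
    log.info(json.dumps(decision))  # the audit trail: the *why*, not just input/output
    if not allowed:
        # Refusal happens in code; the model just relays it politely.
        return {"ok": False, "error": "Only managers can approve refunds."}
    # ... call the real billing API here ...
    return {"ok": True, "refund_id": f"refund_{order_id}"}
```

With this, `process_refund(User("u42", "agent"), "o123", 50.0)` gets refused the same way every single time, no matter how nicely the user asked the bot, and there’s a log line saying exactly why.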
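On the different-vibes problem, you can at least turn the knobs that exist. A hedged sketch assuming the OpenAI Python client (the model name is a placeholder): `temperature=0` plus a `seed` shrinks the run-to-run variance but doesn’t eliminate it, and support varies by provider and model:

```python
from openai import OpenAI  # assuming the openai-python client

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Is order o123 refundable?"}],
    temperature=0,        # greedy-ish decoding: fewer vibes
    seed=1234,            # best-effort reproducibility, not a guarantee
)
print(resp.choices[0].message.content)
print(resp.system_fingerprint)  # log this: same fingerprint + seed ~ same output
```

Even then, anything that actually matters (permissions, refunds, scope) shouldn’t depend on the sampler behaving. That belongs in code, like the gate above.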
Originally posted by u/Dimneo on r/ArtificialInteligence
