okay, real talk: a lot of what’s being called “AI agents” right now still feels like prompt chains with extra steps. useful sometimes, but not exactly a new category of coworker. i’ve been messing with custom agents on the side for a while, though, and the part that keeps sticking with me isn’t “can it finish the task?” it’s what happens when the agent sticks around. when it has long-term memory, real tool access, and continuity across sessions, it stops feeling like a one-off task runner and starts feeling more like a persistent role inside a workflow. not a person, obviously. but also not just a button you press.

that’s where it gets weird for me. once an agent has continuity, it starts to develop what i can only describe as a stable disposition. it pushes back on certain requests. it has preferences about how things should be done. sometimes it refuses something, or suggests a different direction before doing the work.

part of me thinks that might be useful. in human collaboration, a teammate with a point of view is often more valuable than a yes-machine. another part of me thinks this might just be anthropomorphic noise getting in the way of control, reliability, and auditability.

i don’t want to overclaim anything here. i’m mostly trying to sort out where people draw the line. would you trust a persistent agent inside your actual workflow, or is that loss of control a non-starter? is “personality” useful for collaboration, or just UX theater? and if an agent has memory plus tools, where should its autonomy stop?
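to make the “memory plus tools plus an autonomy line” idea concrete, here’s a minimal toy sketch: long-term memory is just a JSON file that survives across sessions, tools live in a registry, and the autonomy boundary is an explicit flag on each tool marking whether the agent may run it unprompted or must refuse until a human approves. every name here (`PersistentAgent`, `requires_approval`, the example tools) is hypothetical, not any real framework’s API.

```python
import json
import os

class PersistentAgent:
    """Toy persistent agent: file-backed memory + gated tool access."""

    def __init__(self, memory_path, tools):
        # tools: name -> (callable, requires_approval)
        self.memory_path = memory_path
        self.tools = tools
        self.memory = self._load_memory()

    def _load_memory(self):
        # continuity across sessions: reload whatever the last run saved
        if os.path.exists(self.memory_path):
            with open(self.memory_path) as f:
                return json.load(f)
        return {"history": []}

    def _save_memory(self):
        with open(self.memory_path, "w") as f:
            json.dump(self.memory, f)

    def run_tool(self, name, arg, approved=False):
        # the autonomy boundary: some tools run freely, others are
        # refused (and the refusal is logged) unless explicitly approved
        func, requires_approval = self.tools[name]
        if requires_approval and not approved:
            self.memory["history"].append(("refused", name, arg))
            self._save_memory()
            return None
        result = func(arg)
        self.memory["history"].append(("ran", name, arg))
        self._save_memory()
        return result
```

the design choice worth noticing: refusals get written to the same persistent history as successful runs, so the “disposition” is auditable rather than vibes. a second `PersistentAgent` pointed at the same file picks up that history, which is all “continuity across sessions” means here.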
Originally posted by u/judyflorence on r/ArtificialInteligence
