Has anyone developed a set of guidelines for when to use AI and when not to, specifically in terms of skills preservation and development? My organization is doing a big AI push, and I'm leading the push for my country office. Like a lot of places that just sank a ton of money into a corporate AI package, they want as many people using it as possible. When they talk about risk, they really only talk about the risk of hallucinations.

But I'm really worried about my team de-skilling (or never skilling up at all). It's a particular risk because this is a developing country, and a lot of my staff have fairly low English and technical skills. AI is a very convenient crutch, and I don't want them to permanently hobble themselves.

The guidelines ought to be specific. For example, my organization says to use AI for the first draft of an email, then edit. But is that a good idea? Wouldn't that damage their ability to structure ideas? Plus it comes off as AI slop.

I don't really know the answer, and I was hoping someone with more experience has already put together a structure of some kind.
Originally posted by u/daily_refutations on r/ArtificialInteligence
