Original Reddit post

Hey, I’m on a small B2B marketing team. For the past month I’ve been trying to set up agents in Copilot Studio to support our marketing, sales, and customer success teams. I’m focused on Copilot rather than LLMs like ChatGPT or Claude simply because we’ve already got licenses and we already use 365 across the business, so its native connection to our information seems like a big advantage. However, I’m very worried that I’m beating a dead horse.

My primary goal is to help our teams save time. I want to develop 3 agents which act as marketing, sales, and CS experts. Each agent would then be able to perform specialist tasks - for example, analyzing ad metrics, drafting sales email copy, critiquing a CS call transcript - as well as providing general advice, acting as an expert in its respective field, e.g., sales. But after a month of experimenting I’ve still not achieved this goal. I’ve tried two approaches, with dozens of variations:

Approach #1 - Building singular agents with crystal-clear instructions on what to do and when. This didn’t work because, even though I thought the instructions were clear, the agent would usually get confused and produce the wrong response (e.g., when asked to refer to the document with template X to produce a response in template X, the agent would respond with template Y).

Approach #2 - Building parent agents dedicated to routing to specialist child agents via topics. I thought this would solve the problem I was facing with approach #1, but it didn’t work because the agents became too specialised and narrow (e.g., a child agent dedicated to creating sales messages wouldn’t be able to then suggest ideas for a follow-up email) - and sometimes it had approach #1’s problem anyway.

The biggest challenge has been inconsistency in responses. I’ll give the same agent the same prompt 5 times in a row, expecting it to follow its instructions and produce a response in a specific format, and it’ll give me 5 different responses.
Sometimes it gets stuck in a loop of asking endless clarifying questions, sometimes it gives me a response in a format it’s invented (rather than the template I’ve provided), and sometimes it just gives me a “sorry, I can’t do that” message - all from the same prompt.

The most frustrating part is that I can’t diagnose the root cause. When I ask Copilot why it’s getting it wrong (even providing screenshots), it most often fails to explain exactly what’s going wrong, and invents solutions that don’t exist (like pointing me to settings which don’t exist). Microsoft Learn doesn’t provide any documentation that helps, either.

I’ve been using ChatGPT Pro solo for the past 3 years for everything in my job - drafting, editing, analytics, research, advice, you name it. It just works - it’s like my colleague at this point. Copilot feels like a massive step back. And I’m very aware that Claude is now generally regarded as ahead of ChatGPT. I’ve been trying to find any research online that directly compares Copilot with other options, but there’s very little out there.

So I’ve got a simple question. Am I wasting my time with Copilot? Should I forget about building agents in Copilot Studio and make the case for Claude Team licenses instead? Or should I keep trying?
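[Editor's note: the parent/child routing idea described in Approach #2 can be sketched outside Copilot Studio. The snippet below is a minimal, hypothetical Python illustration - the router, handler names, and keyword matching are all invented for this example and are not Copilot Studio's API. The point it shows: each specialist falls back to a shared general-advice handler instead of refusing, which avoids the "too narrow" failure mode described above.]

```python
# Hypothetical sketch of a parent "router" agent dispatching to specialist
# child handlers. Not Copilot Studio's API - just the pattern in plain Python.

def general_advice(domain: str, request: str) -> str:
    # Shared fallback: every specialist can still give broad advice.
    return f"{domain}: general advice on '{request}'"

def sales_specialist(request: str) -> str:
    # Specialist task: draft sales email copy.
    if "email" in request.lower():
        return "sales: drafted email using template X"
    # Fall back to general sales expertise instead of refusing.
    return general_advice("sales", request)

def cs_specialist(request: str) -> str:
    # Specialist task: critique a customer-success call transcript.
    if "transcript" in request.lower():
        return "cs: critique of call transcript"
    return general_advice("cs", request)

ROUTES = {
    "sales": sales_specialist,
    "cs": cs_specialist,
}

def parent_router(request: str) -> str:
    # Route on the first matching domain keyword; anything unrecognized
    # still gets a general answer rather than an error.
    for domain, handler in ROUTES.items():
        if domain in request.lower():
            return handler(request)
    return general_advice("general", request)
```

For example, `parent_router("sales follow-up ideas")` reaches the sales specialist but, finding no email-drafting task, returns general sales advice rather than "sorry, I can't do that".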

Originally posted by u/51765177 on r/ArtificialInteligence