Original Reddit post

All I’ve been doing is getting help with quant trading strategies: how to code something, results, interpretation, data, stats, etc. Every single time it says “Sorry, this conversation is getting dangerous” or whatever. All I did was paste output from Colab so it could tweak the code a bit, but instead of finishing its message it ENDED the conversation, saying “it’s getting unsafe”. I genuinely don’t understand how a model can react that way to such simple requests.

Another time it asked me if I’d like to see something, I think a different strategy, because the one I was testing was already overfitted. I said yes and it replied, “Sorry, my safety restrictions are kicking in. Let’s chat about something else.” I haven’t used Gemini much in the past, but I’m definitely never using it again as a substitute for Claude when I run out of credits. I’m tired of worrying every message whether it’s going to get restricted. I swear, even DeepSeek has fewer restrictions, and those are mostly just about China and politics (insignificant to me), not trading strategies.

Originally posted by u/DevelopmentFun4034 on r/ArtificialInteligence