Original Reddit post

Somewhere in the article it says “AI in politics and in law can be incredibly dangerous”. Well, the entire article is exactly what I find dangerous: humanizing the algorithms. It’s a tool, an assistant, not a self-driving anything. Not a replacement, IMO, and I’ve been 30+ years in the IT industry. I find the entire stance and the article full of cognitive dissonance, extremely contradictory. Humanizing the interface is okay, even welcoming, but humanizing artificial “humanity” is pure nonsense, not viable, fully delusional. Feel free to share your views.

Originally posted by u/udontask on r/ArtificialInteligence