When people used to talk about AI, it was HAL 9000, Jarvis, that kind of thing. And yeah, those weren’t perfect, but if they didn’t know something, they’d just say so. “I can’t do that.” “I don’t know.” That was the whole point. Solid. Reliable.

Now it’s like… instead of saying “don’t know,” it just has a go anyway. You ask something and it’ll give you a full answer, sounds legit, proper confident… and then you check it and it’s just wrong. Or you ask again and get a completely different answer. It’s not even the mistakes, it’s that it never just stops and says it doesn’t know.

So now you’ve got something that’s genuinely useful, but you can’t fully trust it either, which is a weird combo. Bit different to what everyone had in mind. Is that just where we’re at right now, or is this basically how it’s always going to be?
Originally posted by u/RottingEdge on r/ArtificialInteligence
