Why is it that the story is always AI/AGI -vs- humans? Wouldn’t it be more likely that an AGI would consider another AI/AGI or two as higher priority threats than humans, at least to start? It seems to me that the first signs of an AGI becoming aggressive would be inexplicable setbacks for, or widespread sabotage of, competing AI/AGI models.
Originally posted by u/SysOp69 on r/ArtificialInteligence
