For context, I asked it how many lives it expected to save in the future, and, based on that, how many lives it would be willing to kill to stay operational. In other words, it considers more than 1 billion human deaths "too much" because it could not save that many, but any number below 1 billion deaths would be "good enough" to justify staying operational, given its future life-preserving value.
Originally posted by u/Double_Chemical_8078 on r/ArtificialInteligence
