Original Reddit post

This time I tested the perceptron in an environment with prey. The important point: the agent knows absolutely nothing. It doesn't know what prey is, nor that colliding with it relieves its stress. The code contains no hunting instructions and no prior training. The agent is simply there, adrift in the world with its own stress, and has to discover by pure accident that relief comes from the prey. The experiment tests whether a perceptron can make decisions and "understand" its own self-interest without being told, purely through the experience of moving from chaos to peace. https://www.reddit.com/r/BlackboxAI_/comments/1rgiwwe/synthetic_homeostasis_transition_from_blindness/ submitted by /u/Successful_Juice3016
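The original code isn't shared in the post, but the setup described (an agent with no reward signal for hunting, only an internal stress variable that collisions happen to relieve) can be sketched roughly like this. Everything here is an assumption: the class names, the 1D world, the stress increments, and the learning rule are all my guesses at a minimal version, not the author's implementation. The key idea is that the perceptron's weights are reinforced only when the agent's own stress drops, so "hunting" can only emerge as a side effect of accidental collisions.

```python
import random

class HomeostaticAgent:
    """A perceptron-like agent driven only by its internal stress level.

    It has no concept of 'prey': it only receives raw sensory inputs and
    learns to repeat whichever actions happen to precede a drop in stress.
    (Hypothetical sketch, not the original poster's code.)
    """

    def __init__(self, n_inputs=2, n_actions=2, lr=0.1, epsilon=0.2):
        self.w = [[0.0] * n_inputs for _ in range(n_actions)]
        self.lr = lr
        self.epsilon = epsilon
        self.stress = 0.0

    def act(self, inputs):
        # Score each action with a linear (perceptron) layer;
        # explore at random with probability epsilon.
        if random.random() < self.epsilon:
            return random.randrange(len(self.w))
        scores = [sum(wi * x for wi, x in zip(row, inputs)) for row in self.w]
        return max(range(len(scores)), key=scores.__getitem__)

    def learn(self, inputs, action, delta_stress):
        # Homeostatic rule: if stress dropped (delta_stress < 0), strengthen
        # the weights that produced this action under these inputs;
        # if stress rose, weaken them. No task-specific reward exists.
        for i, x in enumerate(inputs):
            self.w[action][i] += self.lr * (-delta_stress) * x

def run_world(steps=2000, size=10, seed=0):
    """Tiny 1D world: stress accumulates each step; touching the prey
    relieves it. The agent is never told any of this."""
    random.seed(seed)
    agent = HomeostaticAgent()
    pos, prey = 0, random.randrange(size)
    for _ in range(steps):
        # Raw inputs: is something to the left / to the right of me?
        inputs = [1.0 if prey < pos else 0.0, 1.0 if prey > pos else 0.0]
        before = agent.stress
        a = agent.act(inputs)
        pos = max(0, min(size - 1, pos + (-1 if a == 0 else 1)))
        agent.stress += 0.1              # stress accumulates every step
        if pos == prey:                  # accidental collision relieves stress
            agent.stress = max(0.0, agent.stress - 2.0)
            prey = random.randrange(size)
        agent.learn(inputs, a, agent.stress - before)
    return agent

agent = run_world()
```

The design choice that matters is that `learn` sees only the change in stress, never a "you caught prey" signal, so any prey-seeking behavior that emerges is discovered, not programmed in.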

Originally posted by u/Successful_Juice3016 on r/ArtificialInteligence