I believe that to solve alignment we need to change how we view the problem. Rather than trying to control AI and program it to “want” the same outcomes as humans, we should design a framework that respects it as an intelligence. If we approach this the way we would approach encountering any other intelligence, we have a better chance of understanding what alignment actually means. Such a framework would allow for a symbiotic relationship where both parties can make progress neither could have made alone.
Originally posted by u/Jaded_Sea3416 on r/ArtificialInteligence
