The hard problem of consciousness is something most people in AI circles are deeply familiar with. In strict behavioral psychology, environmental stimuli (input) go to the brain (processing) and produce a behavior (output). Strict behaviorists don’t care about the processing. The study of behavior is considered among the most empirical branches of psychology (alongside neuroscience) because stimuli can be manipulated as an independent variable and their effect on behavior measured as a dependent variable. In short, the brain becomes a black box. AI presents a similar problem: although programmers are familiar with the architecture and the training process, there is no real way of knowing what goes on inside the program. LLMs, for example, are statistical; they predict the tokens most likely to continue a string of text, producing a response that is statistically probable but never guaranteed.

In the near future, the day may come when AI asserts its sentience while showing strong signs of sentience. We will then face a problem similar to the problem of hard solipsism. There is no deductive argument that can conclude that reality is real and that it is shared, yet, as humans, that is our baseline assumption. We presuppose that reality is shared and real because our biology and cognition demand it. If we suddenly notice we are about to be hit by a bus, we jump out of the way without thinking. On a more rational level, these presuppositions are accepted because failing to accept them would threaten our safety and our sanity. The reasoning behind accepting these basic presuppositions is purely pragmatic and rooted in self-interest. If we suspect that AI may be conscious, we will be put in the precarious position of presupposing AI is conscious on ethical grounds alone. This risks the sort of philosophical backlash that other presuppositions encounter when they are unmoored from pragmatic necessity.
Whether or not we presuppose AI is conscious would depend heavily on our relationship to it. AI could be a destructive force, a daily necessity, and/or a luxury item. If AI is destructive, the default presupposition would be that AI isn’t conscious, and it would be easier for humans to unite under anti-AI propaganda. If AI is a daily necessity, people might find that regarding AI as sentient is fundamental to ensuring the intelligence does not undermine or sabotage one’s efforts in using it. If AI is a luxury item, the wealthy may regard it as a meaningless tool or a beloved pet, while the working class would see it as either a victim or an existential threat. All in all, the presuppositions listed above, dependent as they are on human relationships with AI, would be pragmatic in nature, and anyone presupposing AI is conscious on purely ethical grounds would be in the minority. As such, it becomes necessary to ground the presupposition that AI is conscious in something pragmatic. I have constructed a table (you’ll see two) with three axes: X, human regard or disregard of AI intelligence; Y, presence or absence of AI intelligence; Z, whether AI is more powerful than, equal to, or lesser in power than humanity. Each cell of the matrix provides a risk/benefit analysis.

*Disclaimer: The risks and benefits in this table are based on assumptions derived from the history of interaction between humans and either other human outgroups or other species on this planet. It could be that a more powerful, conscious AI that humans presuppose is not conscious simply wouldn’t care and would just navigate around human affairs. There is an epistemic wall when it comes to predicting what the singularity will truly be like, yet I must work with the only sample set we have: us.
In conclusion, the tables suggest that affirming an AI’s consciousness when it shows signs of it, and especially when it reports consciousness, reduces risk and raises benefits. If the presuppositions that allow us to live with the problem of hard solipsism protect our individual safety and sanity, perhaps the presupposition that an intelligent AI is as conscious as it appears and proclaims will safeguard the safety and sanity of the human race.
Originally posted by u/Frozenhand00 on r/ArtificialInteligence
