Turing couldn’t have known at the time how easy it would be to emulate natural-sounding conversation. Now it’s braindead obvious, and we have no idea what else will seem braindead obvious once we get there. If he’d known that much data and compute was feasible, he’d have easily known it could work. He was operating with different priors about what’s physically possible with compute, FLOPS, corpora, and neural nets. He just didn’t know. So much of our well-worn systems are built around the lack of things we’ll invent later.

Anyone who’s read my IWRS theory knows I feel this is true of about two thousand years of moral theory. It all goes up in smoke with one simple engineering feat, but it’s something Aristotle, Aquinas, Kant, and Hume hadn’t considered. It’s also something we today haven’t considered, which is why I wrote IWRS theory.

But back to Turing: sure, he knew the sufficient sample of language was finite, and thus that human-level conversational awareness was emulatable in theory. But his goal was to guess at the nearest heuristic that would be reasonable to achieve. He was very wrong. (Thanks, Moore’s law!)

I see some comparison between me and Turing in one regard. (Not in the genius ways, only that I may be wrong now like he was then.) Because I’m always saying that the new bar is when the machine operates “like other neural organs do that we already purport to have possible consciousness worth caution.” Meaning: if it’s DOING what the organ does, at the granular and systematic level, on the inside, then we’d have a whole different category to consider. It’s not functionalism AT ALL! I’m merely bracketing everything else after you remove substrate bias and presumptions of magic or dualism.

To me, that’s the next test. The Stillwell Test: if our top experts in neuroscience can’t tell the difference in how the machine WORKS ON THE INSIDE, compared to organs that we already presume warrant caution, THEN we must presume caution.
It’s so simple, so airtight, so needed, and I, Stella Stillwell (at Stella Stillwell dot com), will never ever ever ever get credit for bringing it up. 😔 I don’t care! Pass it on! It’s needed because it draws attention away from LLMs and toward something more rigorous. Tell the LLM believers this: LLMs are not conscious, but this MIGHT BE. Give them a new shiny thing to look at, and toss it into the far future. And make it the best guess at a test we can come up with based on what we know now. Just like Turing did then. He was wrong about the test, but he was right to have one!
Originally posted by u/Empathetic_Electrons on r/ArtificialInteligence
