Baldurdash Psychics and AI
A fascinating article wandered across my feed, arguing that LLMs like ChatGPT replicate the mechanisms of a psychic's con. 'Fascinating' because it's devilishly clever, and quite seductive, but seems like complete bunkum for the most part.
'People get fooled into thinking LLMs are clever', Baldur Bjarnason says, very much like they are fooled by a psychic, who does cold-reading, and uses Barnum statements. The article gives us a long list of things psychics do, and finds a parallel for each with LLM-enthusiasts.
- The Audience Selects Itself
Most people aren't interested in psychics or the like, so the initial audience pool is already generally more open-minded and less critical than the population in general.
This is entirely true - people who pretend to have psychic powers don't go around sceptics' meetings. In fact, they're better off avoiding any event with members of the general public; they must ensure their entire audience already feels receptive to the act.
And the article finds common ground on this point, somehow, with chatbots.
People sceptical about "AI" chatbots are less likely to use them. Those who actively don't disbelieve the possibility of chatbot "intelligence" won't get pulled in by the bot. The most active audience will be early adopters, tech enthusiasts, and genuine believers in AGI who will all generally be less critical and more open-minded.
It's true that people who like chatting with chatbots chat to chatbots more than people who don't. And it's also true that people who are thoroughly bored of chatbots getting stuffed into every telly, toothbrush, and shoe are more likely to read blog posts about how chatbots suck.
Has this article self-selected me? I'm certainly bored of chatbot-babble. Should I be worried the article is 'priming' me?
- The Scene is Set [ etc. ]
The article continues for nearly 2,000 words, describing the tricks psychics use before we return to the parallels with chatbots.
Users are primed by the hype surrounding the technology.
"Users are primed by the hype" seems to mean the same thing as 'there is AI hype'.
The article sounds like it's pulling out parallels, but when you stop to think about it...something feels wrong, kind of like...I'll come back to that after a look at parallel number four:
- The Marks Test Themselves
The chatbot's answers sound extremely specific to the current context but are in fact statistically generic.
Yes! That's the sentence I wanted.
The [ article's comparisons ] sound extremely specific to [ chatbots ] but are in fact generic [ blogosophy ].
The article continues with true statements, without any effort to establish relevance.
Our current environment of relentless hype sets the stage and builds up an expectation for at least glimmers of genuine intelligence.
Yes, the hype-train has certainly been full of questions of 'what is intelligence?', and 'will this be dangerous?', but so what? The article clearly wants to nudge us towards 'AND THAT IS HOW THEY FOOL YOU!', but nobody's been fooled by mismanaged Philosophical questions from marketing departments, because nobody gives a shit about Philosophy until after the second pint.
And if anyone does think of chatbots in terms of intelligence, the 'hype' has far less effect than the two big factors:
- It's trained on written thoughts, and designed to output new writing, which looks like that thought, with a response to whatever people tell it.
- It's weirdly good.
I won't back up the claim that chatbots are 'good' at things, but I can back up the 'weirdly' bit with this anecdote.
I made a riddle (like the ones Gollum asks in The Hobbit). It wasn't very clever or fancy, but it was new. I made it as the answer to a bigger puzzle I'd made. It went 'smaller than X, farther than the sky', or something similar (can't remember), and the chatbot got the answer right: stars.
That is weird, because the riddle shouldn't be in the training set. I hadn't written it down except on paper. So while it may not be useful to ask a machine 'who married the queen of tarts?', and get the reply 'the king of muffins', it is still very weird that the answer kind-of fits. And if we somehow located someone who had never been subject to the AI hype, they would still say 'this machine looks a bit like it's thinking and knows stuff'.
- The subjective validation loop—RLHF enters the picture
Then the article moves on to blatant ambiguity - the kind of thing that wouldn't have made much sense earlier, when it was trying to get the reader on board, but which it can now get away with.
Instead of pretending to read minds through statistically plausible validation statements, it pretends to read and understand your text through statistically plausible validation statements.
Or, putting it another way:
[LLMs] pretend to read and understand your text through statistically plausible validation statements.
Correct again!
- Psychics use stats (i.e. statements they know are likely true), to make plausible-sounding statements.
- LLMs use stats (i.e. Markov chains) to make plausible-sounding statements.
...both correct, but bundling the two meanings of 'stats' and 'plausible statements' makes as much sense as solving Epistemological problems by using an XOR operator because 'both are about truth'.
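To be fair to the first sense of 'stats', the mechanism the article gestures at is real enough. Here's a minimal sketch of 'statistics in, plausible-sounding statements out', using a toy bigram Markov chain. (This is my illustration, not the article's, and a deliberate simplification: real LLMs are vastly more than bigram counts, which is rather the point.)

```python
import random

def train_bigrams(text):
    """Count which word follows which - these counts are the 'stats'."""
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the chain, each step picking a statistically plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the stars are far the stars are bright the sky is far"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every sentence it emits is locally plausible - each word really did follow the previous one somewhere in the training text - without the machine 'knowing' anything. Whether that observation tells you anything about psychics is another matter.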
This leaves the article (or 'primes the reader'?) ready to give a conclusion to parallel 5:
The validation loop can continue for a while, with the mark constantly doing the work of convincing themselves of the language model's intelligence. Done long enough, it becomes a form of reinforcement learning for the mark.
This sounds like it's claiming that people who use LLMs for longer become more convinced of their intelligence. This may be true, but where's the evidence? Is it just in the parallel?
Clearly, we can expect some survivorship bias - anyone who thinks LLMs are rubbish won't continue using them. But do people who use them long-term really come to think they're smarter than when they began using them? Did the author forget to mention the evidence, or did he just 'hallucinate'?
- The marks become cheerleaders
And people in pyramid schemes like pyramid schemes. And people who like fishing talk about fishing.
Anyway, that's enough of Baldur's Barnum statements for an evening.
As Batman once said, "be careful when you fight monsters, or you'll turn into a right cunt".