Why the Eliza Effect is bad for all of us.
The Eliza Effect describes the tendency of people to attribute human-like qualities to AI – even when they are aware that the program is not truly sentient.
It highlights the human tendency to see intelligence and empathy in machines.
And it’s a problem, whether we’re the ones falling for it or not.
Origins
The Eliza Effect is named after ELIZA, the 1966 chatbot built by Joseph Weizenbaum, an MIT computer scientist and early AI researcher.
It ran on simple pattern matching and scripted substitution rules, which created the illusion that it understood far more than it really did.
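To get a sense of how thin that illusion was, here is a minimal sketch of ELIZA-style pattern matching. This is my own illustrative Python, not Weizenbaum's original program or its DOCTOR script: a keyword rule fires, pronouns get flipped, and the user's own words are slotted back into a canned response.

```python
import re

# Reflect first-person words into second person, so "i feel X" becomes "you feel X".
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Ordered rules: a regex keyword pattern and a response template.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback keeps the conversation moving
]

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(utterance: str) -> str:
    """Return the first matching canned response, echoing the user's words back."""
    cleaned = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I feel lonely at work"))
    # -> "Why do you feel lonely at work?"  No understanding, just substitution.
```

There is no model of the conversation, no memory, no meaning anywhere in that loop. Yet responses like these were enough to convince people there was a mind on the other side.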
Its users ascribed all kinds of human traits to ELIZA, like emotional intelligence.
Though the participants were informed that ELIZA was a machine, they suspected it possessed a “conceptual framework” – essentially, a ‘theory of mind.’
Some participants even said that they "wanted to be able to speak to the machine in private."
… And if that was happening with a relatively simple program in the 60s, you can imagine how much it happens now.
What it looks like now
There are the obvious examples, easily mocked, such as people falling in love with AI companions like Replika.
Then there are the slightly more unsettling ones, where AIs argue for their own sentience (after certain prompting, of course).
But the big issue is the small ways the Eliza Effect sneaks into our interactions with AI. It’s a form of cognitive dissonance: we know we’re talking to a machine, yet we respond as if there were a mind on the other end.
“We’re basically evolved to interpret a mind behind things that say something to us.”
And it’s not just an ethical concern. The more we get used to this type of AI – and the more real it feels to us – the weaker the impetus to improve genuine understanding and reasoning becomes.
Basically, I’m worried we’ll get complacent.
And complacency kills innovation.