Internet Search ... Now Less Reliable!
I'm far from an expert on AI, but I'm not a latecomer. I’ve been following the development of AI-based text generation for at least four years, mainly through the blog of Janelle Shane.
Unlike me, Shane is an expert on AI, an academic who isn’t afraid to turn a poorly trained AI text generator loose on silliness like candy-heart slogans, paint colors, or recipes.
I’ve even tried some of the same things – having AI give stadiums or players nicknames, or create art of Aaron Rodgers in the style of the Great Masters.
There aren’t many things funnier than a machine being stupid and not knowing of its stupidity … just like there are few things more dangerous than a machine lying and not being able to understand that it’s not telling the truth.
And that’s the current state of AI-“enhanced” search.
If you haven’t been following the headlines, all the major search engines – which really just means Google, plus a knucklebone and a hank of hair – have announced they’ll be adding AI-generated “guidance” to their results.
In the case of DuckDuckGo, that seems to mean just rewriting Wikipedia and Britannica entries, which sounds plagiaristic and not very helpful.
However, it gets dangerous in the case of Bing’s chatbot, which not only spouts untruths but gets salty about it.
I’ll let you read Shane’s post in its entirety, but I want to call out one very important thing she says:
“Bing chat is not a search engine; it's only playing the role of one. It's trained to predict internet text, and is filling in a search engine's lines in a hypothetical transcript between a user and a chatbot.”
Because AI-powered text generation isn’t capable of telling right from wrong, it doesn’t even try. Its job is to create text that sounds plausible based on what it’s been trained on, which in this case is the internet – a collection of words that is never, ever wrong, except most of the time.
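That idea – plausibility without any notion of truth – can be sketched with a toy example. This is a deliberately crude illustration, not how real chatbots are built (they use large neural networks, not word-pair counts), and every name in it (`corpus`, `follows`, `generate`) is invented for the sketch. But the core indifference to truth is the same: the model only knows which words tend to follow which.

```python
import random

# A toy "language model": pick each next word based purely on which
# words followed it in the training text. It has no concept of truth,
# only of which word sequences look familiar.
corpus = (
    "the moon is made of rock "
    "the moon is made of cheese "
    "the moon orbits the earth"
).split()

# Count which words follow which in the training text.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, length=6, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", seed=1))
```

Depending on the draw, this model will happily assert either “the moon is made of rock” or “the moon is made of cheese” – both are equally “plausible” to it, because both appeared in its training text. Scale that up to a model trained on the whole internet, and you get search results with exactly the same problem.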
So, just to recap: by adding AI to search, Google et al. have made search less reliable and less useful … and they stand poised to wreck years of diligent SEO work by some really smart people.
Now, building something nice just to burn it all down has been the internet’s M.O. almost since the outset, so this isn’t surprising. But what’s surprising is the number of people who seem to think this is a good thing.
Reactions to the rise of AI-powered text tend to fall into one of two camps: the “Hell no!” camp, which ignores some reasonable and quite useful applications of AI-generated text, and the “To serve man” camp, which overlooks all the inherent biases in an internet-trained AI, as well as its proclivity to lie without compunction.
In the latter camp I’ve seen people, in all apparent seriousness, ask ChatGPT for investment advice or marketing strategies. It just makes me want to yell, “Don’t you understand? It’s going to regurgitate all the crackpot rantings from Twitter and Reddit as fact – and you’re going to lap it up!”
Or, as Shane concludes in her post, “I find it outrageous that large tech companies are marketing chatbots like these as search engines … If a search engine will find what you're asking for whether or not it exists, it's worse than useless.”
Let me give you a different example of the problems AI-generated search can cause.
Trivia – Not Trivial
I play in the World’s Largest Trivia Contest, run by WWSP-FM in Stevens Point, Wis. I’ve played in it for almost 30 years – and in fact, I was the first national journalist to write about it, way back in 1982.
The team I play with, U Bet Your Sweet Oz, has a motto: “Trust But Verify.” We also have a preferred method of looking up answers: Google.
Suppose we’re asked the question, “What is the title held by fictional character Vic Gook in the Sacred Stars of the Milky Way?”
Normally, we’d type that question into Google – and if we did today, we’d get the right answer, from Wikipedia: Vic Gook was the Exalted Big Dipper of the Drowsy Venus chapter.
However, Wikipedia can be and has been changed by trivia teams in the course of the contest, and other lodge members are named on the Wikipedia page, including Hank Gutstop, the Exalted Little Dipper.
An AI chatbot, because it’s just playing at providing information, is just as likely to glom onto the changed information, the information about a different character, or lodge titles of different fictional characters and present them as the right answer – and then get snippy if it’s challenged.
If we can’t verify this information in a book or through some other means, we have to accept the chatbot’s answer as truth … and more than likely, we’d be wrong.
What makes this sort of interaction even more curious is the way Google determines top-of-the-page sites to start with.
If the experts are to be believed, Google ranks sites based on their authority – that is, on their ability to be a source of truth.
If you’re looking for information on Tucker Sno-Cats, you go to the manufacturer’s website. Not only does it rank highly on Google, but it has a high degree of authority, since it’s the website of the company that makes the darn things.
In many of the mockups I’ve seen, AI-generated copy would supplant the Tucker site at the top of the page, and it stands as much chance of being truthful as Donald Trump at a hush-money deposition.
Listen, I'm not saying we turn back the clock. AI-based content is here to stay, and it’s coming to search. But can we all stop for a moment and think about what we’re trying to accomplish here, and whether AI really helps us accomplish it?
Because if truth is what we’re searching for, AI is not the answer.