What The World Needs Now Is More Garbagier Garbage, Apparently
For personal and professional reasons I’ve been spending a lot of time with AI. The class I teach next semester will include a heavy dose of how AI is changing marketing systems and platforms, so I have to be up to speed on all the integrations and enhancements – which come fast and furious these days, let me tell you.
In addition, my latest album, Hollywood Park Vol. 1, features song titles that were generated by AI after being prompted to create names of new songs Buddy Holly and the British New Wave rocker Graham Parker might have written had they collaborated (a heavy slog that was, getting something other than “Peggy Sue Got Soul” out of ChatGPT).
And I’ve been using AI to help create social content and suggest outlines for blogs that I write for clients. (Would I ever let GPT-4 write the actual blogs for clients? Lord, no. It’s nowhere near good enough.)
Even doing all that, I don't presume to call myself an AI expert by any means. But I snapped to attention today when I saw a MediaPost headline that read "Almost One Third Of Top News Sites Have Blocked AI Crawlers: Report."
The story is exactly what you'd expect: a straightforward account of how "CNN, The New York Times, The Daily Mail, Reuters, and Bloomberg … have blocked at least one AI crawler," generally the GPTBot.
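For the record, the "blocking" here is typically done through a site's robots.txt file, which compliant crawlers honor under the Robots Exclusion Protocol. OpenAI publishes GPTBot's user-agent string, so a sketch of the kind of entry these sites have added looks something like this:

```
# robots.txt — asks OpenAI's GPTBot not to crawl any part of the site
User-agent: GPTBot
Disallow: /
```

It's worth noting this is a polite request, not a locked door: it only works because GPTBot has committed to respecting it.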
Bravo, you may say. Hooray for them, standing up for journalistic accuracy and copyright law.
Me, I'm not so sure.
Let me state for the record that I think ChatGPT and its ilk are the greatest threat to artists and writers in history. This mulching lawn mower of a content generator is the Doomsday Machine for creatives.
However, much as we may want it otherwise, ChatGPT isn't going back in the box. It's a part of our world now, something we have to live and work with, and something that has to be trained.
And if it's not being trained on good stuff, it's going to turn out stuff even worse than the mediocre stuff it was trained on.
That’s why it bothers me that the purveyors of well-written, well-researched, solid foursquare journalism are blocking AI bots.
Without their content to balance the garbage, the odds are much greater that AI tools will reinforce lies and hate speech, perpetuate stereotypes, and generate content that is blander and blander while at the same time being less and less reliable.
And the thing is, I don’t think news outlets are blocking ChatGPT on purely moral grounds.
Earlier this year there were reports that OpenAI was negotiating with major news outlets to use their content for AI training. That story faded away, but what I saw today suggests that the main reason for blocking the GPTBot was because the negotiations didn’t go well – and by “didn’t go well,” I mean that OpenAI didn’t offer enough money.
I mean, suppose The New York Times invented ChatGPT. Do you think it would be blocked from searching all the paper’s content, from Times Machine to Wirecutter to its recipe for Cheesy Pan Pizza to everything else that’s currently behind one or another paywall?
Of course not.
And had The New York Times invented ChatGPT, would Sam Sifton or Tyler Kepner or Richard Sandomir have benefited materially from their content being used to create new stuff?
Not bloody likely.
(Update: As of Sept. 21, per Coverager, The New York Times was looking for a generative-AI lead in its newsroom – more proof that the Times isn’t against AI; it’s just against AI it’s not getting paid for.)
See, the major news outlets that are blocking ChatGPT aren’t doing it out of concern for creators or journalistic anything, but out of concern for their bottom lines. A chatbot that can write accurate news stories based on verified information is bad news for them.
It’s only coincidentally ethically sketchy.
Meanwhile, the problem faced by AI is the same problem faced by every other database ever, namely: garbage in, garbage out. Just having the Associated Press among the major news outlets to draw content from isn’t going to cut it, especially when the AP is already auto-writing some stories.
So what’s the answer, smart AI guy?
Well, the ultimate answer is a blanket agreement on AI, a three-way quid pro quo that allows ChatGPT and its ilk to train on high-quality data while protecting the original source material and financially remunerating outlets and creators.
However, three big things have to change for that to happen:
AI platforms have to dedicate themselves to sourcing high-quality, diverse, authoritative data, in the same way that a great restaurant sources high-quality, locally grown ingredients.
News outlets have to acknowledge that the current platforms are largely going to be the leaders and drivers in AI, and work towards mutually beneficial collaborations that combine altruism with an appropriate amount of money changing hands.
Platforms and news outlets need to create a channel for compensating creators for their creations – whether the original material was produced on a work-for-hire basis, under an employment contract or some other arrangement, or simply buried in their backyard (read down to the bottom of the second paragraph; it’s worth it).
The end in mind here is an AI system that produces accurate, useful, but (still) clumsily written material that can be used without hesitation in a variety of ways, at a cost that’s incredibly reasonable, with proceeds benefiting outlets, creators, and the platform itself.
That requires sharing, which we've known since kindergarten is hard, but only by acknowledging and sharing what everyone brings to the equation can the equation be solved.
In the meantime, the garbage will proliferate, waiting to be swept up by the GPTBot to produce even more garbagier garbage.
And so it goes.