By John Gruber
Elizabeth Lopatto, writing for The Verge, “Stop Using Generative AI as a Search Engine”:
Now, a defender of AI might — rightly — say that a real journalist should check the answers provided by ChatGPT; that fact-checking is a critical part of our job. I agree, which is why I’ve walked you through my own checking in this article. But these are only the public and embarrassing examples of something I think is happening much more often in private: a normal person is using ChatGPT and trusting the information it gives them.
A mistake, obviously.
One advantage old-school Google Search has over the so-called answer engines is that it links directly to primary sources. Answer engines just give you an answer, and it’s often unclear what the source is. For me, using ChatGPT or Google’s AI function creates extra work — I have to go check the answer against a primary source; old Google Search just gave me that source directly.
Lopatto’s piece was prompted by a spate of historical bullshit people have been inadvertently propagating after asking generative AI systems for historical examples of presidents granting pardons to family members. Most notable was a column by Charles P. Pierce at Esquire this week — now fully retracted — whose entire premise was a supposed pardon granted by George H.W. Bush to his black-sheep son Neil Bush. No such pardon was granted.1
Lopatto’s piece is excellent, particularly the way she shows her own work. And the entire premise of her piece is that people are, in fact, embarrassing themselves (in Pierce’s case, spectacularly) and inadvertently spreading misinformation by blindly trusting the answers they’re getting from generative AI models. But I think it’s wrong to argue flatly against the use of generative AI for research, as she does right in her headline. I’ve been late to using generative AI as anything other than a toy curiosity, but in recent months I’ve started using it for work-related research. And now that I’ve started, I’m using it more and more. My basic rule of thumb is that if I’m looking for an article or web page, I use web search (Kagi); if I’m looking for an answer to a question, though, I use ChatGPT (4o). I direct (and trust) ChatGPT as I would a college intern working as a research assistant. I expect accuracy, but assume that I need to double-check everything.
Here’s how I prompted ChatGPT, pretending I intended to write about this week’s political controversy du jour:
Give me a list of U.S. presidential pardons granted to family members, friends, administration officials, and cronies. Basically I’m looking for a list of controversial pardons. I’m interested in the totality of U.S. history, but particularly in recent history, let’s say the last 100 years.
ChatGPT 4o’s response was good: here’s a link to my chat, along with an HTML transcript and a screenshot. (Only the screenshot shows where ChatGPT included sources.) I’m quite certain ChatGPT’s response is completely true, and it strikes me as a fair summary of the most controversial pardons in my lifetime. My biggest quibble is that it omits Trump’s pardon of Steve Bannon, a truly outrageous pardon of a genuine scumbag who was an official White House advisor. (Bannon was indicted for a multi-million-dollar scheme in which he scammed thousands of political donors into believing they were contributing funds to help build Trump’s fantasy “border wall”.) However, asking “Any more from Trump?” as a follow-up produced a longer list of 13 pardons, all factual, that included Bannon.2
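For anyone who’d rather run this sort of lookup from a script than from the ChatGPT web interface, here’s a rough sketch of the same prompt-and-follow-up exchange using the OpenAI Python SDK. The API wiring and the “gpt-4o” model name are my assumptions about how you’d reproduce it, not something taken from the chat transcript linked above — and the verification step, as ever, stays manual.

```python
# A sketch of the same exchange via the OpenAI API. Assumes the `openai`
# Python package (v1.x) is installed and OPENAI_API_KEY is set in the
# environment; "gpt-4o" mirrors the "4o" model named above.
from openai import OpenAI

client = OpenAI()

messages = [{
    "role": "user",
    "content": (
        "Give me a list of U.S. presidential pardons granted to family "
        "members, friends, administration officials, and cronies. "
        "Basically I'm looking for a list of controversial pardons. "
        "I'm interested in the totality of U.S. history, but particularly "
        "in recent history, let's say the last 100 years."
    ),
}]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# Carry the first answer forward so the follow-up has context,
# just as the ChatGPT interface does within one conversation.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Any more from Trump?"})

second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)

# None of this removes the obligation to check each claimed pardon
# against a primary source before repeating it.
```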
I want to make clear that I don’t think Lopatto is in any way a head-in-the-sand Luddite. But all of the arguments being made today against using generative AI to answer questions sound exactly like the arguments against citing web pages as sources in the 1990s. The argument then was basically “Anyone can publish anything on the web, and even if a web page is accurate today, it can be changed at any time” — which was true then and remains true today.3 But generative AI is just a new technology — one that isn’t going anywhere, because it’s incredibly useful in ways nothing else is. Its inherent downsides will force us to adapt and learn new ways of sourcing, citing, and verifying information. The rise of the web didn’t make libraries go away. Generative AI won’t make web search go away.
If I had wanted to write a column about presidential pardons, I’d find ChatGPT’s assistance a far better starting point than I’d have gotten through any general web search. But to quote an adage Reagan was fond of: “Trust, but verify.”
Worth noting this from Lopatto: “I emailed Hearst to ask if Esquire writer Charles P. Pierce had used ChatGPT as a source for his article. Spokesperson Allison Keane said he hadn’t and declined to say anything further about how the error might have occurred.” I find it unlikely that generative AI wasn’t involved somewhere in the chain of this falsehood that Bush pardoned his son, but whatever Pierce referenced to come upon it, he fucked up good. ↩︎
One small curiosity is that ChatGPT’s list, while mostly chronological, swapped Carter and Ford. One small amusement is that the only supposedly controversial pardon ChatGPT came up with for Ronald Reagan was New York Yankees owner George Steinbrenner. A complicated man, The Boss was. ↩︎
Who’s to say a dog doesn’t have useful information to provide? ↩︎