By John Gruber
I remember reading and enjoying this profile of Sam Altman that was published in The New Yorker in October 2016, but I stumbled across it again over the weekend, and read it with new eyes. When published, Altman was running Y Combinator, and the profile largely focuses on that. But OpenAI — then new and mysterious — was mentioned quite a bit, and those are the bits that struck me now:
A.I. technology hardly seems almighty yet. After Microsoft launched a chatbot, called Tay, bullying Twitter users quickly taught it to tweet such remarks as “gas the kikes race war now”; the recently released “Daddy’s Car,” the first pop song created by software, sounds like the Beatles, if the Beatles were cyborgs. But, Musk told me, “just because you don’t see killer robots marching down the street doesn’t mean we shouldn’t be concerned.” Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana serve millions as aides-de-camp, and simultaneous-translation and self-driving technologies are now taken for granted. Y Combinator has even begun using an A.I. bot, Hal9000, to help it sift admission applications: the bot’s neural net trains itself by assessing previous applications and those companies’ outcomes. “What’s it looking for?” I asked Altman. “I have no idea,” he replied. “That’s the unsettling thing about neural networks — you have no idea what they’re doing, and they can’t tell you.”
OpenAI’s immediate goals, announced in June, include a household robot able to set and clear a table. One longer-term goal is to build a general A.I. system that can pass the Turing test — can convince people, by the way it reasons and reacts, that it is human. Yet Altman believes that a true general A.I. should do more than deceive; it should create, discovering a property of quantum physics or devising a new art form simply to gratify its own itch to know and to make. While many A.I. researchers were correcting errors by telling their systems, “That’s a dog, not a cat,” OpenAI was focussed on having its system teach itself how things work. “Like a baby does?” I asked Altman. “The thing people forget about human babies is that they take years to learn anything interesting,” he said. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
To my mind, OpenAI’s ChatGPT passes the Turing test. Artificial general intelligence is nascent, to be sure, but it’s no longer in the future. It’s the present.
★ Monday, 27 March 2023