By John Gruber
For The New Yorker, Ronan Farrow and Andrew Marantz go deep profiling Sam Altman under the mince-no-words headline “Sam Altman May Control Our Future — Can He Be Trusted?” 16,000+ words — roughly one-third the length of The Great Gatsby — very specifically investigating Altman’s trustworthiness, particularly the details surrounding his still-hard-to-believe ouster by the OpenAI board in late 2023, only to return within a week and purge the board. The piece is long, yes, but very much worth your attention — it is both meticulously researched and sourced, and simply enjoyable to read. Altman, to his credit, was a cooperative subject, offering Farrow and Marantz numerous interviews during an investigation that Farrow says took over a year and a half.
A few excerpts and comments (not in the same order they appear in the story):
Yet most of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”
The board member was not the only person who, unprompted, used the word “sociopathic.” One of Altman’s batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. “You need to understand that Sam can never be trusted,” he told one. “He is a sociopath. He would do anything.”
A recurring theme in the piece is that colleagues who’ve worked with Altman the closest trust him the least. This bit about Aaron Swartz warning friends that Altman is a “sociopath” who “can never be trusted” is, to my knowledge, new reporting. Swartz’s opinion carries significant weight with me.1 Swartz is lionized (rightly) for his tremendous strengths, and the profoundly tragic circumstances of his martyrdom have resulted in less focus on his weaknesses. But I knew him fairly well and he led a very public life, and I’m unaware of anyone claiming he ever lied. Exaggerated? Sure. Lied? I think never.
Another central premise of the story is that while it’s axiomatic that one should want honest, trustworthy, scrupulous people in positions of leadership at any company, the nature of frontier AI models demands that the organizations developing them be led by people of extraordinary integrity. The article, to my reading, draws no firm conclusion — produces no smoking gun, as it were — regarding whether Sam Altman is generally honest/trustworthy/scrupulous. But I think it’s unambiguous that he’s not a man of great integrity.
Regarding Fidji Simo, OpenAI’s other “CEO”:
Several executives connected to OpenAI have expressed ongoing reservations about Altman’s leadership and floated Fidji Simo, who was formerly the C.E.O. of Instacart and now serves as OpenAI’s C.E.O. for AGI Deployment, as a successor. Simo herself has privately said that she believes Altman may eventually step down, a person briefed on a recent discussion told us. (Simo disputes this. Instacart recently reached a settlement with the F.T.C., in which it admitted no wrongdoing but agreed to pay a sixty-million-dollar fine for alleged deceptive practices under Simo’s leadership.)
This paragraph is juicy in and of itself, with its suggestions of palace intrigue. But it’s all the more interesting in light of the fact that, post-publication of the New Yorker piece, Fidji Simo has taken an open-ended medical leave from OpenAI. If we run with the theory that Altman is untrustworthy (the entire thesis of Farrow and Marantz’s story), and that Simo is also untrustworthy (based on the fraudulent scams she ran while CEO of Instacart, along with her running the Facebook app at Meta before that), we’d be foolish not to at least consider the possibility that her medical leave is a cover story for Altman squeezing Simo out after catching on to her angling to replace him atop OpenAI. The last thing OpenAI needs is more leadership dirty laundry aired in public, so, rather than fire her, maybe Altman let her leave gracefully under the guise of a relapse of her POTS symptoms?
Simo’s LinkedIn profile lists her in two active roles: CEO of “AGI deployment” at OpenAI, and co-founder of ChronicleBio (“building the largest biological data platform to power AI-driven therapies for complex chronic conditions”). If my spitball theory is right, she’ll announce in a few months that after recuperating from her POTS relapse, the experience has left her seeing the urgent need to direct her energy at ChronicleBio. Or perhaps my theory is all wet, and Simo and Altman have a sound partnership founded on genuine trust, and she’ll soon be back in the saddle at OpenAI overseeing the deployment of AGI (which, to be clear, doesn’t yet exist2). But regardless of whether the Altman-Simo relationship remains cemented or is in the midst of dissolving, it raises serious questions about why — if Altman is a man of integrity who believes that OpenAI is a company whose nature demands leaders of especially high integrity — he would hire the Instacart CEO who spearheaded bait-and-switch consumer scams that came right out of the playbook for unscrupulous car salesmen.
Regarding Altman’s stint as CEO at Y Combinator, and his eventual, somewhat ambiguous, departure, Farrow and Marantz write:
By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached [Y Combinator founder Paul] Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice. Altman told some Y.C. partners that he would resign as president but become chairman instead. In May, 2019, a blog post announcing that Y.C. had a new president came with an asterisk: “Sam is transitioning to Chairman of YC.” A few months later, the post was edited to read “Sam Altman stepped away from any formal position at YC”; after that, the phrase was removed entirely. Nevertheless, as recently as 2021, a Securities and Exchange Commission filing listed Altman as the chairman of Y Combinator. (Altman says that he wasn’t aware of this until much later.)
Altman has maintained over the years, both in public and in recent depositions, that he was never fired from Y.C., and he told us that he did not resist leaving. Graham has tweeted that “we didn’t want him to leave, just to choose” between Y.C. and OpenAI. In a statement, Graham told us, “We didn’t have the legal power to fire anyone. All we could do was apply moral pressure.” In private, though, he has been unambiguous that Altman was removed because of Y.C. partners’ mistrust. This account of Altman’s time at Y Combinator is based on discussions with several Y.C. founders and partners, in addition to contemporaneous materials, all of which indicate that the parting was not entirely mutual. On one occasion, Graham told Y.C. colleagues that, prior to his removal, “Sam had been lying to us all the time.”
Graham responded to this on Twitter/X thus:
Since there’s yet another article claiming that we “removed” Sam because partners distrusted him, no, we didn’t. It’s not because I want to defend Sam that I keep insisting on this. It’s because it’s so annoying to read false accounts of my own actions.
Which tweet includes a link to a 2024 tweet containing the full statement Farrow and Marantz reference, which reads:
People have been claiming YC fired Sam Altman. That’s not true. Here’s what actually happened. For several years he was running both YC and OpenAI, but when OpenAI announced that it was going to have a for-profit subsidiary and that Sam was going to be the CEO, we (specifically Jessica) told him that if he was going to work full-time on OpenAI, we should find someone else to run YC, and he agreed. If he’d said that he was going to find someone else to be CEO of OpenAI so that he could focus 100% on YC, we’d have been fine with that too. We didn’t want him to leave, just to choose one or the other.
Graham is standing behind Altman publicly, but I don’t think The New Yorker piece mischaracterized his 2024 statement about Altman’s departure from Y Combinator. Regarding the quote sourced to anonymous “Y.C. colleagues” that he told them “Sam had been lying to us all the time”, Graham tweeted:
I remember having a conversation after Sam resigned with a YC partner who said he and some other partners had been unhappy with how Sam had been running YC. I told him Sam had told us that all the partners were happy, so he was either out of touch or lying to us.
And, emphasizing that this remark was specifically in the context of how happy Y Combinator’s partners were under Altman’s leadership of YC, Graham tweets:
Every YC president tends to tell us the partners are happy. Sam’s successor did too, and he was mistaken too. Saying the partners are unhappy amounts to saying you’re doing a bad job, and no one wants to admit or even see that.
Seems obvious in retrospect, but we’ve now learned we should ask the partners themselves. (And they are indeed now happy.)
I would characterize Graham’s tweets re: Altman this week as emphasizing only that Altman was not fired or otherwise forced from YC, and could have stayed as CEO at YC if he’d found another CEO for OpenAI. But for all of Graham’s elucidating engagement on Twitter/X this week regarding this story, he’s dancing around the core question of the Farrow/Marantz investigation, the one right there in The New Yorker’s headline: Can Sam Altman be trusted? “We didn’t ‘remove’ Sam Altman” and “We didn’t want him to leave” are not the same as saying, say, “I think Sam Altman is honest and trustworthy” or “Sam Altman is a man of integrity”. If Paul Graham were to say such things, clearly and unambiguously, those remarks would carry tremendous weight. But — rather conspicuously to my eyes — he’s not saying such things.
From the second half of the same paragraph quoted above, that started with Aaron Swartz’s warnings about Altman:
Multiple senior executives at Microsoft said that, despite Nadella’s long-standing loyalty, the company’s relationship with Altman has become fraught. “He has misrepresented, distorted, renegotiated, reneged on agreements,” one said. Earlier this year, OpenAI reaffirmed Microsoft as the exclusive cloud provider for its “stateless” — or memoryless — models. That day, it announced a fifty-billion-dollar deal making Amazon the exclusive reseller of its enterprise platform for A.I. agents. While reselling is permitted, Microsoft executives argue OpenAI’s plan could collide with Microsoft’s exclusivity. (OpenAI maintains that the Amazon deal will not violate the earlier contract; a Microsoft representative said the company is “confident that OpenAI understands and respects” its legal obligations.) The senior executive at Microsoft said, of Altman, “I think there’s a small but real chance he’s eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer.”
The most successful scams — the ones that last longest and grow largest — are the ones with an actual product at the heart. Scams with no actual there there go bust quickly. The Bankman-Fried FTX scandal blew up fast because FTX never offered anything of genuine value. Bernie Madoff, though, had a long career because much of his firm’s business was legitimate; that legitimate business is what enabled him to keep the Ponzi scheme going for two decades.
But the better comparison to OpenAI — if that “small but real chance” comes true — might be Enron. Enron was a real company that built and owned a very real pipeline and energy infrastructure business; it was the story Enron told investors that was a sham. ChatGPT and Codex are likewise very real, very impressive technologies, undeniably blazing the frontier of AI. It’s the financial story Altman has structured that seems alarmingly circular.
In a 2005 Y Combinator “class photo”, Altman and Swartz are standing next to each other. Despite the fact that Altman was sporting a reasonable number of popped polo collars (zero), Swartz was clearly the better-dressed of the two.* ↩︎
* Aaron would’ve loved this footnote. Christ, I miss him.
With rare exceptions, I continue to think it’s a sign of deep C-suite dysfunction when a company has multiple “CEOs”. When it actually works — like at Netflix, with co-CEOs Ted Sarandos and Greg Peters (and previously, Sarandos and Reed Hastings before Hastings’s retirement in 2023) — the co-CEOs are genuine partners, and neither reports to the other. There is generally only one director of a movie, and the exceptions are frequently siblings (e.g. the Coens, the Wachowskis, the Russos). A football team has only one head coach. The defensive coordinator is the “defensive coordinator”, not the “head coach of defense”. It’s obvious that Fidji Simo reports to Sam Altman, and thus isn’t the “CEO” of anything at OpenAI. But OpenAI does have applications, and surely is creating more of them, so being in charge of applications is being in charge of something real. By any reasonable definition, AGI has not yet been achieved, and many top AI experts continue to question whether LLM technology will ever result in AGI. So Simo changing her title (or Altman changing it for her) to “CEO of AGI deployment” is akin to changing it to “CEO of ghost busting” in terms of its literal practical responsibility. ↩︎