By John Gruber
A follow-up point to Friday’s post about Meta unceremoniously shitcanning its entire contract with Sama, the Kenyan outsourcing firm that employed over 1,100 workers to serve as Mechanical Turks for Meta’s AI efforts, after a few of those workers told investigative reporters about the incredibly private things they witnessed in footage captured by users of Meta’s AI Glasses.
There is no point getting any more outraged or disgusted at Meta for firing these contractors than you already were in the first place. They had to fire them. The moment this investigative report was published in late February, the fate of Sama’s Kenyan operation was sealed. They were toast. The key to understanding this is that Meta runs a criminal enterprise. Most of the organized crimes Meta commits aren’t crimes against the legal code (although some are), but rather crimes against public perception and human decency. Remember what they did with Onavo, their VPN product? Was that illegal? Dunno. Was it outrageous? Hell yes.
Let’s just concede for the sake of argument that there’s nothing illegal about the way Meta was sending video footage from users’ AI Glasses to contractors in Kenya to review. I presume they’re still doing it today, just with different contractors, in a different computer cubicle sweatshop, perhaps in a different country. Nothing to cover up legally. But just the plain description of what they’re doing fills people with a visceral repulsion. People only have that visceral reaction, though, if they know what’s going on. Part of the premise is that the whole thing has to be kept on the q.t.
If it said right on the box that when you use Meta AI Glasses, the footage might be reviewed by third-party contractors, and when that footage is reviewed, you — the user whose footage is being reviewed — won’t know it’s happening and won’t get prompted first for permission (because you’ve actually OK’d it in advance just by hitting the “Accept” button on the long dense terms of service that literally almost no one reads because such terms are written in impenetrable legalese), almost no one would buy them. And if it were more widely known that this is how these glasses work, there’d be more of a social stigma surrounding those who wear them.1
That, I think, is the primary reason why the contractors were in Kenya in the first place, and their replacements (now that Meta has terminated its contract with Sama) are surely still in some third-world country. It’s not about the lower wages (but that doesn’t hurt). It’s about the fact that the entire existence of the operation is easier to keep quiet when it’s literally on the other side of the planet. It’s a goddamn marvel that the investigative reporters from those two Swedish newspapers found them.
Most illegal acts are scandalous, but many scandalous acts are perfectly legal. And all scandalous acts need to be covered up. This operation has to be kept quiet because it’s unacceptable. It’s outrageous. If this were more widely publicized, Meta would suffer on two fronts. First, it would become better known that there’s nothing artificial about some of what they call “AI” — it’s in fact powered by human intelligence, just in another hemisphere. Second, and related to the first, some of the interactions you have with Meta AI — including images and video you send it, and images and video captured by Meta AI Glasses — are reviewed by human contractors. People write things and show things to AI, thinking it’s kept private between them and a computer program, that they would never share if they knew it might be seen by human beings paid by the AI provider to refine the training and correct its mistakes. A lot of people only use these “AI” products because they have no idea what’s actually going on.
“Three may keep a secret, if two of them are dead.”
—Benjamin Franklin, Poor Richard’s Almanack
Anyway, enjoy the Meta AI built into WhatsApp and Instagram. And maybe keep a link to that report on Meta’s contractors in Kenya handy for anyone you meet who wears AI Glasses.
It’s a fascinating mystery what becomes a scandal and what doesn’t. One flaw in our news media culture is that stories from other countries, especially countries where English is not the primary language, tend never to gain traction here. You’d think the Internet, and the rise of very good automated language translation, would change this. But that doesn’t seem to be the case. After this story came out in February — a joint investigation co-published by the Swedish publications Svenska Dagbladet and Göteborgs-Posten — it just faded away after a few days. I remember thinking when I linked to it, “Man, this feels potentially explosive — this might blow up into a big scandal.” But it didn’t. I didn’t forget about it, but I hadn’t thought about it in weeks, until I happened to catch this news — via Nick Heer — that Meta had severed ties with Sama, the contracting firm.
I can’t help but think that if the exact same original report had been published by, say, The New York Times or The New Yorker, or in video form by 60 Minutes, it might have blown up into a sizable scandal and public relations disaster for Meta. But as it stands, it largely passed without note. In addition to the fact that the original story was published in Sweden, the other missing factor is that the reporters didn’t publish leaked images or footage from users of Meta AI Glasses. We read testimony from these Kenyan workers that, as part of their jobs, they watched AI Glasses owners having sex and going to the toilet, but we never see footage of AI Glasses owners having sex or going to the toilet. That shouldn’t make a huge difference, but human nature is such that it does.