By John Gruber
David E. Sanger and Julian E. Barnes, reporting for The New York Times last week:
TikTok has long presented a parenting problem, as millions of Americans raising preteens and teenagers distracted by its viral videos can attest. But when the C.I.A. was asked recently to assess whether it was also a national security problem, the answer that came back was highly equivocal.
Yes, the agency’s analysts told the White House, it is possible that the Chinese intelligence authorities could intercept data or use the app to bore into smartphones. But there is no evidence they have done so, despite the calls from President Trump and Secretary of State Mike Pompeo to neutralize a threat from the app’s presence on millions of American devices.
The fact that TikTok is owned and controlled by a Chinese company is reason enough to be suspicious, and in my opinion reason enough to ban the service. But there is no evidence to date that TikTok is a security threat in the sense that their app might be doing anything secretly nefarious on our phones. We should be concerned just by what TikTok is doing on the surface, what we see and know TikTok does — its potential as a propaganda arm of the PRC.
In terms of intercepting data it shouldn’t have access to, there’s no sign TikTok has ever done so, and no reason to think it could even if it wanted to. All iOS and Google Play Android apps are installed in sandboxes — no app, TikTok or otherwise, has access to data outside its sandbox. Sandbox is arguably a poorly-chosen word from a layperson’s perspective. It’s nothing like a real-world playground, where kids can, if unsupervised, freely move from one sandbox to another (or throw sand outside their own). An application sandbox is a virtual world unto itself. Sandboxes are enforced technically, not through voluntary compliance: they are containers that limit what an app can see and do, not guidelines for what an app should see and do. An app doing something outside its sandbox — without explicit permission from the platform via an entitlement — is exploiting a security vulnerability, not merely breaking the rules.
Obviously it’s possible that TikTok could be exploiting vulnerabilities in iOS and/or Android to access data they shouldn’t be able to, but there’s never been any suggestion that they are. And if they were, it would be huge news, an enormous scandal both for TikTok and the platform vendor (Apple or Google). Why risk it? TikTok is already sitting on a veritable golden goose, collecting information about what sorts of videos hundreds of millions of users around the world enjoy watching, and controlling the secret black-box algorithm that determines what new videos those hundreds of millions of people see each time they open TikTok and swipe up. Even if TikTok’s intentions are truly evil, they’d be fools to risk what they have by exploiting security bugs to, say, try to read emails or texts or whatever people are spooked they’re doing.
Being spooked that TikTok is secretly stealing your emails is like being spooked that the ATMs in a casino are stealing your bank account PIN code. Why in the world would a casino run a crooked ATM when you’re using the ATM to take out money to lose to them in slot machines? It’s theoretically possible but makes little sense.
But what about stories like the Biden campaign banning TikTok from staffers’ phones? Or Amazon’s internal IT department sending out a company-wide blast declaring that TikTok must be removed from any device that accesses company email, only to issue a “never mind” retraction hours later? People see these stories and assume TikTok must be doing something nefarious. But I very much suspect that the stories themselves are driven by the general hysteria around TikTok being up to no good.
Organizations like the Biden campaign and Amazon’s IT department are banning TikTok not because they know something is fishy about it but as a sort of “cover your ass just in case” thing. Given what happened to the Clinton campaign’s email in 2016, you can’t blame the Biden campaign for being overly cautious, if not downright paranoid. I recommend paranoia for them. But I think the reason Amazon so quickly reverted its ban on TikTok was that the initial ban was based on nothing more than hearsay.
Consider Apple and Google. Both companies have an enormous amount of intellectual property to protect. Both companies are surely deeply concerned about the Chinese government, in particular, attempting to infiltrate their systems. Both companies also have consumer brand reputations to protect with the App Store and Play Store. If either company had any actual reason to suspect TikTok of malfeasance, they’d remove TikTok from their app stores. Surely, the security experts at both companies have examined TikTok with more attention than most apps get.
And even if they couldn’t prove anything, they’d surely be among the very first companies to ban the TikTok app from employee devices if there were any reason to suspect they should. But they haven’t. Apple and Google employees are free to swipe swipe swipe in TikTok to their hearts’ content on the same devices they use for work communication.
We should be concerned about TikTok only because of what we know they have access to: TikTok users’ attention and their interests. There’s no evidence that TikTok has access to anything else on our phones that they shouldn’t.