By John Gruber
Speaking of Geoffrey Fowler, he had an interesting premise in a column last month: are the new privacy “nutrition” labels that Apple requires developers to supply actually accurate?
I downloaded a de-stressing app called the Satisfying Slime Simulator that gets the App Store’s highest-level label for privacy. It turned out to be the wrong kind of slimy, covertly sending information — including a way to track my iPhone — to Facebook, Google and other companies. Behind the scenes, apps can be data vampires, probing our phones to help target ads or sell information about us to data firms and even governments.
As I write this column, Apple still has an inaccurate label for Satisfying Slime. And it’s not the only deception. When I spot-checked what a couple dozen apps claim about privacy in the App Store, I found more than a dozen that were either misleading or flat-out inaccurate. They included the popular game Match 3D, social network Rumble and even the PBS Kids Video app. (Say it ain’t so, Elmo!) Match and Rumble have now both changed their labels, and PBS changed some of how its app communicates with Google.
The PBS Kids Video app is eyebrow-raising, but it seems to have been a genuine mistake on PBS’s part:
You can spot the squishiness of the labels in a back-and-forth I had with PBS about the app store listing for its popular PBS Kids Video app. We found the app sending my phone’s ID to Google, even though its label said it didn’t collect data that could be linked to me. PBS told me the label reflected an update to the app it eventually published on Jan. 28, in which Google no longer gets sent my ID but still helps PBS measure performance.
Effectively PBS submitted a privacy nutrition label based on changes to their app that weren’t yet — but soon were — live in the App Store. The rest of the inaccurate nutrition labels Fowler found are rather obscure apps.
Fowler concludes that these labels are useless if they’re not guaranteed to be accurate. There ought to be penalties for falsifying information on these labels. But it clearly isn’t practical for Apple to verify every label for every app in the store. I don’t think that’s any different from the mandatory nutrition labels on food products. The FDA doesn’t verify those labels — it’s the threat of penalties and bad publicity that motivates companies to report accurate information on them. I don’t know anyone who thinks mandatory food nutrition labels are useless, even though surely many of them contain incorrect information.
And if Apple’s new privacy labels are useless, why are so many apps making changes to their actual privacy policies? Absent these labels, would PBS have removed the tracking identifier from its PBS Kids app in the first place? I’m guessing not. It’s good to raise awareness that the information on these labels is self-reported by the developers, and that Apple doesn’t (and practically speaking can’t) verify them technically, but I think we’re already seeing clear evidence that they’re motivating developers to remove or reduce privacy-invasive tracking in their apps.
This point from Fowler, however, I agree is a major shortcoming:
Even with its update, the label is still missing an important piece of information: There’s Google inside.
Nowhere on any of Apple’s privacy labels, in fact, do we learn with whom apps are sharing our data. Imagine if nutrition facts labels left off the whole section about ingredients.
Apple’s next crack at these labels should make it mandatory to list exactly which third parties data is shared with. ★
Google’s heft means the change could reshape the digital ad business, where many companies rely on tracking individuals to target their ads, measure the ads’ effectiveness and stop fraud. Google accounted for 52% of last year’s global digital ad spending of $292 billion, according to Jounce Media, a digital ad consultancy.
About 40% of the money that flows from advertisers to publishers on the open internet — meaning digital advertising outside of closed systems such as Google Search, YouTube or Facebook — goes through Google’s ad buying tools, according to Jounce.
I linked to this same story yesterday, when writing about Google’s opaque announcement about their advertising plans in a world where third-party cookies no longer work in Chrome. I’ve been thinking ever since about the size of these figures. Even if we take these estimates from Jounce with a grain of salt, they are huge figures.
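To put Jounce’s percentage in dollar terms, it’s simple arithmetic on the figures quoted above:

```python
# What Jounce's percentages work out to in dollars
# (2020 figures quoted in the excerpt above).
global_digital_ad_spend = 292e9  # $292 billion in global digital ad spending
google_share = 0.52              # Google's estimated share, per Jounce

google_ad_dollars = global_digital_ad_spend * google_share
print(f"~${google_ad_dollars / 1e9:.0f} billion flows through Google")
```

That is roughly $152 billion a year touching one company’s ad machinery.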
At a certain level it just doesn’t feel justified that Google should be involved with this much of the world’s advertising spend. Fundamentally, the money should be going from advertisers to content makers who are displaying the ads. Ad revenue should be, to some degree, commensurate with attention share. Google garners a humongous share of the world’s daily attention, but not half. Not even close. Google has inserted itself into the middle, yet is taking far more than a middleman-sized share of the money. It’s like finding out that half the money spent on TV advertising wasn’t going to the channels where the commercials appeared, but to the cable companies. Or that most of the money spent on newspaper ads — trying to reach newspaper readers — was going not to the newspapers but to the company that runs the presses where the papers get printed.
User tracking is fundamental to that. The desire to know as much information as possible about the audience for advertising has always been the Pied Piper lure of the industry. And Google’s ability — along with Facebook’s — to actually provide that tracking (or the fraudulent illusion of it) is what enabled them to gobble up such an outsized portion of the world’s entire ad spend. The ads that appear on Google’s own properties are one thing: search result ads and YouTube ads come to mind. But Google and Facebook’s share of ad revenue spent trying to reach people on non-Google/non-Facebook properties seems fundamentally inequitable.
What if the answer is that there’s no way for Google (or Facebook) to make the sort of money they’ve been making in a technology and cultural environment that has become deeply concerned with online privacy? I think it’s possible that we can have a world where our online activities are far more private, or a world where Google and Facebook can maintain their current outsized share of worldwide ad spending, but not both.
A world where Google sees, say, 25 percent of the world’s ad spending sounds like an amazing business, in principle. Unless you’re comparing it to the world we’re in today, where they see 50 percent — then 25 percent looks like a collapse. Privacy-invasive user tracking is to Google and Facebook what carbon emissions are to fossil fuel companies — a form of highly profitable pollution that for a very long time few people in the mainstream cared about, but now, seemingly suddenly, very many care about quite a bit. ★
Fewer than half of U.S. states offer Android and iOS tools for the “exposure notification” system the two companies announced last April, which estimates other people’s proximity via anonymous Bluetooth beacons sent from phones with the same software.
Most people in participating states have yet to activate these apps. Those who do opt in and then test positive for the coronavirus that causes COVID-19 must opt in again by entering a doctor-provided verification code into their apps.
That second voluntary step generates anonymous warnings to other app users who got close enough to the positive user for long enough — again, as approximated from Bluetooth signals, not pinned down via GPS — to risk infection and to need a COVID-19 test.
So if your copy of one of these apps has remained silent, you’re not alone.
“Nobody in my circle has gotten the phone alert,” said Jeffrey Kahn, director of the Johns Hopkins Berman Institute of Bioethics in Baltimore and editor of a 2020 book on the ethics of digital contact tracing.
I’ve been curious about this for a while, so I asked on Twitter whether any of my followers had gotten notifications through this system. A few have! But I think the whole idea is fundamentally flawed. Even putting aside the fact that fewer than half of U.S. states offer the apps — a big issue to put aside — the only people who are using them are people who are conscientious about COVID exposure in the first place.
New Jersey has a population of about 9 million people. As of today, there have been about 800,000 cumulative reported cases of COVID-19 in the state. 600,000 people have used the state’s app since it launched. Via information displayed in the app itself, the total number of users who’ve uploaded their randomized/anonymized IDs after testing positive? 1,046. The total number of users who’ve been sent an exposure alert notification? 1,894. (My home state of Pennsylvania uses the same “COVID Alert” base app as New Jersey, but doesn’t seem to publish any numbers regarding usage.)
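A quick back-of-the-envelope calculation, using only the figures above, shows just how stark these numbers are. The assumption that app usage is independent of infection is mine, and it is surely generous:

```python
# Back-of-the-envelope math on New Jersey's COVID Alert numbers.
# Figures are from the post; the independence assumption is mine.
population = 9_000_000
cases = 800_000           # cumulative reported COVID-19 cases
app_users = 600_000       # people who have used the app
positive_uploads = 1_046  # users who uploaded a positive result

adoption = app_users / population           # share of residents using the app
expected_positive_users = cases * adoption  # infected people who had the app
report_rate = positive_uploads / expected_positive_users

# Chance that a random contact both runs the app and reports a positive
# test, assuming app usage is independent of infection:
p_useful_alert = adoption * report_rate
print(f"adoption: {adoption:.1%}")                        # ~6.7%
print(f"report rate among infected users: {report_rate:.1%}")  # ~2.0%
print(f"both-use-and-report: {p_useful_alert:.2%}")       # ~0.13%
```

About a one-in-a-thousand chance, per contact, that an exposure ever turns into an alert.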
The whole endeavor seems pointless, looking at these numbers. If anything, it might be giving the users of these apps a false sense of security. If you use one of these apps and are exposed to someone who later tests positive, the odds that that person both uses the app and will report their positive test result seem not just low but downright infinitesimal. ★
In my piece yesterday about email tracking images (“spy pixels” or “spy trackers”), I complained about the fact that Apple — a company that rightfully prides itself for its numerous features protecting user privacy — offers no built-in defenses for email tracking.
A slew of readers wrote to argue that Apple Mail does offer such a feature: the option not to load any remote resources at all. It’s a setting for Mail on both Mac and iOS, and I know about it — I’ve had it enabled for years. But this is a throwing-the-baby-out-with-the-bathwater approach. What Hey offers — by default — is the ability to load regular images automatically, so your messages look “right”, but block all images from known tracking sources (which are generally 1×1 px invisible GIFs).
Typical users are never going to enable Mail’s option not to load remote content. It renders nearly all marketing messages and newsletters as weird-looking at best, unreadable at worst. And when you get a message whose images you do want to see, when you tell Mail to load them, it loads all of them — including trackers. Apple Mail has no knowledge of spy trackers at all, just an all-or-nothing ability to turn off all remote images and load them manually.
Mail’s option not to load remote content is a great solution to bandwidth problems — remember to enable it the next time you’re using Wi-Fi on an airplane, for example. It’s a terrible solution to tracking. No one would call it a good solution to tracking if Safari’s only defense were an option not to load any images at all until you manually click a button in each tab to load them all. But that’s exactly what Apple offers with Mail. (Safari doesn’t block tracking images, but Safari does support content blocking extensions that do — one solution for Mail would be to enable the same content blocker extensions in Mail that are enabled in Safari.)
How does Hey know which images are trackers and which are “regular” images? They can’t know with absolute certainty. But they’ve worked hard on this feature, and have an entire web page promoting it. From that page:
HEY manages this protection through several layers of defenses. First, we’ve identified all the major spy-pixel patterns, so we can strip those out directly. When we find one of those pesky pixels, we’ll tell you exactly who put it in there, and from what email application it came. Second, we bulk strip everything that even smells like a spy pixel. That includes 1x1 images, trackers hidden in code, and everything else we can do to protect you. Between those two practices, we’re confident we’ll catch 98% of all the tracking that’s happening out there.
But even if a spy pixel sneaks through our defenses (and we vow to keep them updated all the time!), you’ll have an effective last line of defense: HEY routes all images through our own servers first, so your IP address never leaks. This prevents anyone from discovering your physical location just by opening an email. Like VPN, but for email.
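The heuristics Hey describes (a known-tracker list, plus stripping anything that smells like an invisible 1×1 pixel) can be sketched roughly like this. The tracker-host list below is a made-up placeholder, not Hey’s actual list, and Hey’s real system surely layers on more signals than this:

```python
# A rough sketch of the kind of heuristics Hey describes: flag <img> tags
# that come from known tracker hosts, or that look like invisible pixels.
# KNOWN_TRACKER_HOSTS is a placeholder, not Hey's real list.
from html.parser import HTMLParser
from urllib.parse import urlparse

KNOWN_TRACKER_HOSTS = {"open.example-espionage.com", "pixel.example-mailer.net"}

class SpyPixelScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        src = a.get("src", "")
        host = urlparse(src).hostname or ""
        # 1x1 (or 0x0) dimensions are the classic invisible-pixel tell.
        tiny = a.get("width") in ("0", "1") or a.get("height") in ("0", "1")
        hidden = "display:none" in a.get("style", "").replace(" ", "")
        if host in KNOWN_TRACKER_HOSTS or tiny or hidden:
            self.suspects.append(src)

def find_spy_pixels(html: str) -> list:
    scanner = SpyPixelScanner()
    scanner.feed(html)
    return scanner.suspects
```

Run against a message containing a 1×1 GIF from a tracker host and a regular 600-pixel-wide photo, this flags only the former. The routing-through-a-proxy part is the other half of the defense, and happens server-side.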
Apple should do something similar: identify and block spy trackers in email by default, and route all other images through an anonymizing proxy service.1 And, like Hey, they should flag all emails containing known trackers with a shame badge. It’s a disgraceful practice that has grown to be accepted industry-wide as standard procedure, because the vast majority of users have no idea it’s even going on. Through reverse IP address geolocation, newsletter and marketing email services track not just that you opened their messages, but when you opened them, and where you were (to the extent that your IP address reveals your location).
Don’t get me started on how predictable this entire privacy disaster was, once we lost the war over whether email messages should be plain text only or could contain embedded HTML. Effectively all email clients are web browsers now, yet don’t have any of the privacy protection features actual browsers do. ↩︎︎
I don’t generally write about features in beta versions of iOS. In fact, I don’t generally install beta versions of iOS, at least on my main iPhone. But the new “Unlock With Apple Watch” feature, which kicks in when you’re wearing a face mask, was too tempting to resist.
First things first: to use this feature, you need to install iOS 14.5 on your iPhone and WatchOS 7.4 on your Apple Watch (both of which are, at this writing, on their second developer betas). So far, for me, these OS releases have been utterly reliable. Your mileage may vary, and running a beta OS on your daily-carry devices is always at your own risk. But I think the later we go in OS release cycles, the more stable the betas tend to be. Over the summer, between WWDC and the September (or October) new iPhone event, iOS releases can be buggy as hell. The x.1 releases are usually the stable ones, and the releases after that tend to be very stable even in beta — Apple uses them to fix bugs and to add new features that have already stabilized. If anything, I think iOS 14.5 is very stable technically, and only volatile politically, with the new opt-in requirement for targeted ad user tracking.
After using this feature for a few weeks now, I can’t see going back. As the designated errand runner in our quarantined family, it’s a game changer. Prior to iOS 14.5, using a Face ID iPhone while wearing a face mask sucked. Every single time you unlocked your phone, you needed to enter the passcode/passphrase. The longer your passcode, the more secure it is (of course), but the more annoying it is to enter incessantly.
“Unlock With Apple Watch” eliminates almost all of that annoyance. It’s that good. It’s optional (as it should be), and off by default (also as it should be, for reasons explained below). It’s easy to turn on in Settings on your iPhone: go to Face ID & Passcode, enter your passcode, and scroll down to the “Unlock With Apple Watch” section, where you’ll find toggles for each Apple Watch (running WatchOS 7.4 or later) paired with your iPhone.
Here is how the feature seems to work.
1. Does Face ID work normally? I.e., is the face in front of the phone you, the owner, and are you not wearing a mask? If so, unlock normally. Normal non-mask Face ID is unchanged when this feature is enabled.

2. If Face ID fails, is there a face wearing a mask in front of the phone? If so, is an authorized Apple Watch in a secure state (i.e., the watch itself is unlocked and on your wrist) and very close to the iPhone? If so, unlock, and send a notification to the watch stating that the watch was just used to unlock this iPhone. The notification sent to the watch includes a button to immediately lock the iPhone.
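Those two steps can be sketched as a simple decision procedure. This models the behavior I’ve observed from the outside, not Apple’s actual implementation, and all the names are illustrative:

```python
# A sketch of the observed two-step unlock logic described above.
# This models inferred behavior, not Apple's implementation;
# every name here is illustrative.
from dataclasses import dataclass

@dataclass
class UnlockContext:
    face_matches_owner: bool    # step 1: a normal Face ID match
    masked_face_present: bool   # step 2: any face wearing a mask
    watch_unlocked_on_wrist: bool
    watch_within_range: bool    # roughly under 1 meter, per observation

def attempt_unlock(ctx: UnlockContext) -> str:
    # Step 1: regular Face ID is tried first and is unchanged.
    if ctx.face_matches_owner:
        return "unlocked (Face ID)"
    # Step 2: fall back to the watch only for a masked face, with an
    # unlocked, on-wrist watch very close to the phone.
    if (ctx.masked_face_present
            and ctx.watch_unlocked_on_wrist
            and ctx.watch_within_range):
        return "unlocked (watch), wrist notification with Lock iPhone button"
    return "locked: enter passcode"
```

Note how the sunglasses case I complain about below falls through to the passcode: Face ID fails, but no mask is present, so the watch fallback never kicks in.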
Because it’s a two-step process (step #1 first, then step #2), it does take a bit longer than Face ID without a mask (which is really just step #1). But it works more than fast enough to be a pleasant convenience. Regular Face ID is so fast you forget it’s even there; “Unlock With Apple Watch” is slow enough that you notice it’s there, but fast enough that it isn’t a bother.
It’s important to note that in step #2, it works with any face wearing a mask. It’s not trying to do a half-face check that your eyes and forehead look like you, or anything like that. My iPhone will unlock if my wife or son is the face in front of my iPhone — but only if they’re wearing a mask, and only if my Apple Watch is very close to the phone. I’d say less than 1 meter — pretty much about what you would think the maximum distance would be between a watch on one wrist and an iPhone in the other hand.
When this feature kicks in, you always get a wrist notification telling you it happened, with just one button: “Lock iPhone”. If you tap this button, the iPhone is immediately hard-locked and requires your passcode to be re-entered even if you take your mask off. (It’s the same hard-locked mode you can put your iPhone into manually by pressing and holding the power button and one of the volume buttons — a good tip to remember when going through a security checkpoint or any other potential encounter with law enforcement.)
I’m not sure if anyone will be annoyed by this mandatory wrist notification, but they shouldn’t be, and it shouldn’t be optional. You want this notification every time to prevent anyone from surreptitiously unlocking your iPhone near you, just by putting a face mask on.
Also, if your Apple Watch is in Sleep mode (the bed icon in WatchOS’s Control Center), the feature does not work.
It’s occasionally slow. And two or three times, I got a message on my iPhone that my watch was too far away for the feature to work, even though I raised my watch-wearing wrist next to the phone. These hiccups were rare, and to my recollection, I only ran into them with iOS 14.5 beta 1, not beta 2.
Even in the worst case scenario, where the feature doesn’t work, you’re no worse off than you were before the feature existed: you simply have to manually enter your phone’s passcode.
Last but not least, the “Unlock With Apple Watch” feature very specifically seems to be looking for a face wearing a face mask. The feature does not kick in if Face ID fails for any other reason — like, say, if you’re wearing sunglasses with lenses that Face ID can’t see through. (I wish they’d make this work with sunglasses, too.)
Throwing Shade: There seems to be some confusion over what I’m asking for w/r/t sunglasses. Face ID has always supported an option to turn off “Require Attention for Face ID”. When off, Face ID will work even if it doesn’t detect your eyes looking at the screen. (It’s an essential accessibility feature for people with certain vision problems.) If you own sunglasses that the iPhone’s TrueDepth camera system can’t “see” through, you can disable “Require Attention for Face ID” to allow Face ID to work while you’re wearing your shades.
This is far from ideal though, because it weakens Face ID all the time, not just when you’re wearing sunglasses. What’s nice about the new “Unlock With Apple Watch” feature is that it only applies when you’re wearing a mask and your Apple Watch. What I’m saying I’d like to see Apple support is an extension of “Unlock With Apple Watch” that would do the same thing for sunglasses that it currently does for face masks. I’ve heard from readers who have trouble with Face ID when wearing their motorcycle helmets, too, and I’m sure there are other examples. Basically, I’d like to see Apple add the option of trusting your Apple Watch to unlock your iPhone in more scenarios where your face can’t be recognized. My request is very different from, and more secure than, the existing “Require Attention” feature.
(Speaking of which, while wearing a mask, “Unlock With Apple Watch” does not check for whether your eyes are looking at the display, regardless of your setting for “Require Attention for Face ID”. Again, this makes sense, because it’s not Face ID — “Unlock With Apple Watch” is an alternative authentication method that kicks in after Face ID has failed.)
Apple Pay: I didn’t mention the fact that “Unlock With Apple Watch” does not work with Apple Pay. This makes sense, because however secure “Unlock With Apple Watch” is (and I think it’s quite secure), it’s not as secure as Face ID authenticating your actual face. For payments, you obviously want the highest level of secure authentication.
Also, for Apple Pay, if you’re wearing your Apple Watch (a requirement for “Unlock With Apple Watch”), you can just use your Apple Watch for Apple Pay.
It also doesn’t work with apps that use Face ID for authentication within them. Banking apps, for example, or unlocking locked notes in Apple Notes. But this makes sense too — the feature is specifically called “Unlock With Apple Watch”. It unlocks your phone, that’s it. Anything else that requires Face ID for secure authentication still requires Face ID. ★