NYT: ‘Colleges That Require Coronavirus Screening Tech Struggle to Say Whether It Works’ 

Natasha Singer and Kellen Browning, reporting for The New York Times:

Before the University of Idaho welcomed students back to campus last fall, it made a big bet on new virus-screening technology. The university spent $90,000 installing temperature-scanning stations, which look like airport metal detectors, in front of its dining and athletic facilities in Moscow, Idaho. When the system clocks a student walking through with an unusually high temperature, the student is asked to leave and go get tested for Covid-19.

But so far the fever scanners, which detect skin temperature, have caught fewer than 10 people out of the 9,000 students living on or near campus. Even then, university administrators could not say whether the technology had been effective because they have not tracked students flagged with fevers to see if they went on to get tested for the virus. […]

“So why are we bothering?” said Bruce Schneier, a prominent security technologist who has described such screening systems as “security theater” — that is, tools that make people feel better without actually improving their safety. “Why spend the money?”

Maybe “COVID theater” instead of “security theater”, but these technology purchases look like a whole lot of bullshit, just like the exposure notification apps for phones. We don’t need any of this. What we need are vaccinations, a few months of patience until more of those vaccinations are administered, and good serious plans for future outbreaks. If institutions like colleges want to spend money in the short term, they should spend the money on widespread COVID testing.

Apple Clarifies When It Locks Your Apple ID Because You Owe Them Money 

Statement from Apple to 9to5Mac, regarding yesterday’s much-publicized story about Dustin Curtis getting locked out of his Apple ID:

We apologize for any confusion or inconvenience we may have caused for this customer. The issue in question involved a restriction on the customer’s Apple ID that disabled App Store and iTunes purchases and subscription services, excluding iCloud. Apple provided an instant credit for the purchase of a new MacBook Pro, and as part of that agreement, the customer was to return their current unit to us. No matter what payment method was used, the ability to transact on the associated Apple ID was disabled because Apple could not collect funds. This is entirely unrelated to Apple Card.

Seems like a more reasonable situation than it first appeared, but, still, good to know that this is how it works.

The heart of Curtis’s saga is that he got instant credit for an old MacBook, didn’t send it back to Apple on time, and changed the bank account backing his credit card so Apple’s chargeback for the device trade-in didn’t take. When I, or family members, have sent devices in for trade-in (iPhones, usually), we haven’t been credited for the trade-in until after Apple has acknowledged receiving the old device.

It’s Now March and Most of Google’s Flagship iOS Apps Still Don’t Have Privacy Nutrition Labels 

Speaking of Google and tracking, the saga with Google’s iOS apps and their lack of privacy nutrition labels continues. Remember that (a) Google told TechCrunch back on January 5 they expected to add the privacy labels “this week or the next week”, and (b) because they haven’t added the labels, none of these popular apps have been updated since December. This includes Google Maps, Google Photos, the main Google search app, and Google Chrome. If you look at the version histories for these apps, until January, they were all generally updated at least once per month, and often several times per month.

YouTube, Google Home, and Google Drive, on the other hand, do have privacy nutrition labels. So whatever is going on here is not company-wide.

Correction: I originally had Gmail listed as one of Google’s apps that hadn’t been updated, but it was — just yesterday, after adding the privacy nutrition label a week ago. Google just seems to be adding these labels piecemeal, one at a time.

Google Claims It Will Replace Tracking With Privacy-Preserving Ads 

David Temkin, director of product management for ads privacy and trust, writing on the Google Blog:

Today, we’re making explicit that once third-party cookies are phased out, we will not build alternate identifiers to track individuals as they browse across the web, nor will we use them in our products.

We realize this means other providers may offer a level of user identity for ad tracking across the web that we will not — like PII graphs based on people’s email addresses. We don’t believe these solutions will meet rising consumer expectations for privacy, nor will they stand up to rapidly evolving regulatory restrictions, and therefore aren’t a sustainable long term investment. Instead, our web products will be powered by privacy-preserving APIs which prevent individual tracking while still delivering results for advertisers and publishers.

Honestly, I read this post twice and I don’t really know what it means. It sounds good on the surface, but cynically, it also sounds like an obfuscated way of saying that Google has figured out a way to continue tracking users but doesn’t think that counts as “tracking” because it’s all “first party” on Google properties:

We will continue to support first-party relationships on our ad platforms for partners, in which they have direct connections with their own customers. And we’ll deepen our support for solutions that build on these direct relationships between consumers and the brands and publishers they engage with.

The WSJ is taking Google at its word, with this lede:

Google plans to stop selling ads based on individuals’ browsing across multiple websites, a change that could hasten upheaval in the digital advertising industry.


Adoption Is Low for COVID-19 Exposure Apps, Rendering Them Effectively Useless

Rob Pegoraro, reporting for USA Today last week:

Fewer than half of U.S. states offer Android and iOS tools for the “exposure notification” system the two companies announced last April, which estimate other people’s proximity via anonymous Bluetooth beacons sent from phones with the same software.

Most people in participating states have yet to activate these apps. Those who do opt in and then test positive for the coronavirus that causes COVID-19 must opt in again by entering a doctor-provided verification code into their apps.

That second voluntary step generates anonymous warnings to other app users who got close enough to the positive user for long enough — again, as approximated from Bluetooth signals, not pinned down via GPS — to risk infection and to need a COVID-19 test.

So if your copy of one of these apps has remained silent, you’re not alone.

“Nobody in my circle has gotten the phone alert,” said Jeffrey Kahn, director of the Johns Hopkins Berman Institute of Bioethics in Baltimore and editor of a 2020 book on the ethics of digital contact tracing.

I’ve been curious about this for a while, so I asked on Twitter whether any of my followers had gotten notifications through this system. A few have! But I think the whole idea is fundamentally flawed. Even putting aside the fact that fewer than half of U.S. states offer the apps — a big issue to put aside — the only people who are using them are people who are conscientious about COVID exposure in the first place.

New Jersey has a population of about 9 million people. As of today, there have been about 800,000 cumulative reported cases of COVID-19 in the state. About 600,000 people have used the state’s app since it launched. Via information displayed in the app itself, the total number of users who’ve uploaded their randomized/anonymized IDs after testing positive? 1,046. The total number of users who’ve been sent an exposure alert notification? 1,894. (My home state of Pennsylvania uses the same “COVID Alert” base app as New Jersey, but doesn’t seem to publish any numbers regarding usage.)

The whole endeavor seems pointless, looking at these numbers. If anything, it might be giving the users of these apps a false sense of security. If you use one of these apps and are exposed to someone who later tests positive, the odds that that person both uses the app and will report their positive test result seem not just low but downright infinitesimal. 
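Just to put rough numbers on that intuition, here’s a back-of-the-envelope sketch using the New Jersey figures above. It’s my own arithmetic, and it generously assumes app users are spread evenly through the population, which they almost certainly aren’t:

```python
# Back-of-the-envelope only. Assumes app users are spread evenly through the
# population, which surely overstates how well the app covers actual cases.

population = 9_000_000     # approximate New Jersey population
reported_cases = 800_000   # cumulative reported COVID-19 cases
app_users = 600_000        # people who have used the state's app
positive_uploads = 1_046   # app users who uploaded their IDs after testing positive

adoption = app_users / population                             # ~6.7%
expected_cases_among_users = reported_cases * adoption        # ~53,000
upload_rate = positive_uploads / expected_cases_among_users   # ~2%

# Chance that a random close contact both uses the app and reports a positive test:
print(f"{adoption * upload_rate:.2%}")  # ~0.13%
```

Roughly one in 750. And that’s before you ask whether the exposure itself was long and close enough to trigger an alert.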


Amazon Tweaks Their New iOS App Icon 

I know, opinions about app icons are like assholes — everyone has one and they generally stink. But Amazon’s previous iOS app icon was, objectively, terrible. For one thing, the only thing about it that branded it as Amazon’s was the word “Amazon”. When your icon is your name, you’ve probably got a problem. But the other problem was the shopping cart. The whole point of Amazon being an online store is that you don’t need a shopping cart. They’ve been stretching this metaphor for over two decades but it’s not a good one.

I love the idea of using a cardboard box as the icon. That’s the iconic real-world object we all associate with Amazon. Sure sometimes you’re just getting something boring like toothpaste or deodorant. But sometimes you get something great — like a new book you preordered a few months back and sort of forgot about. Sometimes a box from Amazon is fun. So hell yes, make the app icon a fun cardboard box.

My problem with the new icon isn’t that the tape looked like a Hitler mustache. (They could’ve solved that by just putting tape on both ends of the box — boxes need tape on the bottom too.) It’s that the ethos of utterly flat design robs the concept of fun. Look at how much better the MacOS standard installer package icon looks than Amazon’s new icon. Just for a boring installer. Amazon is doing the right thing by today’s design trend — it’s the trend that’s wrong, and designers need to start asserting otherwise.

In the land of the blind, the one-eyed man is king; in the land of militantly flat design, a little bit of depth will spark joy.

Apple Card Disabled Dustin Curtis’s iCloud, App Store, and Apple ID Accounts 

Dustin Curtis:

The next time I tried to use my Apple Card, it was declined. Strange. I checked the Wallet app, and the balance was below the limit. I remembered the Apple support representative mumbling about Apple Card, so I did some digging through my email to see if I could find a connection.

As it turns out, my bank account number changed in January, causing Apple Card autopay to fail. Then the Apple Store made a charge on the card. Less than fifteen days after that, my App Store, iCloud, Apple Music, and Apple ID accounts had all been disabled by Apple Card.

We all make bets on these ecosystems. Even if you host your own email at your own domain name (to name just one service) you’re probably not running the actual server. And even if you are running the actual server hosting your email, you’re still placing a bet on the service provider / data center hosting the server.

I’ve got a lot of my digital life bet on iCloud in this way. It doesn’t seem like there should even be a path on Apple’s side of things from “you missed a payment on your Apple Card” to “we’re locking you out of your Apple ID”. Apple shutting your Apple ID off shuts you off from a lot.

Weather Line Acquired by Mystery Buyer 

Weather Line:

In recent months, we were approached by a buyer. They saw the uniqueness of Weather Line and the strong foundation we’ve built. While we aren’t able to provide further details on their future plans for the app, we hope you can understand, and will look forward to it.

The acquisition means the app is going away. Today, we removed Weather Line from the App Store. For all existing Weather Line users, free and paid, the app will continue working for 13 months, until April 1, 2022.

Weather Line has been consistently excellent, and has been one of my very favorite apps since it debuted. It has always stayed at the forefront of iOS design and forged a distinct identity with its infographic-focused approach.

All good things must come to an end, but it feels particularly sad with Weather Line. Of all weather apps I’ve used — and I’ve used a lot — Weather Line is the best suited to iOS 14 widgets. Weather Line’s presentation has been widget-like since before there were widgets.

I’ll enjoy it while I can.

Instabug 

My thanks to Instabug for sponsoring last week at DF. Investigate, diagnose, and resolve issues up to 4 times faster with Instabug’s latest Application Performance Monitoring.

The Instabug SDK gives you the same level of profiling you get in Xcode Instruments, but from your live users, with a lightweight SDK and minimal footprint. Whether it’s a crash, a slow screen transition, a slow network call, or a UI hang, use performance patterns to fix issues faster and spot trends and spikes.

Find out what your app is missing and join the top mobile teams like Verizon, Ventee Privee, and Lyft relying on Instabug for app quality.

The Talk Show: ‘Pinkies on the Semicolon’ 

The state of the Mac, with special guest John Siracusa.

Sponsored by:

  • Mack Weldon: Reinventing men’s basics with smart design, premium fabrics, and simple shopping. Get 20% off your first order with code talkshow.
  • Hover: Find a domain name for your passion. Get 10% off your first purchase.
  • Squarespace: Make your next move. Use code talkshow for 10% off your first order.
  • Flatfile: Spend less time formatting spreadsheet data, and more time using it.

Brazilian Rainforest Plots Are Being Sold Illegally via Facebook Marketplace Ads 

Joao Fellet and Charlotte Pamment, reporting for BBC News:

Parts of Brazil’s Amazon rainforest are being illegally sold on Facebook, the BBC has discovered. The protected areas include national forests and land reserved for indigenous peoples. Some of the plots listed via Facebook’s classified ads service are as large as 1,000 football pitches.

Facebook said it was “ready to work with local authorities”, but indicated it would not take independent action of its own to halt the trade.

Just in case you hadn’t been angered by Facebook this week.

MailTrackerBlocker for Apple Mail on MacOS 

Open source plugin for Apple Mail on MacOS, by Aaron Lee:

MailTrackerBlocker is a plugin (mailbundle) for the default Mail app built-in to macOS. Email marketers and other interests often embed these trackers in HTML emails so they can track how often, when and where you open your emails. This plugin works by stripping out a good majority of these spy pixels out of the HTML before display, rendering the typical advice of disabling “load remote content in messages” unnecessary.

Browse your inbox privately with images displayed once again.

There’s a simple installer to download, and the project’s GitHub page has instructions for installing via Homebrew. I’ve been running it since Wednesday, and it seems to do just what it says on the tin — it blocks many (most?) marketing and newsletter trackers without requiring you to turn off all remote images. When it does block something, there’s a very subtle indication — the small “ⓧ” button turns blue. Click that button and you get an alert telling you what it blocked. Simple and unobtrusive.

MailTrackerBlocker is a cool project Lee has made available for free, but he has a sponsor page where you can send some dough to thank him. (I sent him a one-time donation via PayPal — you should too if you dig this as much as I do.)

Spoonbill 

Speaking of Justin Duke, in addition to Buttondown, he also created and runs Spoonbill, a nifty free service that lets you track changes to the bios of the people you follow on Twitter:

How it works.

  1. First, you sign up. (Duh.)

  2. Then we look at all the folks you’re following on Twitter.

  3. We check every couple minutes to see if they’ve changed their profile information.

  4. If they have, we record it!

  5. Then, every morning (or every week), we send you an email with all the changes.

Daily was too much for me, perhaps because I follow too many accounts on Twitter, but once a week is perfect. And you can subscribe via RSS instead of email — this is a very natural service for RSS.
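The mechanics are simple enough to sketch in a few lines. This is my own illustration of the poll-and-diff idea, not Spoonbill’s actual code, and the fetch_profiles callable is a hypothetical stand-in for whatever Twitter API lookups the real service makes:

```python
from datetime import datetime, timezone

FIELDS = ("bio", "name", "location", "website")

def poll_once(fetch_profiles, handles, previous, changelog):
    """Fetch current profiles, diff against the last snapshot, record any changes.

    `fetch_profiles` is a hypothetical callable standing in for the real Twitter
    API lookup; it should return {handle: {"bio": ..., "name": ..., ...}}.
    """
    current = fetch_profiles(handles)
    for handle, profile in current.items():
        old = previous.get(handle, {})
        for field in FIELDS:
            if old.get(field) != profile.get(field):
                changelog.append({
                    "handle": handle,
                    "field": field,
                    "old": old.get(field),
                    "new": profile.get(field),
                    "seen_at": datetime.now(timezone.utc).isoformat(),
                })
    return current  # becomes `previous` for the next poll

def digest(changelog):
    """Format recorded changes into the daily or weekly email (or RSS items)."""
    lines = [f"@{c['handle']} changed {c['field']}: {c['old']!r} -> {c['new']!r}"
             for c in changelog]
    return "\n".join(lines) or "No profile changes."
```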

Mailcoach: Another Self-Hosted Newsletter Service 

“Mailcoach is a self-hosted email marketing platform that integrates with services like Amazon SES, Mailgun, Postmark or Sendgrid to send out bulk mailings affordably.”

Mailcoach lets you disable tracking with a checkbox, and the next version will have tracking off by default.

Sendy: Self-Hosted Newsletter Service Built Atop Amazon SES 

Sendy is an interesting newsletter service recommended by a longtime DF reader:

Sendy is a self hosted email newsletter application that lets you send trackable emails via Amazon Simple Email Service (SES). This makes it possible for you to send authenticated bulk emails at an insanely low price without sacrificing deliverability.

You need to host the PHP application yourself (more or less like self-hosting, say, WordPress), but the emails go out via Amazon’s service. Sendy makes it easy to disable tracking pixels, and, even if you do track subscribers, the tracking information never involves any third parties, including Sendy. Just you.

Sendy’s big pitch isn’t privacy but cost: they claim to be 100-200 times cheaper than MailChimp or Campaign Monitor.

Buttondown: Newsletter Service That Allows Opting Out of Tracking 

It’s hard to find newsletter services that even allow you — the purveyor of the newsletter — not to track your subscribers. Buttondown — from Justin Duke — is one option, and it looks pretty sweet. (Markdown editing, for example.) From Buttondown’s privacy feature page:

Many businesses thrive on the concept of collecting data about individuals based on their email addresses and inbox usage. (You can read about that here.) Buttondown is different. As a bootstrapped business, I don’t need to engage with data on that level. Your information is yours, and yours alone.

Buttondown collects the standard bevy of email analytics: IP addresses, open and click events, client information. Buttondown sends that to absolutely nobody besides, well, you, the beloved customer. And if you want to completely opt out, you can.

By default, Buttondown seems just as privacy-intrusive as all the other newsletter providers:

Track Opens and Clicks — Per-email analytics mean you get an easy funnel of how many folks are engaging with your emails and what content they’re interested in.

Translated to plain English: “Spy tracking allows you to know when each of your subscribers opens and reads your newsletter, including the ability to creep on them individually.” Buttondown’s privacy “win” is that it at least allows you to turn tracking off with a simple checkbox. Most services don’t. I can’t find any hosted service that doesn’t offer tracking at all, or even one that defaults to no tracking.
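If you’re unclear on how “open tracking” works mechanically, here’s a minimal sketch of the sender’s side. It’s my own illustration, not Buttondown’s or anyone else’s actual code, and the tracking domain is made up:

```python
# Minimal illustration of per-recipient open tracking (not any particular
# service's actual code). Each recipient gets a unique pixel URL; every time
# their mail client fetches it, the sender logs an "open".

import uuid

TRACKING_HOST = "https://track.example.com"  # hypothetical domain

def add_tracking_pixel(html_body, recipient_email, campaign_id, registry):
    """Embed a unique, invisible 1x1 image for this recipient."""
    token = uuid.uuid4().hex
    registry[token] = {"recipient": recipient_email, "campaign": campaign_id}
    pixel = (f'<img src="{TRACKING_HOST}/o/{token}.gif" '
             'width="1" height="1" style="display:none" alt="">')
    return html_body + pixel

def record_open(token, registry, opens_log, client_ip, user_agent):
    """Called by the tracking server whenever the pixel URL is fetched."""
    info = registry.get(token)
    if info:
        opens_log.append({**info, "ip": client_ip, "user_agent": user_agent})
```

Every request for that unique URL tells the sender who opened the message, when, how many times, from what client, and roughly where, via IP geolocation.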

[Update: Justin Duke, on Twitter: “thanks for the buttondown mention! agreed that defaulting to opt out of tracking automatically is better: the current default wasn’t a deliberate choice so much as an artifact of the initial behavior’s implementation.” He’s changing the default to not use analytics, as of tonight. Nice!]

One message I’ve heard from folks who would know is that two of the reasons for the ubiquitous use of tracking pixels in newsletters are anti-spam tools (anti-anti-spam tools, really) and the expense of sending emails to people who never read them. Newsletters being flagged as spam — especially by major players like Gmail and Hotmail — is a never-ending game of whack-a-mole, and spy pixels help alert newsletter providers that their messages are being flagged. Expense-wise, those who send free newsletters want to cull from their lists people who never open them or click any of the links. Sending newsletters to thousands (let alone tens of thousands or more) of subscribers is, relatively speaking, expensive.

I’m sympathetic, but that’s a YP, not an MP, so fuck you and your tracking pixels. I’m blocking them and you should too.

But that’s why the world needs a company like Apple to take action. If Apple were to kneecap email tracking in Mail for Mac and iOS, the industry would have to adapt.

Twitter Teases Upcoming Features: Paid ‘Super Follows’ and Community Groups 

Jacob Kastrenakes, reporting for The Verge:

The payment feature, called Super Follows, will allow Twitter users to charge followers and give them access to extra content. That could be bonus tweets, access to a community group, subscription to a newsletter, or a badge indicating your support. In a mockup screenshot, Twitter showed an example where a user charges $4.99 per month to receive a series of perks. Twitter sees it as a way to let creators and publishers get paid directly by their fans.

Twitter also announced a new feature called Communities, which appear to be its take on something like Facebook Groups. People can create and join groups around specific interests — like cats or plants, Twitter suggests — allowing them to see more tweets focused on those topics. Groups have been a huge success for Facebook (and a huge moderation problem, too), and they could be a particularly helpful tool on Twitter, since the service’s open-ended nature can make it difficult for new users to get started on the platform.

Both these features sound great. Ben Thompson and I encouraged Twitter to do something like “Super Follows” a few weeks ago on Dithering. Almost certainly, though, all of this will only work in Twitter’s own client, not third-party apps like Tweetbot and Twitterrific.

Twitter hasn’t said how the economics will work — what cut of the money they’re going to take — but last month, when they acquired Revue, a paid-newsletter rival to Substack, they cut Revue’s take to just 5 percent. (Substack takes 10.)

‘Steve Jobs Stories’ on Clubhouse 

Computer History Museum:

Chris Fralic, Steven Levy, Esther Dyson, Mike Slade, John Sculley, Seth Godin, Andy Cunningham, Dan’l Lewin, Doug Menuez, Regis McKenna, Andy Hertzfeld, and Steven Rosenblatt share their “Steve Jobs Stories” in honor of what would have been the Apple cofounder’s 66th birthday.

I missed the first half of this show on Clubhouse, but caught the second half live. Easily the best event I’ve heard on Clubhouse. Good stories, well told. Nice job by the Computer History Museum getting this recorded and posted to YouTube for posterity.

El Toro ‘One-to-One IP Targeting’ 

“Ad tech” (read: spyware) company El Toro is just one company in an industry full of competitors, but their description of their capabilities struck me as particularly flagrant in its utter disregard for privacy:

As a marketing organization focused on sales not metrics, El Toro’s ad tech brings the location-specific accuracy of direct mail to digital advertising. Through our patented IP Targeting technology we target digital ads to your customer by matching their IP address with their physical address, bringing a wide variety of banner and display ads to the sites the targeted customer visits on the Internet.

Specifically, El Toro offers: Targeting without having to use cookies, census blocks, or geo-location tools.

They claim the ability not just to match your IP address to a general location, but to your exact home street address, and from there to specific devices within your home. Their pitch to would-be advertisers is that they can target you by IP address the same way marketers send all those print catalogs to your house. From their above-linked IP Targeting website:

The El Toro patented algoirthm [sic] uses 38+ points of data to match an IP to a household with 95% accuracy.

Do I believe they can match IPs to street addresses with 95 percent accuracy? No. I wouldn’t believe a word out of these guys’ mouths, to be honest. But the fact that they can do it with any degree of accuracy is a problem that needs to be solved.

Why doesn’t Apple build a VPN into its OSes? Or at least offer one as a perk of paid iCloud accounts? At this point, if privacy truly is a paramount concern, it might be necessary to do everything over a trusted VPN. IP addresses are inherently not private.

From the DF Archive: Superhuman and Email Privacy 

Yours truly, back in July 2019:

They call them “read receipts”, and functionally they do work like read receipts, insofar as they indicate when you read a message. But real email read receipts are under the recipient’s control, and they’re a simple binary flag, read or unread  —  they don’t tell the sender how many times or when you view a message.

This post was about Superhuman in particular, but it applies to all email services using tracking pixels. Email has an official “read receipt” feature, a feature that is under the recipient’s control, as it should be. These spy pixels are a surreptitious circumvention.

I know that mailing list software generally includes tracking pixels. I don’t think that’s ethical either. On a personal level, though, with Superhuman, tracking when and how many times a recipient views a message is simply absurdly wrong.

It’s also something the vast, overwhelming majority of people don’t even realize is possible. I’ve told the basic Superhuman tracking story to a few people over the last few weeks, and asked whether they realized this was possible; all of them expressed shock and many of them outrage as well. Email should be private, and most people assume, incorrectly, that it is. You have to be a web developer of some sort to understand how this is possible. Email is supposed to be like paper mail  —  you send it, they get it, and you have no idea whether they read it or not. It bounces back to you if they never even receive it, say, because you addressed it incorrectly. The original conception of email is completely private.

But also, the original conception of email is that messages are plain text. No fonts, no styles, just plain text, with optional attachments. But those attachments are embedded in the message, not pulled from a server when the message is viewed.

Once we allowed email clients to act as de facto web browsers, loading remote content from servers when messages are viewed, we opened up not just a can of worms but an entire case of canned worms. Every privacy exploit for a web browser is now a privacy exploit for email. But it’s worse, because people naturally assume that email is completely private.

It’s a little depressing re-reading this piece today. Everything I’m arguing today, I argued then. Email privacy in the face of these trackers remains an industry-wide disgrace.


Apple Mail and Hidden Tracking Images

In my piece yesterday about email tracking images (“spy pixels” or “spy trackers”), I complained about the fact that Apple — a company that rightfully prides itself on its numerous features protecting user privacy — offers no built-in defenses against email tracking.

A slew of readers wrote to argue that Apple Mail does offer such a feature: the option not to load any remote resources at all. It’s a setting for Mail on both Mac and iOS, and I know about it — I’ve had it enabled for years. But this is a throwing-the-baby-out-with-the-bathwater approach. What Hey offers — by default — is the ability to load regular images automatically, so your messages look “right”, but block all known images from tracking sources (which are generally 1×1 px invisible GIFs).

Typical users are never going to enable Mail’s option not to load remote content. It renders nearly all marketing messages and newsletters as weird-looking at best, unreadable at worst. And when you get a message whose images you do want to see, when you tell Mail to load them, it loads all of them — including trackers. Apple Mail has no knowledge of spy trackers at all, just an all-or-nothing ability to turn off all remote images and load them manually.

Mail’s “Load remote content in messages” option is a great solution to bandwidth problems — remember to turn it off the next time you’re using Wi-Fi on an airplane, for example. It’s a terrible solution to tracking. No one would call it a good solution to tracking if Safari’s only defense were an option not to load any images at all until you manually click a button in each tab to load them all. But that’s exactly what Apple offers with Mail. (Safari doesn’t block tracking images, but Safari does support content blocking extensions that do — one solution for Mail would be to enable the same content blocker extensions in Mail that are enabled in Safari.)

How does Hey know which images are trackers and which are “regular” images? They can’t know with absolute certainty. But they’ve worked hard on this feature, and have an entire web page promoting it. From that page:

HEY manages this protection through several layers of defenses. First, we’ve identified all the major spy-pixel patterns, so we can strip those out directly. When we find one of those pesky pixels, we’ll tell you exactly who put it in there, and from what email application it came. Second, we bulk strip everything that even smells like a spy pixel. That includes 1x1 images, trackers hidden in code, and everything else we can do to protect you. Between those two practices, we’re confident we’ll catch 98% of all the tracking that’s happening out there.

But even if a spy pixel sneaks through our defenses (and we vow to keep them updated all the time!), you’ll have an effective last line of defense: HEY routes all images through our own servers first, so your IP address never leaks. This prevents anyone from discovering your physical location just by opening an email. Like VPN, but for email.
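To make that concrete, here’s a rough sketch of what pattern-based stripping plus an image proxy looks like. This is my own simplification, not Hey’s actual code (and not what Apple would ship); the tracker host list and proxy endpoint are placeholders:

```python
# A rough simplification of spy-pixel stripping: drop <img> tags that match
# known tracker hosts or look like invisible 1x1 pixels, and rewrite everything
# else through a proxy so the sender never sees the reader's IP address.

import re
from urllib.parse import quote

KNOWN_TRACKER_HOSTS = ("mailchimp.com", "list-manage.com", "sendgrid.net")  # illustrative only
PROXY = "https://img-proxy.example.com/fetch?url="  # hypothetical proxy endpoint

IMG_TAG = re.compile(r"<img\b[^>]*>", re.IGNORECASE)

def _looks_like_tracker(tag):
    src = re.search(r'src=["\']([^"\']+)["\']', tag, re.IGNORECASE)
    if not src:
        return False
    url = src.group(1)
    if any(host in url for host in KNOWN_TRACKER_HOSTS):
        return True
    # 1x1 (effectively invisible) images are almost always trackers.
    w = re.search(r'width=["\']?(\d+)', tag, re.IGNORECASE)
    h = re.search(r'height=["\']?(\d+)', tag, re.IGNORECASE)
    return bool(w and h and int(w.group(1)) <= 1 and int(h.group(1)) <= 1)

def clean_email_html(html):
    def rewrite(match):
        tag = match.group(0)
        if _looks_like_tracker(tag):
            return ""  # strip the spy pixel entirely
        # Route legitimate images through the proxy so your IP never leaks.
        return re.sub(r'src=["\']([^"\']+)["\']',
                      lambda m: f'src="{PROXY}{quote(m.group(1), safe="")}"',
                      tag)
    return IMG_TAG.sub(rewrite, html)
```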

Apple should do something similar: identify and block spy trackers in email by default, and route all other images through an anonymizing proxy service.1 And, like Hey, they should flag all emails containing known trackers with a shame badge. It’s a disgraceful practice that has grown to be accepted industry-wide as standard procedure, because the vast majority of users have no idea it’s even going on. Through reverse IP address geolocation, newsletter and marketing email services track not just that you opened their messages, but when you opened them, and where you were (to the extent that your IP address reveals your location).

No thanks. Apple should offer defenses against email tracking just as robust as Safari’s defenses against web tracking.2 


  1. Gmail has been proxying remote images in messages since 2013. ↩︎

  2. Don’t get me started on how predictable this entire privacy disaster was, once we lost the war over whether email messages should be plain text only or could contain embedded HTML. Effectively all email clients are web browsers now, yet don’t have any of the privacy protection features actual browsers do. ↩︎︎


The Apple Store App Has an Easter Egg 

Search for “10 years” and you get a fun animation. Any others?

The Hidden Message in the Parachute of NASA’s Mars Rover 

Joey Roulette, writing for The Verge:

The parachute that helped NASA’s Perseverance rover land on Mars last week unfurled to reveal a seemingly random pattern of colors in video clips of the rover’s landing. But there was more to the story: NASA officials later said it contained a hidden message written in binary computer code.

Internet sleuths cracked the message within hours. The red and white pattern spelled out “Dare Mighty Things” in concentric rings. The saying is the Perseverance team’s motto, and it is also emblazoned on the walls of Mission Control at NASA’s Jet Propulsion Laboratory (JPL), the mission team’s Southern California headquarters.

The parachute’s outer ring appears to translate to coordinates for JPL: 34°11′58″ N 118°10′31″ W.

Tonya Fish posted a handy guide on Twitter (also available as a PDF) explaining how the code works. (Via Kottke.)
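The underlying scheme, as the decoders described it, is charmingly simple: red segments read as 1s, white segments as 0s, and each chunk of bits encodes a letter by its position in the alphabet. Here’s a toy decoder along those lines — my own simplification; the real parachute’s chunking and padding are a bit more involved:

```python
# Toy decoder for the scheme the sleuths described: each chunk of bits encodes
# a letter by alphabet position (1 = A ... 26 = Z). Values outside 1-26 are
# treated here as literal numbers (a simplification, used for the coordinates).

def decode_chunks(chunks):
    out = []
    for bits in chunks:
        value = int(bits, 2)
        if 1 <= value <= 26:
            out.append(chr(ord("A") + value - 1))
        else:
            out.append(str(value))
    return " ".join(out)

# "DARE" as it would be encoded: D=4, A=1, R=18, E=5
print(decode_chunks(["0000100", "0000001", "0010010", "0000101"]))  # D A R E
```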

Seems sad to me that NASA and JPL are willing to have some fun with clever Easter eggs on a Mars rover, yet Apple, of all companies, no longer does any Easter eggs at all. Computers are supposed to be fun.

BBC News: ‘Spy Pixels in Emails Have Become Endemic’ 

Speaking of Hey, BBC News ran a piece on email spy pixels last week:

The use of “invisible” tracking tech in emails is now “endemic”, according to a messaging service that analysed its traffic at the BBC’s request. Hey’s review indicated that two-thirds of emails sent to its users’ personal accounts contained a “spy pixel”, even after excluding for spam. […]

Defenders of the trackers say they are a commonplace marketing tactic. And several of the companies involved noted their use of such tech was mentioned within their wider privacy policies.

“It’s in our privacy policy” is nonsense when it comes to email spy pixels. It’s nonsense for most privacy policies, period, because most privacy policies are so deliberately long, opaque, and abstruse as to be unintelligible. But with email they’re absurd. The recipient of an email containing a tracking pixel never agreed to any privacy policy from the sender.

And “it’s a commonplace marketing tactic” is not a defense. It’s an excuse, but it’s a shitty one. It just shows how out of control the entire tracking industry is. Their justification for all of it is, effectively, “It’s pervasive so it must be OK.” That’s like saying back in the 1960s that most people smoke so it must be safe. Or that most people don’t wear seat belts so that must be safe.

Email pixels can be used to log:

  • if and when an email is opened
  • how many times it is opened
  • what device or devices are involved
  • the user’s rough physical location, deduced from their internet protocol (IP) address - in some cases making it possible to see the street the recipient is on

Hey’s default blocking of spy pixels — along with displaying a prominent badge shaming the sender for using them — is one of its best features. Apple should take a long hard look at Mail and the way that it does nothing to protect users’ privacy from these trackers. They’re insidious and offensive.

‘Hey, World!’ 

Jason Fried, on an experimental blogging service Basecamp has built into their email service Hey:

So we set out to do it. To test the theory. And over the last few weeks we built it into HEY, our new email service. We’re calling the feature HEY World. This post you’re reading right now is the world’s first HEY World post. And I published it by simply emailing this text directly to world@hey.com from my jason@hey.com account. That was it.

For now, this remains an experiment. I’ve got my own HEY World blog, and David has his. We’re going to play for a while. And, if there’s demand, we’ll roll this out to anyone with a personal @hey.com account. It feels like Web 1.0 again in all the right ways. And it’s about time.

Speaking of Web 1.0, HEY World pages are lightning fast. No javascript, no tracking, no junk. They’re a shoutout to simpler times. Respect.

You can subscribe to a Hey World blog via email (of course) or RSS. Feels like simple stuff — like RSS — is experiencing a renaissance.

‘Hello, World’ 

MIT’s Computer Science & Artificial Intelligence Lab:

Today’s the day that “hello world” said “hello world!”

The term was coined in a textbook published #otd in 1978: “C Programming Language,” written by Brian Kernighan and Dennis Ritchie.

Tweeted yesterday, so it’s no longer “on this day”, sorry, but interesting history nonetheless.

I still write “Hello, world” as a first exercise in any new language or programming environment. Not a superstition per se, but more like a talisman. Just seems like the right thing to do.

The C Programming Language is a wonderfully written book. It explains the basics of C better than anything I’ve ever seen. C is a weird, hard language, but K&R describe it with joy. It’s a serious book written in a conversational style.

‘I’m Being Censored, and You Can Read, Hear, and See Me Talk About It in the News, on the Radio, and on TV’ 

Eli Grober, writing for McSweeney’s:

Hi there, thanks for reading this. I’m being censored. That’s why I’m writing a piece in a major publication that you are consuming easily and for free. Because I am being absolutely and completely muzzled.

Also, I just went on a massively-watched TV show to let you know that my voice is being down-right suffocated. I basically can’t talk to anyone. Which is why I’m talking to all of you.

As Jeanetta Grace Susan has convincingly argued, conservative voices are being silenced.

500,000 Lives Lost 

Staggering, sobering data visualization from Reuters.

Mux Video 

My thanks to Mux for once again sponsoring DF last week. Mux Video is an API to powerful video streaming — think of it as akin to Stripe for video — built by the founders of Zencoder and creators of Video.js, and a team of ex-YouTube and Twitch engineers. Take any video file or live stream and make it play beautifully at scale on any device, powered by magical-feeling features like automatic thumbnails, animated GIFs, and data-driven encoding decisions.

Spend your time building what people want, not drudging through ffmpeg documentation.


How ‘Unlock With Apple Watch’ While Wearing a Face Mask Works in iOS 14.5

I don’t generally write about features in beta versions of iOS. In fact, I don’t generally install beta versions of iOS, at least on my main iPhone. But the new “Unlock With Apple Watch” feature, which kicks in when you’re wearing a face mask, was too tempting to resist.

First things first: to use this feature, you need to install iOS 14.5 on your iPhone and WatchOS 7.4 on your Apple Watch (both of which are, at this writing, on their second developer betas). So far, for me, these OS releases have been utterly reliable. Your mileage may vary, and running a beta OS on your daily-carry devices is always at your own risk. But I think the later we go in OS release cycles, the more stable the betas tend to be. Over the summer, between WWDC and the September (or October) new iPhone event, iOS releases can be buggy as hell. The x.1 releases are usually the stable ones, and the releases after that tend to be very stable in beta — Apple uses these releases to fix bugs and to add new features that are stable. If anything, I think iOS 14.5 is very stable technically, and only volatile politically, with the new opt-in requirement for targeted ad user tracking.

After using this feature for a few weeks now, I can’t see going back. As the designated errand runner in our quarantined family, I find it a game changer. Prior to iOS 14.5, using a Face ID iPhone while wearing a face mask sucked. Every single time you unlocked your phone, you needed to enter the passcode/passphrase. The longer your passcode, the more secure it is (of course), but the more annoying it is to enter incessantly.

“Unlock With Apple Watch” eliminates almost all of that annoyance. It’s that good. It’s optional (as it should be), and off by default (also as it should be, for reasons explained below). It’s easy to turn on in Settings on your iPhone: go to Face ID & Passcode, enter your passcode, and scroll down to the “Unlock With Apple Watch” section, where you’ll find toggles for each Apple Watch (running WatchOS 7.4 or later) paired with your iPhone.

Here is how the feature seems to work.

  1. Does Face ID work normally? I.e. is the face in front of the phone you, the owner, and are you not wearing a mask? If so, unlock normally. Normal non-mask Face ID is unchanged when this feature is enabled.

  2. If Face ID fails, is there a face wearing a mask in front of the phone? If so, is an authorized Apple Watch in a secure state (i.e. the watch itself is unlocked and on your wrist) and very close to the iPhone? If so, unlock, and send a notification to the watch stating that the watch was just used to unlock this iPhone. The notification sent to the watch includes a button to immediately lock the iPhone.
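Sketched as code, the decision flow looks roughly like this. To be clear, this is my reconstruction of the observed behavior, not Apple’s actual logic:

```python
# My reconstruction of the observed "Unlock With Apple Watch" behavior.
# All parameters are observations about the current attempt, not real APIs.

def unlock_decision(face_id_match: bool, face_has_mask: bool,
                    watch_enabled: bool, watch_unlocked_on_wrist: bool,
                    watch_within_range: bool) -> str:
    # Step 1: normal Face ID, unchanged when the feature is enabled.
    if face_id_match and not face_has_mask:
        return "unlock"

    # Step 2: any masked face, plus a nearby, authorized, unlocked watch on a
    # wrist. Unlocking this way always fires a wrist notification with a
    # "Lock iPhone" button that hard-locks the phone.
    if (face_has_mask and watch_enabled
            and watch_unlocked_on_wrist and watch_within_range):
        return "unlock and notify watch"

    # Anything else (sunglasses Face ID can't see through, watch in Sleep mode,
    # watch too far away): fall back to the passcode, as before.
    return "require passcode"
```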

Because it’s a two-step process (step #1 first, then step #2), it does take a bit longer than Face ID without a mask (which is really just step #1). But it works more than fast enough to remain a pleasant convenience. Regular Face ID is so fast you forget it’s even there; “Unlock With Apple Watch” is slow enough that you notice it’s there, but fast enough that it isn’t a bother.

It’s important to note that in step #2, it works with any face wearing a mask. It’s not trying to do a half-face check that your eyes and forehead look like you, or anything like that. My iPhone will unlock if my wife or son is the face in front of my iPhone — but only if they’re wearing a mask, and only if my Apple Watch is very close to the phone. I’d say less than 1 meter — pretty much what you’d expect the maximum distance to be between a watch on one wrist and an iPhone in the other hand.

When this feature kicks in, you always get a wrist notification telling you it happened, with just one button: “Lock iPhone”. If you tap this button, the iPhone is immediately hard-locked and requires your passcode to be re-entered even if you take your mask off. (It’s the same hard-locked mode you can put your iPhone into manually by pressing and holding the power button and one of the volume buttons — a good tip to remember when going through a security checkpoint or any other potential encounter with law enforcement.)

I’m not sure if anyone will be annoyed by this mandatory wrist notification, but they shouldn’t be, and it shouldn’t be optional. You want this notification every time to prevent anyone from surreptitiously unlocking your iPhone near you, just by putting a face mask on.

Also, if your Apple Watch is in Sleep mode (the bed icon in WatchOS’s Control Center), the feature does not work.

It’s occasionally slow. And two or three times, I got a message on my iPhone that my watch was too far away for the feature to work, even though I raised my watch-wearing wrist next to the phone. These hiccups were rare, and to my recollection, I only ran into them with iOS 14.5 beta 1, not beta 2.

Even in the worst case scenario, where the feature doesn’t work, you’re no worse off than you were before the feature existed: you simply have to manually enter your phone’s passcode.

Last but not least, the “Unlock With Apple Watch” feature very specifically seems to be looking for a face wearing a face mask. The feature does not kick in if Face ID fails for any other reason — like, say, if you’re wearing sunglasses with lenses that Face ID can’t see through. (I wish they’d make this work with sunglasses, too.)

Addenda

Throwing Shade: There seems to be some confusion over what I’m asking for w/r/t sunglasses. Face ID has always supported an option to turn off “Require Attention for Face ID”. When off, Face ID will work even if it doesn’t detect your eyes looking at the screen. (It’s an essential accessibility feature for people with certain vision problems.) If you own sunglasses that the iPhone’s TrueDepth camera system can’t “see” through, you can disable “Require Attention for Face ID” to allow Face ID to work while you’re wearing your shades.

This is far from ideal though, because it weakens Face ID all the time, not just when you’re wearing sunglasses. What’s nice about the new “Unlock With Apple Watch” feature is that it only applies when you’re wearing a mask and your Apple Watch. What I’m saying I’d like to see Apple support is an extension of “Unlock With Apple Watch” that would do the same thing for sunglasses that it currently does for face masks. I’ve heard from readers who have trouble with Face ID when wearing their motorcycle helmets, too, and I’m sure there are other examples. Basically, I’d like to see Apple add the option of trusting your Apple Watch to unlock your iPhone in more scenarios where your face can’t be recognized. My request is very different from, and more secure than, the existing “Require Attention” feature.

(Speaking of which, while wearing a mask, “Unlock With Apple Watch” does not check for whether your eyes are looking at the display, regardless of your setting for “Require Attention for Face ID”. Again, this makes sense, because it’s not Face ID — “Unlock With Apple Watch” is an alternative authentication method that kicks in after Face ID has failed.)

Apple Pay: I didn’t mention the fact that “Unlock With Apple Watch” does not work with Apple Pay. This makes sense, because however secure “Unlock With Apple Watch” is (and I think it’s quite secure), it’s not as secure as Face ID authenticating your actual face. For payments, you obviously want the highest level of secure authentication.

Also, for Apple Pay, if you’re wearing your Apple Watch (a requirement for “Unlock With Apple Watch”), you can just use your Apple Watch for Apple Pay.

It also doesn’t work with apps that use Face ID for authentication within them. Banking apps, for example, or unlocking locked notes in Apple Notes. But this makes sense too — the feature is specifically called “Unlock With Apple Watch”. It unlocks your phone, that’s it. Anything else that requires Face ID for secure authentication still requires Face ID. 


The Talk Show: ‘Peak Hubris’ 

Christina Warren returns to the show to talk about Apple Car, Apple TV, Clubhouse, and Bloomberg hamfistedly revisiting “The Big Hack”.

Sponsored by:

  • Squarespace: Make your next move. Use code talkshow for 10% off your first order.
  • Linode: Instantly deploy and manage an SSD server in the Linode Cloud. New accounts can get $100 credit.
  • Flatfile: Spend less time formatting spreadsheet data, and more time using it.

Tim Berners-Lee Worries Australian Law Could Make the Web ‘Unworkable’ 

Anthony Cuthbertson, reporting for The Independent:

“Specifically, I am concerned that that code risks breaching a fundamental principle of the web by requiring payment for linking between certain content online,” Berners-Lee told a Senate committee scrutinizing a bill that would create the New Media Bargaining Code.

If the code is deployed globally, it could “make the web unworkable around the world”, he said.

It’s a question dividing proponents and critics of the proposed Australian law: does it effectively make Google and Facebook “pay for clicks” and might it be the beginning of the end of free access?

I don’t know what this Berners-Lee guy knows about the web, but I agree.

Rich Mogull on Apple’s Updated 2021 Platform Security Guide 

Rich Mogull, writing at TidBits, on Apple’s 2021 Platform Security Guide:

As wonderful as the Apple Platform Security guide is as a resource, writing about it is about as easy as writing a hot take on the latest updates to the dictionary. Sure, the guide has numerous updates and lots of new content, but the real story isn’t in the details, but in the larger directions of Apple’s security program, how that impacts Apple’s customers, and what it means to the technology industry at large.

From that broader perspective, the writing is on the wall. The future of cybersecurity is vertical integration. By vertical integration, I mean the combination of hardware, software, and cloud-based services to build a comprehensive ecosystem. Vertical integration for increased security isn’t merely a trend at Apple, it’s one we see in wide swaths of the industry, including such key players as Amazon Web Services. When security really matters, it’s hard to compete if you don’t have complete control of the stack: hardware, software, and services.

Apple Cracks Down on Apps With ‘Irrationally High Prices’ as App Store Scams Are Exposed 

Guilherme Rambo, writing for 9to5Mac:

App Store scams have recently resurfaced as a developer exposed several scam apps in the App Store making millions of dollars per year. Most of these apps exploit fake ratings and reviews to show up in search results and look legit, but trick users into getting subscriptions at irrationally high prices.

It looks like Apple has started to crack down on scam attempts by rejecting apps that look like they have subscriptions or other in-app purchases with prices that don’t seem reasonable to the App Review team.

From the rejection letter sent by the App Store review team:

Customers expect the App Store to be a safe and trusted marketplace for purchasing digital goods. Apps should never betray this trust by attempting to rip-off or cheat users in any way.

Unfortunately, the prices you’ve selected for your app or in-app purchase products in your app do not reflect the value of the features and content offered to the user. Charging irrationally high prices for content or services with limited value is a rip-off to customers and is not appropriate for the App Store.

Specifically, the prices for the following items are irrationally high:

This is exactly the sort of crackdown I’ve been advocating for years. A bunco squad that looks for scams, starting with apps that (a) have high-priced in-app purchases and subscriptions, and (b) are generating a lot of money. Ideally Apple will crack down on all scams, but practically speaking, all that matters is that they identify and eliminate successful scams — and identify the scammers behind them and keep them out of the store.

Developer Kosta Eleftheriou has been righteously leading a sort of indie bunco squad for a few weeks, identifying a slew of scams (usually involving apps with clearly fraudulent ratings, too).

Nomination for Lede of the Year 

Ashley Parker, reporting for The Washington Post:

Usually, it takes at least one full day in Cancun to do something embarrassing you’ll never live down.

But for Ted Cruz (R-Tex.), it took just 10 hours — from when his United plane touched down at Cancun International Airport at 7:52 p.m. Wednesday to when he booked a return flight back to Houston around 6 a.m. Thursday — for the state’s junior senator to apparently realize he had made a horrible mistake.

Give Cruz credit for this: he’s brought the whole nation together in unity.

Pfizer’s Vaccine Works Well With One Dose 

The New York Times:

A study in Israel showed that the vaccine is robustly effective after the first shot, echoing what other research has shown for the AstraZeneca vaccine and raising the possibility that regulators in some countries could authorize delaying a second dose instead of giving both on the strict schedule of three weeks apart as tested in clinical trials. […]

Published in The Lancet on Thursday and drawing from a group of 9,100 Israeli health care workers, the study showed that Pfizer’s vaccine was 85 percent effective 15 to 28 days after receiving the first dose. Pfizer and BioNTech’s late-stage clinical trials, which enrolled 44,000 people, showed that the vaccine was 95 percent effective if two doses were given three weeks apart. […]

Pfizer and BioNTech also announced on Friday that their vaccine can be stored at standard freezer temperatures for up to two weeks, potentially expanding the number of smaller pharmacies and doctors’ offices that could administer the vaccine, which now must be stored at ultracold temperatures.

The U.S. needs to change its policy and get more shots into more arms as quickly as possible. Administer the second shots in the summer, after a majority of Americans have gotten their first. The current policy is simply wrong, given the data, and is halving the rate at which we can achieve herd immunity.
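The arithmetic behind that last claim, using the efficacy figures quoted above and ignoring, for simplicity, the question of when second doses would eventually be given:

```python
# Rough arithmetic only, using the efficacy figures quoted above. Assumes doses
# are the limiting factor; the timing of eventual second doses is ignored.

weekly_doses = 1_000_000  # any fixed supply works; the ratio is what matters

# Current policy: two doses per person, ~95% effective.
protected_two_dose = (weekly_doses / 2) * 0.95   # 475,000 people

# First-doses-first: one dose per person, ~85% effective after 15-28 days.
protected_first_dose = weekly_doses * 0.85       # 850,000 people

print(protected_two_dose, protected_first_dose)
```

Per batch of doses, first-doses-first protects nearly twice as many people in the short term.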

Tucker Carlson Detects Other Suspicious Behaviors 

If we were to debate which newspaper is better, The New York Times or Washington Post, Alexandra Petri would be one of my top arguments in favor of the Post.

Bruce Blackburn, Designer of Ubiquitous NASA Logo, Dies at 82 

A bit of sad NASA-related news today, too:

Bruce Blackburn, a graphic designer whose modern and minimalist logos became ingrained in the nation’s consciousness, including the four bold red letters for NASA known as the “worm” and the 1976 American Revolution Bicentennial star, died on Feb. 1 in Arvada, Colo., near Denver. He was 82. […]

In a design career of more than 40 years, Mr. Blackburn developed brand imagery for clients like IBM, Mobil and the Museum of Modern Art. But he is best known for the NASA worm, which has become synonymous with space exploration and the concept of the technological future itself.

I’m glad he lived long enough to see NASA re-embrace his wonderful logo. It’s such a perfect mark — one that will always feel like a symbol of the future.

Update: NASA’s 1976 “Graphics Standards Manual” — 60-page document on how to use the logo. This is how you do it.

NASA’s Perseverance Rover Lands on Mars 

Kenneth Chang, reporting for The New York Times:

NASA safely landed a new robotic rover on Mars on Thursday, beginning its most ambitious effort in decades to directly study whether there was ever life on the now barren red planet.

While the agency has completed other missions to Mars, the $2.7 billion robotic explorer, named Perseverance, carries scientific tools that will bring advanced capabilities to the search for life beyond Earth. The rover, about the size of a car, can use its sophisticated cameras, lasers that can analyze the chemical makeup of Martian rocks and ground-penetrating radar to identify the chemical signatures of fossilized microbial life that may have thrived on Mars when it was a planet full of flowing water.

Great landing, and a great day for science.

More here, from NASA’s own website.

‘Smart’ TVs Track Everything You Watch 

Geoffrey Fowler, writing for The Washington Post back in September 2019:

Lately I’ve been on the hunt for what happens to my data behind the cloak of computer code and privacy policies. So I ran an experiment on my own Internet-connected Samsung, as well as new “smart TV” models from four of the best-selling brands: Samsung, TCL Roku TV, Vizio and LG.

I set up each smart TV as most people do: by tapping “OK” with the remote to each on-screen prompt. Then using software from Princeton University called the IoT Inspector, I watched how each model transmitted data. Lots went flying from streaming apps and their advertising partners. But even when I switched to a live broadcast signal, I could see each TV sending out reports as often as once per second.

When tracking is active, some TVs record and send out everything that crosses the pixels on your screen. It doesn’t matter whether the source is cable, an app, your DVD player or streaming box.

Every damn second. Disconnect your TV from the internet and use a set-top box or streaming stick with some degree of privacy you can control. Even if you’re not worried about the privacy angle, it’s just a waste of bandwidth. And even if you’re not that concerned with the bandwidth, per se, it’s just obnoxious. It should bother you on aesthetic grounds alone to have a TV set needlessly phoning home constantly to send analytics that don’t help you at all.

Roku Streaming Devices Default to ‘Scary’ Privacy 

Mozilla’s Privacy Not Included project’s take on Roku:

Roku is the nosey, gossipy neighbor of connected devices. They track just about everything! And then they share that data with way too many people. According to Roku’s privacy policy, they share your personal data with advertisers to show you targeted ads and create profiles about you over time and across different services and devices. Roku also gives advertisers detailed data about your interactions with advertisements, your demographic data, and audience segment. Roku shares viewing data with measurement providers who may target you with ads. Roku may share your personal information with third parties for their own marketing purposes. One of the researchers working on this guide said, “It had such a scary privacy policy, I didn’t even connect it to my TV.” Another researcher referred to Roku as a “privacy nightmare.”

You can opt-out, but they won’t ask you. You have to go look for it, which means most Roku users don’t even know they’re being snooped on this way.

Most (all?) major smart TVs are privacy disasters too. Privacy is probably the main Apple TV advantage I didn’t mention the other day when speculating on why Apple TV even still exists. But even on an Apple TV box, you’re at the mercy of each app you use, and the major streaming services all collect information on everything you do. I mean, how else would their recommendation algorithms work? Or even just picking up where you left off in a movie you paused a day or two ago?

But Roku (and similar boxes, and smart TVs) track you at the system level.

I don’t let my LG TV connect to the internet. I mean why would I, if I don’t use its built-in apps for anything?

Apple TV+ Is Now Available on Google TV 

Jonathan Zepp, writing on the Google Blog:

Starting today, the Apple TV app, including Apple TV+, is now globally available on the new Chromecast with Google TV, with more Google TV devices to come. To access the Apple TV app, navigate to the Apps tab or the apps row in the For you tab.

What’s left on the list of devices where Apple TV could be available but isn’t? Nintendo Switch — but they don’t even have Netflix. What else?

‘Facebook Calls Australia’s Bluff’ 

Casey Newton, writing at Platformer:

On Wednesday morning, the splintering arrived: Google cut a deal with News Corp. that will ensure its services continue to be provided in Australia, and Facebook walked away from the bargaining table and began preventing people from sharing news links from Australian publishers around the world.

I think Facebook basically did the right thing, and Google basically did the wrong thing, even though Google had a much tougher call to make. Today, let’s talk about why the tech giants made the decisions that they did, why Australia’s shakedown is rotten, and what’s likely to happen next.

Calling Australia’s bluff is exactly the right framing. What’s surprising is that Australian government officials (and others around the world, like David Cicilline, chairman of the U.S. House Antitrust Subcommittee), didn’t even see it as a bluff that could be called. The mindset behind this law seemed to be that Australia could demand whatever crazy stuff they wanted (like Facebook being required to pay major news organizations just for links to their articles — which the news organizations themselves would be free to post to their own Facebook accounts) and Facebook and Google would just say “OK, sure.”