eBay Is Delisting Dr. Seuss Books Taken Out of Print This Week 

Jeffrey A. Trachtenberg, reporting for The Wall Street Journal:

Online marketplace eBay Inc. said it is working to prevent the resale of six Dr. Seuss books that were pulled earlier this week by the company in charge of the late author’s works because they contain offensive imagery.

“EBay is currently sweeping our marketplace to remove these items,” a spokeswoman for the company said in an email. New copies of the six books were no longer for sale online at major retailers such as Barnes & Noble on Thursday afternoon, which put eBay among the most prominent platforms for the books to be sold.

Harry McCracken:

Ending publication of these books is a reasonable move, but I’m not sure about eBay’s logic here, other than avoiding bad PR. Unless it now wants to police its site and remove every old item containing offensive stereotypes, which would be a LOT of stuff.

(BTW, I would also have been fine with Dr. Seuss Enterprises revising these books to remove the stereotypes. They’ve already tampered with the Dr.’s legacy in a zillion ways that bother me a lot more.)

I agree with McCracken on both points. I mean, you can buy copies of Mein Kampf but not If I Ran the Zoo? Banning books is always a sign of out-of-control zealotry.

The EFF: ‘Google’s FLoC Is a Terrible Idea’ 

Bennett Cyphers, writing for the EFF:

Google is leading the charge to replace third-party cookies with a new suite of technologies to target ads on the Web. And some of its proposals show that it hasn’t learned the right lessons from the ongoing backlash to the surveillance business model. This post will focus on one of those proposals, Federated Learning of Cohorts (FLoC), which is perhaps the most ambitious — and potentially the most harmful.

FLoC is meant to be a new way to make your browser do the profiling that third-party trackers used to do themselves: in this case, boiling down your recent browsing activity into a behavioral label, and then sharing it with websites and advertisers. The technology will avoid the privacy risks of third-party cookies, but it will create new ones in the process. It may also exacerbate many of the worst non-privacy problems with behavioral ads, including discrimination and predatory targeting.

Google’s pitch to privacy advocates is that a world with FLoC (and other elements of the “privacy sandbox”) will be better than the world we have today, where data brokers and ad-tech giants track and profile with impunity. But that framing is based on a false premise that we have to choose between “old tracking” and “new tracking.” It’s not either-or. Instead of re-inventing the tracking wheel, we should imagine a better world without the myriad problems of targeted ads.

This helps explain Google’s message yesterday. They’re moving their tracking from third-party cookies that they process through the cloud to tracking that’s done in Chrome. Basically I think that’s it.

If you prefer a Chromium-based browser, you should use one other than Chrome. I like Brave for my (occasional) Chromium browser needs, but Microsoft’s Edge might be a good choice too. Brave bills itself as a privacy browser; at this point it seems fair to say Google is turning Chrome into an anti-privacy browser. It’s that simple.
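
To make the quoted description a little more concrete, here is a deliberately crude sketch of the kind of on-device reduction FLoC performs: the browser boils your recent browsing history down to a short cohort label that any site can ask for. This is not Google’s actual algorithm or API (the origin trial reportedly derived cohorts with SimHash over visited sites, plus filtering of small or sensitive cohorts); it’s just a toy illustration of the concept.

```python
import hashlib

def cohort_id(visited_domains, num_cohorts: int = 4096) -> int:
    """Toy stand-in for FLoC: reduce a browsing history to one coarse label.
    Real FLoC reportedly used SimHash and filtered small/sensitive cohorts;
    this simply hashes the sorted set of visited domains into a bucket."""
    fingerprint = "|".join(sorted(set(visited_domains))).encode()
    digest = hashlib.sha256(fingerprint).digest()
    return int.from_bytes(digest[:4], "big") % num_cohorts

if __name__ == "__main__":
    history = ["daringfireball.net", "eff.org", "nytimes.com", "wikipedia.org"]
    # Every site you visit could read this one number and feed it to its ad stack.
    print(cohort_id(history))
```

The privacy concern is exactly that last line: a single, reasonably stable number summarizing your browsing, handed to every site you visit.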


Apple’s App Store Privacy Nutrition Labels Depend on the Honor System, and, Surprising No One, Some Developers Are Dishonest

Speaking of Geoffrey Fowler, he asked an interesting question in a column last month: are the new privacy “nutrition” labels Apple requires developers to supply actually accurate?

I downloaded a de-stressing app called the Satisfying Slime Simulator that gets the App Store’s highest-level label for privacy. It turned out to be the wrong kind of slimy, covertly sending information — including a way to track my iPhone — to Facebook, Google and other companies. Behind the scenes, apps can be data vampires, probing our phones to help target ads or sell information about us to data firms and even governments.

As I write this column, Apple still has an inaccurate label for Satisfying Slime. And it’s not the only deception. When I spot-checked what a couple dozen apps claim about privacy in the App Store, I found more than a dozen that were either misleading or flat-out inaccurate. They included the popular game Match 3D, social network Rumble and even the PBS Kids Video app. (Say it ain’t so, Elmo!) Match and Rumble have now both changed their labels, and PBS changed some of how its app communicates with Google.

The PBS Kids Video app is eyebrow-raising, but it seems to have been a genuine mistake on PBS’s part:

You can spot the squishiness of the labels in a back-and-forth I had with PBS about the app store listing for its popular PBS Kids Video app. We found the app sending my phone’s ID to Google, even though its label said it didn’t collect data that could be linked to me. PBS told me the label reflected an update to the app it eventually published on Jan. 28, in which Google no longer gets sent my ID but still helps PBS measure performance.

Effectively, PBS submitted a privacy nutrition label based on changes to their app that weren’t yet — but soon were — live in the App Store. The rest of the inaccurate nutrition labels Fowler found belong to rather obscure apps.

Fowler concludes that these labels are useless if they’re not guaranteed to be accurate. There ought to be penalties for falsifying information on these labels. But it clearly isn’t practical for Apple to verify every label for every app in the store. I don’t think that’s any different from the mandatory nutrition labels on food products. The FDA doesn’t verify those labels — it’s the threat of penalties and bad publicity that motivates companies to report accurate information on them. I don’t know anyone who thinks mandatory food nutrition labels are useless, even though surely many of them contain incorrect information.

And if Apple’s new privacy labels are useless, why are so many apps making changes to their actual privacy practices? Absent these labels, would PBS have removed the tracking identifier from its PBS Kids app? I’m guessing not. It’s good to raise awareness that the information on these labels is self-reported by the developers, and that Apple doesn’t (and practically speaking can’t) verify them technically, but I think we’re already seeing clear evidence that they’re motivating developers to remove or reduce privacy-invasive tracking from their apps.

This point from Fowler, however, I agree is a major shortcoming:

Even with its update, the label is still missing an important piece of information: There’s Google inside.

Nowhere on any of Apple’s privacy labels, in fact, do we learn with whom apps are sharing our data. Imagine if nutrition facts labels left off the whole section about ingredients.

Apple’s next crack at these labels should make it mandatory to list exactly which third parties data is shared with.


Google’s Search Results Have Gotten Worse 

Geoffrey Fowler, writing for The Washington Post back in October:

Over the last two decades, Google has made changes in drips rather than big makeovers. To see how search results have changed, what you’d need is a time machine. Good news: We have one of those!

The Internet Archive’s Wayback Machine stored some Google search results over the years. When we look back, a picture emerges of how Google increasingly fails us. There’s more space dedicated to ads that look like search results. More results start with answer “snippets” — sometimes incorrect — ripped from other sites. And increasingly, results point you back to Google’s own properties such as Maps and YouTube, where it can show more ads and gather more of your data.

I’d say Google’s biggest weakness in search isn’t that would-be competitors have gotten better, but what Fowler illustrates here: Google’s own search results have clearly gotten worse. The comparisons of how far down the page they sometimes push the top actual result are eye-opening. It’s been a slow boil from the Google of old to today, but if you took a Google search user from 2005 and showed them Google search today, they’d think it was halfway to Idiocracy. (Personally, it seems clear to me that the quality of Google search results — or at least the presentation of those results — started its decline when Marissa Mayer left Google to become CEO of Yahoo in 2012.)

This, effectively, is why I’ve been happy using DuckDuckGo as my default search engine for years now. I don’t think the breadth or accuracy of their actual search results is as good as Google’s, but because their presentation of results is better — far less cluttered, often with no ads in the results at all, never with more than two ads — I find the overall experience to be better, even putting aside all my concerns about Google and privacy.

The other thing I wonder about is how much modern web browsers have broken typical users of the habit of “going to Google”. How many people actually go to google.com to search, and how many just type search terms in the browser location field? If most people just type search terms in the location field, a browser that switches from Google to another engine by default will switch those users automatically. How many people would even notice a switch given that nearly all search engines style results in a generally Google-like way?

Brave Buys a Search Engine, Promises No Tracking, No Profiling 

Thomas Claburn, writing for The Register:

Brave intends to make Tailcat the foundation of its own search service, Brave Search. The company hopes that its more than 25 million monthly active Brave customers will, after an initial period of testing and courtship, choose to make Brave Search their default search engine and will use it alongside other parts of its privacy-oriented portfolio, which also includes Brave Ads, news reader Brave Today, Brave Firewall+VPN, and video conferencing system Brave Together.

Brave Search, the company insists, will respect people’s privacy by not tracking or profiling those using the service. And it may even offer a way to end the debate about search engine bias by turning search result output over to a community-run filtering system called Goggles.

The service will, eventually, be available as a paid option — for those who want to pay for search results without ads — though its more common incarnation is likely to be ad-supported, in conjunction with Brave Ads. The latter offers participants the option to receive 70 per cent of the payment made by the advertiser in a cryptocurrency called BAT (Basic Attention Token).

When, if ever, will popular browsers start defaulting to search engines other than Google? That’s the question.

And for Apple in particular, it’s a question of an enormous sum of money. The exact figure Google pays Apple in traffic acquisition costs as a result of it being the default search engine in Safari (which in turn is the default browser on iOS and the Mac) is a tightly held secret. But Goldman Sachs analyst Rod Hall estimated the figure at $9.5 billion for 2018 and $12 billion for 2019.

Putting aside the question of whether any non-Google search engine provides good enough search results to replace Google as Safari’s default — a huge question! — if Apple were to make such a move in the name of privacy, it almost certainly would come as a multi-billion dollar annual hit to the company’s Services revenue.

Apple’s total Services revenue for FY2020 was about $54 billion. Would they take a $10 billion hit to that in the name of privacy? (Perhaps more interesting to flip the question around: If they care so deeply about privacy as a human right, why haven’t they already?)


Google’s Outsized Share of Advertising Money

Sam Schechner and Keach Hagey, reporting yesterday for The Wall Street Journal (News+ link):

Google’s heft means the change could reshape the digital ad business, where many companies rely on tracking individuals to target their ads, measure the ads’ effectiveness and stop fraud. Google accounted for 52% of last year’s global digital ad spending of $292 billion, according to Jounce Media, a digital ad consultancy.

About 40% of the money that flows from advertisers to publishers on the open internet — meaning digital advertising outside of closed systems such as Google Search, YouTube or Facebook — goes through Google’s ad buying tools, according to Jounce.

I linked to this same story yesterday, when writing about Google’s opaque announcement about their advertising plans in a world where third-party cookies no longer work in Chrome. I’ve been thinking ever since about the size of these figures. Even if we take these estimates from Jounce with a grain of salt, these are huge figures: 52 percent of $292 billion is roughly $152 billion.

At a certain level it just doesn’t feel justified that Google should be involved with this much of the world’s advertising spend. Fundamentally, the money should be going from advertisers to content makers who are displaying the ads. Ad revenue should be, to some degree, commensurate with attention share. Google garners a humongous share of the world’s daily attention, but not half. Not even close. Google has inserted itself into the middle, yet is taking far more than a middleman-sized share of the money. It’s like finding out that half the money spent on TV advertising wasn’t going to the channels where the commercials appeared, but to the cable companies. Or that most of the money spent on newspaper ads — trying to reach newspaper readers — was going not to the newspapers but to the company that runs the presses where the papers get printed.

User tracking is fundamental to that. The desire to know as much as possible about the audience for advertising has always been the Pied Piper lure of the industry. And Google’s ability — along with Facebook’s — to actually provide that tracking (or the fraudulent illusion of it) is what enabled them to gobble up such an outsized portion of the world’s entire ad spend. The ads that appear on Google’s own properties are one thing: search result ads and YouTube ads come to mind. But Google and Facebook’s share of ad revenue spent trying to reach people on non-Google/non-Facebook properties seems fundamentally inequitable.

What if the answer is that there’s no way for Google (or Facebook) to make the sort of money they’ve been making in a technology and cultural environment that has become deeply concerned with online privacy? I think it’s possible that we can have a world where our online activities are far more private, or a world where Google and Facebook can maintain their current outsized share of worldwide ad spending, but not both.

A world where Google sees, say, 25 percent of the world’s ad spending sounds like an amazing business, in principle. Unless you’re comparing it to the world we’re in today, where they see 50 percent — then 25 percent looks like a collapse. Privacy-invasive user tracking is to Google and Facebook what carbon emissions are to fossil fuel companies — a form of highly profitable pollution that for a very long time few people in the mainstream cared about, but now, seemingly suddenly, very many care about quite a bit. 


NYT: ‘Colleges That Require Coronavirus Screening Tech Struggle to Say Whether It Works’ 

Natasha Singer and Kellen Browning, reporting for The New York Times:

Before the University of Idaho welcomed students back to campus last fall, it made a big bet on new virus-screening technology. The university spent $90,000 installing temperature-scanning stations, which look like airport metal detectors, in front of its dining and athletic facilities in Moscow, Idaho. When the system clocks a student walking through with an unusually high temperature, the student is asked to leave and go get tested for Covid-19.

But so far the fever scanners, which detect skin temperature, have caught fewer than 10 people out of the 9,000 students living on or near campus. Even then, university administrators could not say whether the technology had been effective because they have not tracked students flagged with fevers to see if they went on to get tested for the virus. […]

“So why are we bothering?” said Bruce Schneier, a prominent security technologist who has described such screening systems as “security theater” — that is, tools that make people feel better without actually improving their safety. “Why spend the money?”

Maybe “COVID theater” instead of “security theater”, but these technology purchases look like a whole lot of bullshit, just like the exposure notification apps for phones. We don’t need any of this. What we need are vaccinations, a few months of patience until more of those vaccinations are administered, and good serious plans for future outbreaks. If institutions like colleges want to spend money in the short term, they should spend the money on widespread COVID testing.

Apple Clarifies When It Locks Your Apple ID Because You Owe Them Money 

Statement from Apple to 9to5Mac, regarding yesterday’s much-publicized story about Dustin Curtis getting locked out of his Apple ID:

We apologize for any confusion or inconvenience we may have caused for this customer. The issue in question involved a restriction on the customer’s Apple ID that disabled App Store and iTunes purchases and subscription services, excluding iCloud. Apple provided an instant credit for the purchase of a new MacBook Pro, and as part of that agreement, the customer was to return their current unit to us. No matter what payment method was used, the ability to transact on the associated Apple ID was disabled because Apple could not collect funds. This is entirely unrelated to Apple Card.

Seems like a more reasonable situation than it first appeared, but, still, good to know that this is how it works.

The heart of Curtis’s saga is that he got instant credit for an old MacBook, didn’t send it back to Apple on time, and changed the bank account backing his credit card, so Apple’s charge for the unreturned trade-in device never got paid. When I, or family members, have sent devices in for trade-in (iPhones, usually), we haven’t been credited for the trade-in until after Apple has acknowledged receiving the old device.

It’s Now March and Most of Google’s Flagship iOS Apps Still Don’t Have Privacy Nutrition Labels 

Speaking of Google and tracking, the saga with Google’s iOS apps and their lack of privacy nutrition labels continues. Remember that (a) Google told TechCrunch back on January 5 they expected to add the privacy labels “this week or the next week”, and (b) because they haven’t added the labels, none of these popular apps have been updated since December. This includes Google Maps, Google Photos, the main Google search app, and Google Chrome. If you look at the version histories for these apps, until January, they were all generally updated at least once per month, and often several times per month.

YouTube, Google Home, and Google Drive, on the other hand, do have privacy nutrition labels. So whatever is going on here is not company-wide.

Correction: I originally had Gmail listed as one of Google’s apps that hadn’t been updated, but it was — just yesterday, after adding the privacy nutrition label a week ago. Google just seems to be adding these labels piecemeal, one at a time.

Google Claims It Will Replace Tracking With Privacy-Preserving Ads 

David Temkin, director of product management for ads privacy and trust, writing on the Google Blog:

Today, we’re making explicit that once third-party cookies are phased out, we will not build alternate identifiers to track individuals as they browse across the web, nor will we use them in our products.

We realize this means other providers may offer a level of user identity for ad tracking across the web that we will not — like PII graphs based on people’s email addresses. We don’t believe these solutions will meet rising consumer expectations for privacy, nor will they stand up to rapidly evolving regulatory restrictions, and therefore aren’t a sustainable long term investment. Instead, our web products will be powered by privacy-preserving APIs which prevent individual tracking while still delivering results for advertisers and publishers.

Honestly, I read this post twice and I don’t really know what it means. It sounds good on the surface, but cynically, it also sounds like an obfuscated way of saying that Google has figured out a way to continue tracking users but doesn’t think that counts as “tracking” because it’s all “first party” on Google properties:

We will continue to support first-party relationships on our ad platforms for partners, in which they have direct connections with their own customers. And we’ll deepen our support for solutions that build on these direct relationships between consumers and the brands and publishers they engage with.

The WSJ is taking Google at its word, with this lede:

Google plans to stop selling ads based on individuals’ browsing across multiple websites, a change that could hasten upheaval in the digital advertising industry.


Adoption Is Low for COVID-19 Exposure Apps, Rendering Them Effectively Useless

Rob Pegoraro, reporting for USA Today last week:

Fewer than half of U.S. states offer Android and iOS tools for the “exposure notification” system the two companies announced last April, which estimate other people’s proximity via anonymous Bluetooth beacons sent from phones with the same software.

Most people in participating states have yet to activate these apps. Those who do opt in and then test positive for the coronavirus that causes COVID-19 must opt in again by entering a doctor-provided verification code into their apps.

That second voluntary step generates anonymous warnings to other app users who got close enough to the positive user for long enough — again, as approximated from Bluetooth signals, not pinned down via GPS — to risk infection and to need a COVID-19 test.

So if your copy of one of these apps has remained silent, you’re not alone.

“Nobody in my circle has gotten the phone alert,” said Jeffrey Kahn, director of the Johns Hopkins Berman Institute of Bioethics in Baltimore and editor of a 2020 book on the ethics of digital contact tracing.

I’ve been curious about this for a while, so I asked on Twitter whether any of my followers had gotten notifications through this system. A few have! But I think the whole idea is fundamentally flawed. Even putting aside the fact that fewer than half of U.S. states offer the apps — a big issue to put aside — the only people who are using them are people who are conscientious about COVID exposure in the first place.

New Jersey has a population of about 9 million people. As of today, there have been about 800,000 cumulative reported cases of COVID-19 in the state. 600,000 users have used their app since it was launched. Via information displayed in the app itself, the total number of users who’ve uploaded their randomized/anonymized IDs after testing positive? 1,046. The total number of users who’ve been sent an exposure alert notification? 1,894. (My home state of Pennsylvania uses the same “COVID Alert” base app as New Jersey, but doesn’t seem to publish any numbers regarding usage.)

The whole endeavor seems pointless, looking at these numbers. If anything, it might be giving the users of these apps a false sense of security. If you use one of these apps and are exposed to someone who later tests positive, the odds that that person both uses the app and will report their positive test result seem not just low but downright infinitesimal. 
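
The matching step the USA Today piece describes (phones remember the anonymous Bluetooth identifiers they’ve been near, then check them against identifiers voluntarily uploaded by people who test positive) is conceptually simple. Here’s a toy sketch of just that step. It is not the actual Apple/Google Exposure Notification protocol, which uses rotating cryptographic keys and on-device risk scoring, and the 15-minute threshold below is an arbitrary placeholder.

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    beacon_id: str            # anonymous identifier overheard via Bluetooth
    duration_minutes: float   # how long the other phone stayed nearby

def exposures(sightings, reported_positive_ids, min_minutes: float = 15.0):
    """Return sightings that match identifiers uploaded by users who reported
    a positive test and that lasted long enough to count as an exposure."""
    positive = set(reported_positive_ids)
    return [s for s in sightings
            if s.beacon_id in positive and s.duration_minutes >= min_minutes]

if __name__ == "__main__":
    seen = [Sighting("a1", 22.0), Sighting("b2", 3.0), Sighting("c3", 45.0)]
    positives = ["b2", "c3"]   # periodically downloaded from the health authority
    for s in exposures(seen, positives):
        print(f"Possible exposure: {s.beacon_id} ({s.duration_minutes:g} minutes)")
```

The technology isn’t the hard part; the hard part, as the New Jersey numbers show, is getting anyone to upload those identifiers in the first place.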


Amazon Tweaks Their New iOS App Icon 

I know, opinions about app icons are like assholes — everyone has one and they generally stink. But Amazon’s previous iOS app icon was, objectively, terrible. For one thing, the only thing about it that branded it as Amazon’s was the word “Amazon”. When your icon is your name, you’ve probably got a problem. But the other problem was the shopping cart. The whole point of Amazon being an online store is that you don’t need a shopping cart. They’ve been stretching this metaphor for over two decades but it’s not a good one.

I love the idea of using a cardboard box as the icon. That’s the iconic real-world object we all associate with Amazon. Sure sometimes you’re just getting something boring like toothpaste or deodorant. But sometimes you get something great — like a new book you preordered a few months back and sort of forgot about. Sometimes a box from Amazon is fun. So hell yes, make the app icon a fun cardboard box.

My problem with the new icon isn’t that the tape looked like a Hitler mustache. (They could’ve solved that by just putting tape on both ends of the box — boxes need tape on the bottom too.) It’s that the ethos of utterly flat design robs the concept of fun. Look at how much better the MacOS standard installer package icon looks than Amazon’s new icon. Just for a boring installer. Amazon is doing the right thing by today’s design trend — it’s the trend that’s wrong, and designers need to start asserting otherwise.

In the land of the blind, the one-eyed man is king; in the land of militantly flat design, a little bit of depth will spark joy.

Apple Disabled Dustin Curtis’s iCloud, App Store, and Apple ID Accounts Over Rejected Chargeback 

Dustin Curtis:

The next time I tried to use my Apple Card, it was declined. Strange. I checked the Wallet app, and the balance was below the limit. I remembered the Apple support representative mumbling about Apple Card, so I did some digging through my email to see if I could find a connection.

As it turns out, my bank account number changed in January, causing Apple Card autopay to fail. Then the Apple Store made a charge on the card. Less than fifteen days after that, my App Store, iCloud, Apple Music, and Apple ID accounts had all been disabled by Apple Card.

We all make bets on these ecosystems. Even if you host your own email at your own domain name (to name just one service) you’re probably not running the actual server. And even if you are running the actual server hosting your email, you’re still placing a bet on the service provider / data center hosting the server.

I’ve got a lot of my digital life bet on iCloud in this way. It doesn’t seem like there should even be a path on Apple’s side of things from “you missed a payment on your Apple Card” to “we’re locking you out of your Apple ID”. Apple shutting off your Apple ID cuts you off from a lot.

Update: Apple statement on what actually happened and under what circumstances they’ll lock your Apple ID if you owe them money.

Weather Line Acquired by Mystery Buyer 

Weather Line:

In recent months, we were approached by a buyer. They saw the uniqueness of Weather Line and the strong foundation we’ve built. While we aren’t able to provide further details on their future plans for the app, we hope you can understand, and will look forward to it.

The acquisition means the app is going away. Today, we removed Weather Line from the App Store. For all existing Weather Line users, free and paid, the app will continue working for 13 months, until April 1, 2022.

Weather Line has been consistently excellent, and has been one of my very favorite apps since it debuted — an app that always stayed at the forefront of iOS design and forged a distinct identity with its infographic-focused presentation.

All good things must come to an end, but it feels particularly sad with Weather Line. Of all weather apps I’ve used — and I’ve used a lot — Weather Line is the best suited to iOS 14 widgets. Weather Line’s presentation has been widget-like since before there were widgets.

I’ll enjoy it while I can.

Instabug 

My thanks to Instabug for sponsoring last week at DF. Investigate, diagnose, and resolve issues up to 4 times faster with Instabug’s latest Application Performance Monitoring.

The Instabug SDK gives you the same level of profiling you get in Xcode Instruments, but from your live users, with a lightweight SDK and minimal footprint. Whether it’s a crash, a slow screen transition, a slow network call, or a UI hang, utilize performance patterns to fix issues faster and spot trends and spikes.

Find out what your app is missing and join the top mobile teams like Verizon, Vente-Privee, and Lyft relying on Instabug for app quality.

The Talk Show: ‘Pinkies on the Semicolon’ 

The state of the Mac, with special guest John Siracusa.

Sponsored by:

  • Mack Weldon: Reinventing men’s basics with smart design, premium fabrics, and simple shopping. Get 20% off your first order with code talkshow.
  • Hover: Find a domain name for your passion. Get 10% off your first purchase.
  • Squarespace: Make your next move. Use code talkshow for 10% off your first order.
  • Flatfile: Spend less time formatting spreadsheet data, and more time using it.

Brazilian Rainforest Plots Are Being Sold Illegally via Facebook Marketplace Ads 

Joao Fellet and Charlotte Pamment, reporting for BBC News:

Parts of Brazil’s Amazon rainforest are being illegally sold on Facebook, the BBC has discovered. The protected areas include national forests and land reserved for indigenous peoples. Some of the plots listed via Facebook’s classified ads service are as large as 1,000 football pitches.

Facebook said it was “ready to work with local authorities”, but indicated it would not take independent action of its own to halt the trade.

Just in case you hadn’t been angered by Facebook this week.

MailTrackerBlocker for Apple Mail on MacOS 

Open source plugin for Apple Mail on MacOS, by Aaron Lee:

MailTrackerBlocker is a plugin (mailbundle) for the default Mail app built-in to macOS. Email marketers and other interests often embed these trackers in HTML emails so they can track how often, when and where you open your emails. This plugin works by stripping a good majority of these spy pixels out of the HTML before display, rendering the typical advice of disabling “load remote content in messages” unnecessary.

Browse your inbox privately with images displayed once again.

There’s a simple installer to download, and the project’s GitHub page has instructions for installing via Homebrew. I’ve been running it since Wednesday, and it seems to do just what it says on the tin — it blocks many (most?) marketing and newsletter trackers without requiring you to turn off all remote images. When it does block something, there’s a very subtle indication — the small “ⓧ” button turns blue. Click that button and you get an alert telling you what it blocked. Simple and unobtrusive.
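
I haven’t read MailTrackerBlocker’s source, so what follows is not its actual implementation; it’s a minimal sketch of the general approach a blocker like this can take: scan the HTML, drop image tags that look like 1×1 tracking pixels or point at known tracker hosts, and leave everything else alone. The KNOWN_TRACKER_HOSTS list below is a made-up placeholder.

```python
import re

# Hypothetical placeholders; a real blocker maintains a list of tracker domains.
KNOWN_TRACKER_HOSTS = ("open.tracker.example", "pixel.mailer.example")

IMG_TAG = re.compile(r"<img\b[^>]*>", re.IGNORECASE)

def looks_like_spy_pixel(tag: str) -> bool:
    """Heuristics: 1x1 dimensions, display:none styling, or a known tracker host."""
    tiny = re.search(r'\b(?:width|height)\s*=\s*["\']?1["\']?(?!\w)', tag, re.IGNORECASE)
    hidden = re.search(r"display\s*:\s*none", tag, re.IGNORECASE)
    tracker = any(host in tag for host in KNOWN_TRACKER_HOSTS)
    return bool(tiny or hidden or tracker)

def strip_spy_pixels(html: str) -> str:
    """Remove img tags that match the spy-pixel heuristics; keep everything else."""
    return IMG_TAG.sub(lambda m: "" if looks_like_spy_pixel(m.group(0)) else m.group(0), html)

if __name__ == "__main__":
    message = ('<p>Big sale this weekend!</p>'
               '<img src="https://open.tracker.example/o.gif?u=123" width="1" height="1">'
               '<img src="https://example.com/product.jpg" width="600" alt="product">')
    print(strip_spy_pixels(message))
```

Regex over HTML is fragile, of course, which is presumably why tools like MailTrackerBlocker and Hey also lean on curated lists of known tracker patterns rather than heuristics alone.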

MailTrackerBlocker is a cool project Lee has made available for free, but he has a sponsor page where you can send some dough to thank him. (I sent him a one-time donation via PayPal — you should too if you dig this as much as I do.)

Spoonbill 

Speaking of Justin Duke, in addition to Buttondown, he also created and runs Spoonbill, a nifty free service that lets you track changes to the bios of the people you follow on Twitter:

How it works.

  1. First, you sign up. (Duh.)

  2. Then we look at all the folks you’re following on Twitter.

  3. We check every couple minutes to see if they’ve changed their profile information.

  4. If they have, we record it!

  5. Then, every morning (or every week), we send you an email with all the changes.

Daily was too much for me, perhaps because I follow too many accounts on Twitter, but once a week is perfect. And you can subscribe via RSS instead of email — this is a very natural service for RSS.

Mailcoach: Another Self-Hosted Newsletter Service 

“Mailcoach is a self-hosted email marketing platform that integrates with services like Amazon SES, Mailgun, Postmark or Sendgrid to send out bulk mailings affordably.”

Mailcoach lets you disable tracking with a checkbox, and the next version will have tracking off by default.

Sendy: Self-Hosted Newsletter Service Built Atop Amazon SES 

Sendy is an interesting newsletter service recommended by a longtime DF reader:

Sendy is a self hosted email newsletter application that lets you send trackable emails via Amazon Simple Email Service (SES). This makes it possible for you to send authenticated bulk emails at an insanely low price without sacrificing deliverability.

You need to host the PHP application yourself (more or less like self-hosting, say, WordPress), but the emails go out via Amazon’s service. Sendy makes it easy to disable tracking pixels, and, even if you do track subscribers, the tracking information never involves any third parties, including Sendy. Just you.

Sendy’s big pitch isn’t privacy but cost: they claim to be 100-200 times cheaper than MailChimp or Campaign Monitor.

Buttondown: Newsletter Service That Allows Opting Out of Tracking 

It’s hard to find newsletter services that even allow you — the purveyor of the newsletter — not to track your subscribers. Buttondown — from Justin Duke — is one option, and it looks pretty sweet. (Markdown editing, for example.) From Buttondown’s privacy feature page:

Many businesses thrive on the concept of collecting data about individuals based on their email addresses and inbox usage. (You can read about that here.) Buttondown is different. As a bootstrapped business, I don’t need to engage with data on that level. Your information is yours, and yours alone.

Buttondown collects the standard bevy of email analytics: IP addresses, open and click events, client information. Buttondown sends that to absolutely nobody besides, well, you, the beloved customer. And if you want to completely opt out, you can.

By default, Buttondown seems just as privacy-intrusive as all the other newsletter providers:

Track Opens and Clicks — Per-email analytics mean you get an easy funnel of how many folks are engaging with your emails and what content they’re interested in.

Translated to plain English: “Spy tracking allows you to know when each of your subscribers opens and reads your newsletter, including the ability to creep on them individually.” Buttondown’s privacy “win” is that it at least allows you to turn tracking off with a simple checkbox. Most services don’t. I can’t find any hosted service that doesn’t offer tracking, period, or even defaults to no tracking.

[Update: Justin Duke, on Twitter: “thanks for the buttondown mention! agreed that defaulting to opt out of tracking automatically is better: the current default wasn’t a deliberate choice so much as an artifact of the initial behavior’s implementation.” He’s changing the default to not use analytics, as of tonight. Nice!]

One message I’ve heard from folks who would know is that two of the reasons for the ubiquitous use of tracking pixels in newsletters are anti-spam tools (anti-anti-spam tools, really) and the expense of sending emails to people who never read them. Newsletters being flagged as spam — especially by major players like Gmail and Hotmail — is a never-ending game of whack-a-mole, and spy pixels help alert newsletter providers that their messages are being flagged. Expense-wise, those who send free newsletters want to cull from their lists people who never open them or click any of the links. Sending newsletters to thousands (let alone tens of thousands or more) of subscribers is, relatively speaking, expensive.

I’m sympathetic, but that’s a YP, not an MP, so fuck you and your tracking pixels. I’m blocking them and you should too.

But that’s why the world needs a company like Apple to take action. If Apple were to kneecap email tracking in Mail for Mac and iOS, the industry would have to adapt.

Twitter Teases Upcoming Features: Paid ‘Super Follows’ and Community Groups 

Jacob Kastrenakes, reporting for The Verge:

The payment feature, called Super Follows, will allow Twitter users to charge followers and give them access to extra content. That could be bonus tweets, access to a community group, subscription to a newsletter, or a badge indicating your support. In a mockup screenshot, Twitter showed an example where a user charges $4.99 per month to receive a series of perks. Twitter sees it as a way to let creators and publishers get paid directly by their fans.

Twitter also announced a new feature called Communities, which appear to be its take on something like Facebook Groups. People can create and join groups around specific interests — like cats or plants, Twitter suggests — allowing them to see more tweets focused on those topics. Groups have been a huge success for Facebook (and a huge moderation problem, too), and they could be a particularly helpful tool on Twitter, since the service’s open-ended nature can make it difficult for new users to get started on the platform.

Both these features sound great. Ben Thompson and I encouraged Twitter to do something like “Super Follows” a few weeks ago on Dithering. Almost certainly, though, all of this will only work in Twitter’s own client, not third-party apps like Tweetbot and Twitterrific.

Twitter hasn’t said how the economics will work — what cut of the money they’re going to take — but last month, when they acquired Revue, a paid-newsletter rival to Substack, they cut Revue’s take to just 5 percent. (Substack takes 10.)

‘Steve Jobs Stories’ on Clubhouse 

Computer History Museum:

Chris Fralic, Steven Levy, Esther Dyson, Mike Slade, John Sculley, Seth Godin, Andy Cunningham, Dan’l Lewin, Doug Menuez, Regis McKenna, Andy Hertzfeld, and Steven Rosenblatt share their “Steve Jobs Stories” in honor of what would have been the Apple cofounder’s 66th birthday.

I missed the first half of this show on Clubhouse, but caught the second half live. Easily the best event I’ve heard on Clubhouse. Good stories, well told. Nice job by the Computer History Museum getting this recorded and posted to YouTube for posterity.

El Toro ‘One-to-One IP Targeting’ 

“Ad tech” (read: spyware) company El Toro is just one company in an industry full of competitors, but their description of their capabilities struck me as particularly flagrant in its utter disregard for privacy:

As a marketing organization focused on sales not metrics, El Toro’s ad tech brings the location-specific accuracy of direct mail to digital advertising. Through our patented IP Targeting technology we target digital ads to your customer by matching their IP address with their physical address, bringing a wide variety of banner and display ads to the sites the targeted customer visits on the Internet.

Specifically, El Toro offers: Targeting without having to use cookies, census blocks, or geo-location tools.

They claim the ability not just to match your IP address to a general location, but to your exact home street address, and from there to specific devices within your home. Their pitch to would-be advertisers is that they can target you by IP address the same way marketers send all those print catalogs to your house. From their above-linked IP Targeting website:

The El Toro patented algoirthm [sic] uses 38+ points of data to match an IP to a household with 95% accuracy.

Do I believe they can match IPs to street addresses with 95 percent accuracy? No. I wouldn’t believe a word out of these guys’ mouths, to be honest. But the fact that they can do it with any degree of accuracy is a problem that needs to be solved.

Why doesn’t Apple build a VPN into its OSes? Or at least offer one as part of paid iCloud accounts? At this point, if privacy truly is a paramount concern, it might be necessary to do everything over a trusted VPN. IP addresses are inherently not private.

From the DF Archive: Superhuman and Email Privacy 

Yours truly, back in July 2019:

They call them “read receipts”, and functionally they do work like read receipts, insofar as they indicate when you read a message. But real email read receipts are under the recipient’s control, and they’re a simple binary flag, read or unread  —  they don’t tell the sender how many times or when you view a message.

This post was about Superhuman in particular, but it applies to all email services using tracking pixels. Email has an official “read receipt” feature, a feature that is under the recipient’s control, as it should be. These spy pixels are a surreptitious circumvention.

I know that mailing list software generally includes tracking pixels. I don’t think that’s ethical either. On a personal level, though, with Superhuman, tracking when and how many times a recipient views a message is simply absurdly wrong.

It’s also something the vast, overwhelming majority of people don’t even realize is possible. I’ve told the basic Superhuman tracking story to a few people over the last few weeks, and asked whether they realized this was possible; all of them expressed shock and many of them outrage as well. Email should be private, and most people assume, incorrectly, that it is. You have to be a web developer of some sort to understand how this is possible. Email is supposed to be like paper mail  —  you send it, they get it, and you have no idea whether they read it or not. It bounces back to you if they never even receive it, say, because you addressed it incorrectly. The original conception of email is completely private.

But also, the original conception of email is that messages are plain text. No fonts, no styles, just plain text, with optional attachments. But those attachments are embedded in the message, not pulled from a server when the message is viewed.

Once we allowed email clients to act as de facto web browsers, loading remote content from servers when messages are viewed, we opened up not just a can of worms but an entire case of canned worms. Every privacy exploit for a web browser is now a privacy exploit for email. But it’s worse, because people naturally assume that email is completely private.

It’s a little depressing re-reading this piece today. Everything I’m arguing today, I argued then. Email privacy in the face of these trackers remains an industry-wide disgrace.


Apple Mail and Hidden Tracking Images

In my piece yesterday about email tracking images (“spy pixels” or “spy trackers”), I complained about the fact that Apple — a company that rightfully prides itself on its numerous features protecting user privacy — offers no built-in defenses against email tracking.

A slew of readers wrote to argue that Apple Mail does offer such a feature: the option not to load any remote resources at all. It’s a setting for Mail on both Mac and iOS, and I know about it — I’ve had it enabled for years. But this is a throwing-the-baby-out-with-the-bathwater approach. What Hey offers — by default — is the ability to load regular images automatically, so your messages look “right”, but block all known images from tracking sources (which are generally 1×1 px invisible GIFs).

Typical users are never going to enable Mail’s option not to load remote content. It renders nearly all marketing messages and newsletters as weird-looking at best, unreadable at worst. And when you get a message whose images you do want to see, when you tell Mail to load them, it loads all of them — including trackers. Apple Mail has no knowledge of spy trackers at all, just an all-or-nothing ability to turn off all remote images and load them manually.

Turning off Mail’s “Load remote content in messages” setting is a great solution to bandwidth problems — remember it the next time you’re using Wi-Fi on an airplane, for example. It’s a terrible solution to tracking, though. No one would call it a good solution to tracking if Safari’s only defense were an option not to load any images at all until you manually click a button in each tab to load them all. But that’s exactly what Apple offers with Mail. (Safari doesn’t block tracking images, but Safari does support content blocking extensions that do — one solution for Mail would be to enable the same content blocker extensions in Mail that are enabled in Safari.)

How does Hey know which images are trackers and which are “regular” images? They can’t know with absolute certainty. But they’ve worked hard on this feature, and have an entire web page promoting it. From that page:

HEY manages this protection through several layers of defenses. First, we’ve identified all the major spy-pixel patterns, so we can strip those out directly. When we find one of those pesky pixels, we’ll tell you exactly who put it in there, and from what email application it came. Second, we bulk strip everything that even smells like a spy pixel. That includes 1x1 images, trackers hidden in code, and everything else we can do to protect you. Between those two practices, we’re confident we’ll catch 98% of all the tracking that’s happening out there.

But even if a spy pixel sneaks through our defenses (and we vow to keep them updated all the time!), you’ll have an effective last line of defense: HEY routes all images through our own servers first, so your IP address never leaks. This prevents anyone from discovering your physical location just by opening an email. Like VPN, but for email.

Apple should do something similar: identify and block spy trackers in email by default, and route all other images through an anonymizing proxy service.1 And, like Hey, they should flag all emails containing known trackers with a shame badge. It’s a disgraceful practice that has grown to be accepted industry-wide as standard procedure, because the vast majority of users have no idea it’s even going on. Through reverse IP address geolocation, newsletter and marketing email services track not just that you opened their messages, but when you opened them, and where you were (to the extent that your IP address reveals your location).
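
To make concrete what’s happening on the sender’s side, here’s a minimal sketch of a tracking-pixel endpoint using only Python’s standard library. The URL, query parameter, and log format are invented for illustration; real email service providers do the same basic thing at scale, typically adding reverse-IP geolocation to every logged open.

```python
import base64
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# A transparent 1x1 GIF. The sender embeds it per recipient, e.g.:
#   <img src="https://t.example.com/o.gif?r=recipient-123" width="1" height="1">
PIXEL = base64.b64decode("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        recipient = query.get("r", ["unknown"])[0]
        # One line in this log per open: who, when, from what IP, with what client.
        print(f"{datetime.now(timezone.utc).isoformat()} open={recipient} "
              f"ip={self.client_address[0]} ua={self.headers.get('User-Agent', '')!r}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("", 8000), PixelHandler).serve_forever()
```

Proxying images the way Hey does defeats the IP and location half of that log line; stripping the pixel entirely defeats all of it.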

No thanks. Apple should offer defenses against email tracking just as robust as Safari’s defenses against web tracking.2 


  1. Gmail has been proxying remote images in messages since 2013. ↩︎

  2. Don’t get me started on how predictable this entire privacy disaster was, once we lost the war over whether email messages should be plain text only or could contain embedded HTML. Effectively all email clients are web browsers now, yet don’t have any of the privacy protection features actual browsers do. ↩︎︎


The Apple Store App Has an Easter Egg 

Search for “10 years” and you get a fun animation. Any others?


The Hidden Message in the Parachute of NASA’s Mars Rover 

Joey Roulette, writing for The Verge:

The parachute that helped NASA’s Perseverance rover land on Mars last week unfurled to reveal a seemingly random pattern of colors in video clips of the rover’s landing. But there was more to the story: NASA officials later said it contained a hidden message written in binary computer code.

Internet sleuths cracked the message within hours. The red and white pattern spelled out “Dare Mighty Things” in concentric rings. The saying is the Perseverance team’s motto, and it is also emblazoned on the walls of Mission Control at NASA’s Jet Propulsion Laboratory (JPL), the mission team’s Southern California headquarters.

The parachute’s outer ring appears to translate to coordinates for JPL: 34°11′58″ N 118°10′31″ W.

Tonya Fish posted a handy guide on Twitter (also available as a PDF) explaining how the code works. (Via Kottke.)
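
If you want to play along at home, here’s a toy decoder for the general scheme: binary chunks giving each character’s position in the alphabet, with A as 1. The 7-bit chunk size and the zeros-for-spaces convention are my own simplifications; see Fish’s guide for the actual ring-by-ring layout, which also encodes the numerals in the JPL coordinates.

```python
def encode(text: str, chunk: int = 7) -> str:
    """Encode letters as their 1-based alphabet position in fixed-width binary;
    anything that isn't A-Z (e.g. a space) becomes all zeros."""
    def value(c: str) -> int:
        return ord(c) - ord("A") + 1 if "A" <= c <= "Z" else 0
    return "".join(format(value(c), f"0{chunk}b") for c in text.upper())

def decode(bits: str, chunk: int = 7) -> str:
    out = []
    for i in range(0, len(bits), chunk):
        value = int(bits[i:i + chunk], 2)
        out.append(chr(ord("A") + value - 1) if 1 <= value <= 26 else " ")
    return "".join(out)

if __name__ == "__main__":
    bits = encode("DARE MIGHTY THINGS")
    print(bits)
    print(decode(bits))   # -> DARE MIGHTY THINGS
```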

It seems sad to me that NASA and JPL are willing to have some fun with a clever Easter egg on a Mars rover, yet Apple, of all companies, no longer does any Easter eggs at all. Computers are supposed to be fun.

BBC News: ‘Spy Pixels in Emails Have Become Endemic’ 

Speaking of Hey, BBC News ran a piece on email spy pixels last week:

The use of “invisible” tracking tech in emails is now “endemic”, according to a messaging service that analysed its traffic at the BBC’s request. Hey’s review indicated that two-thirds of emails sent to its users’ personal accounts contained a “spy pixel”, even after excluding for spam. […]

Defenders of the trackers say they are a commonplace marketing tactic. And several of the companies involved noted their use of such tech was mentioned within their wider privacy policies.

“It’s in our privacy policy” is nonsense when it comes to email spy pixels. It’s nonsense for most privacy policies, period, because most privacy policies are so deliberately long, opaque, and abstruse as to be unintelligible. But with email they’re absurd. The recipient of an email containing a tracking pixel never agreed to any privacy policy from the sender.

And “it’s a commonplace marketing tactic” is not a defense. It’s an excuse, but it’s a shitty one. It just shows how out of control the entire tracking industry is. Their justification for all of it is, effectively, “It’s pervasive so it must be OK.” That’s like saying back in the 1960s that most people smoke so it must be safe. Or that most people don’t wear seat belts so that must be safe.

Email pixels can be used to log:

  • if and when an email is opened
  • how many times it is opened
  • what device or devices are involved
  • the user’s rough physical location, deduced from their internet protocol (IP) address — in some cases making it possible to see the street the recipient is on

Hey’s default blocking of spy pixels — along with displaying a prominent badge shaming the sender for using them — is one of its best features. Apple should take a long hard look at Mail and the way that it does nothing to protect users’ privacy from these trackers. They’re insidious and offensive.

‘Hey, World!’ 

Jason Fried, on an experimental blogging service Basecamp has built into their email service Hey:

So we set out to do it. To test the theory. And over the last few weeks we built it into HEY, our new email service. We’re calling the feature HEY World. This post you’re reading right now is the world’s first HEY World post. And I published it by simply emailing this text directly to [email protected] from my [email protected] account. That was it.

For now, this remains an experiment. I’ve got my own HEY World blog, and David has his. We’re going to play for a while. And, if there’s demand, we’ll roll this out to anyone with a personal @hey.com account. It feels like Web 1.0 again in all the right ways. And it’s about time.

Speaking of Web 1.0, HEY World pages are lightning fast. No javascript, no tracking, no junk. They’re a shoutout to simpler times. Respect.

You can subscribe to a Hey World blog via email (of course) or RSS. Feels like simple stuff — like RSS — is experiencing a renaissance.

‘Hello, World’ 

MIT’s Computer Science & Artificial Intelligence Lab:

Today’s the day that “hello world” said “hello world!”

The term was coined in a textbook published #otd in 1978: “C Programming Language,” written by Brian Kernighan and Dennis Ritchie.

Tweeted yesterday, so it’s no longer “on this day”, sorry, but interesting history nonetheless.

I still write “Hello, world” as a first exercise in any new language or programming environment. Not a superstition per se, but more like a talisman. Just seems like the right thing to do.

The C Programming Language is a wonderfully written book. It explains the basics of C better than anything I’ve ever seen. C is a weird, hard language, but K&R describe it with joy. It’s a serious book written in a conversational style.

‘I’m Being Censored, and You Can Read, Hear, and See Me Talk About It in the News, on the Radio, and on TV’ 

Eli Grober, writing for McSweeney’s:

Hi there, thanks for reading this. I’m being censored. That’s why I’m writing a piece in a major publication that you are consuming easily and for free. Because I am being absolutely and completely muzzled.

Also, I just went on a massively-watched TV show to let you know that my voice is being down-right suffocated. I basically can’t talk to anyone. Which is why I’m talking to all of you.

As Jeanetta Grace Susan has convincingly argued, conservative voices are being silenced.

500,000 Lives Lost 

Staggering, sobering data visualization from Reuters.

Mux Video 

My thanks to Mux for once again sponsoring DF last week. Mux Video is an API to powerful video streaming — think of it as akin to Stripe for video — built by the founders of Zencoder and creators of Video.js, and a team of ex-YouTube and Twitch engineers. Take any video file or live stream and make it play beautifully at scale on any device, powered by magical-feeling features like automatic thumbnails, animated GIFs, and data-driven encoding decisions.

Spend your time building what people want, not drudging through ffmpeg documentation.


How ‘Unlock With Apple Watch’ While Wearing a Face Mask Works in iOS 14.5

I don’t generally write about features in beta versions of iOS. In fact, I don’t generally install beta versions of iOS, at least on my main iPhone. But the new “Unlock With Apple Watch” feature, which kicks in when you’re wearing a face mask, was too tempting to resist.

First things first: to use this feature, you need to install iOS 14.5 on your iPhone and WatchOS 7.4 on your Apple Watch (both of which are, at this writing, on their second developer betas). So far, for me, these OS releases have been utterly reliable. Your mileage may vary, and running a beta OS on your daily-carry devices is always at your own risk. But I think the later we go in OS release cycles, the more stable the betas tend to be. Over the summer, between WWDC and the September (or October) new iPhone event, iOS releases can be buggy as hell. The x.1 releases are usually when things settle down, and the releases after that tend to be very stable even in beta — Apple uses them to fix bugs and to add new features that are already solid. If anything, I think iOS 14.5 is very stable technically, and only volatile politically, with the new opt-in requirement for targeted ad user tracking.

After using this feature for a few weeks now, I can’t see going back. As the designated errand runner in our quarantined family, I’ve found it to be a game changer. Prior to iOS 14.5, using a Face ID iPhone while wearing a face mask sucked. Every single time you unlocked your phone, you needed to enter the passcode/passphrase. The longer your passcode, the more secure it is (of course), but the more annoying it is to enter incessantly.

“Unlock With Apple Watch” eliminates almost all of that annoyance. It’s that good. It’s optional (as it should be), and off by default (also as it should be, for reasons explained below). It’s easy to turn on in Settings on your iPhone: go to Face ID & Passcode, enter your passcode, and scroll down to the “Unlock With Apple Watch” section, where you’ll find toggles for each Apple Watch (running WatchOS 7.4 or later) paired with your iPhone.

Here is how the feature seems to work; a rough sketch of this logic in code follows the list.

  1. Does Face ID work normally? I.e. is the face in front of the phone you, the owner, and are you not wearing a mask? If so, unlock normally. Normal non-mask Face ID is unchanged when this feature is enabled.

  2. If Face ID fails, is there a face wearing a mask in front of the phone? If so, is an authorized Apple Watch in a secure state (i.e. the watch itself is unlocked and on your wrist) and very close to the iPhone? If so, unlock, and send a notification to the watch stating that the watch was just used to unlock this iPhone. The notification sent to the watch includes a button to immediately lock the iPhone.
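
Here’s that observed behavior expressed as code. This is my reconstruction from using the feature, not Apple’s implementation, and all the names are made up.

```python
from enum import Enum, auto

class Result(Enum):
    UNLOCK = auto()
    UNLOCK_VIA_WATCH = auto()   # also notifies the watch, with a "Lock iPhone" button
    REQUIRE_PASSCODE = auto()

def attempt_unlock(face_matches_owner: bool,
                   masked_face_present: bool,
                   watch_unlocked_on_wrist: bool,
                   watch_within_about_a_meter: bool,
                   watch_in_sleep_mode: bool) -> Result:
    # Step 1: normal Face ID, unchanged when the feature is enabled.
    if face_matches_owner:
        return Result.UNLOCK
    # Step 2: any masked face, plus a nearby watch that's unlocked, on the wrist,
    # and not in Sleep mode. Note it does not check that the masked face is yours.
    if (masked_face_present and watch_unlocked_on_wrist
            and watch_within_about_a_meter and not watch_in_sleep_mode):
        return Result.UNLOCK_VIA_WATCH
    return Result.REQUIRE_PASSCODE

# A masked face that isn't yours still unlocks the phone if your watch is nearby,
# which is exactly why the mandatory wrist notification matters.
print(attempt_unlock(False, True, True, True, False))
```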

Because it’s a two-step process (step #1 first, then step #2), it does take a bit longer than Face ID without a mask (which is really just step #1). But it works more than fast enough to be pleasantly convenient. Regular Face ID is so fast you forget it’s even there; “Unlock With Apple Watch” is slow enough that you notice it’s there, but fast enough that it isn’t a bother.

It’s important to note that in step #2, it works with any face wearing a mask. It’s not trying to do a half-face check that your eyes and forehead look like you, or anything like that. My iPhone will unlock if my wife or son is the face in front of my iPhone — but only if they’re wearing a mask, and only if my Apple Watch is very close to the phone. I’d say less than 1 meter — pretty much what you’d expect the maximum distance to be between a watch on one wrist and an iPhone in the other hand.

When this feature kicks in, you always get a wrist notification telling you it happened, with just one button: “Lock iPhone”. If you tap this button, the iPhone is immediately hard-locked and requires your passcode to be re-entered even if you take your mask off. (It’s the same hard-locked mode you can put your iPhone into manually by pressing and holding the power button and one of the volume buttons — a good tip to remember when going through a security checkpoint or any other potential encounter with law enforcement.)

I’m not sure if anyone will be annoyed by this mandatory wrist notification, but they shouldn’t be, and it shouldn’t be optional. You want this notification every time to prevent anyone from surreptitiously unlocking your iPhone near you, just by putting a face mask on.

Also, if your Apple Watch is in Sleep mode (the bed icon in WatchOS’s Control Center), the feature does not work.

It’s occasionally slow. And two or three times, I got a message on my iPhone that my watch was too far away for the feature to work, even though I raised my watch-wearing wrist next to the phone. These hiccups were rare, and to my recollection, I only ran into them with iOS 14.5 beta 1, not beta 2.

Even in the worst-case scenario, where the feature doesn’t work, you’re no worse off than you were before the feature existed: you simply have to manually enter your phone’s passcode.

Last but not least, the “Unlock With Apple Watch” feature very specifically seems to be looking for a face wearing a face mask. The feature does not kick in if Face ID fails for any other reason — like, say, if you’re wearing sunglasses with lenses that Face ID can’t see through. (I wish they’d make this work with sunglasses, too.)

Addenda

Throwing Shade: There seems to be some confusion over what I’m asking for w/r/t sunglasses. Face ID has always supported an option to turn off “Require Attention for Face ID”. When off, Face ID will work even if it doesn’t detect your eyes looking at the screen. (It’s an essential accessibility feature for people with certain vision problems.) If you own sunglasses that the iPhone’s TrueDepth camera system can’t “see” through, you can disable “Require Attention for Face ID” to allow Face ID to work while you’re wearing your shades.

This is far from ideal, though, because it weakens Face ID all the time, not just when you’re wearing sunglasses. What’s nice about the new “Unlock With Apple Watch” feature is that it only applies when you’re wearing a mask and your Apple Watch. What I’d like to see Apple support is an extension of “Unlock With Apple Watch” that does the same thing for sunglasses that it currently does for face masks. I’ve heard from readers who have trouble with Face ID when wearing their motorcycle helmets, too, and I’m sure there are other examples. Basically, I’d like to see Apple add the option of trusting your Apple Watch to unlock your iPhone in more scenarios where your face can’t be recognized. My request is very different from, and more secure than, the existing “Require Attention” feature.

(Speaking of which, while wearing a mask, “Unlock With Apple Watch” does not check for whether your eyes are looking at the display, regardless of your setting for “Require Attention for Face ID”. Again, this makes sense, because it’s not Face ID — “Unlock With Apple Watch” is an alternative authentication method that kicks in after Face ID has failed.)

Apple Pay: I didn’t mention that “Unlock With Apple Watch” does not work with Apple Pay. This makes sense, because however secure “Unlock With Apple Watch” is (and I think it’s quite secure), it’s not as secure as Face ID authenticating your actual face. For payments, you obviously want the highest level of secure authentication.

Also, for Apple Pay, if you’re wearing your Apple Watch (a requirement for “Unlock With Apple Watch”), you can just use your Apple Watch for Apple Pay.

It also doesn’t work with apps that use Face ID for authentication within them. Banking apps, for example, or unlocking locked notes in Apple Notes. But this makes sense too — the feature is specifically called “Unlock With Apple Watch”. It unlocks your phone, that’s it. Anything else that requires Face ID for secure authentication still requires Face ID. 


The Talk Show: ‘Peak Hubris’ 

Christina Warren returns to the show to talk about Apple Car, Apple TV, Clubhouse, and Bloomberg hamfistedly revisiting “The Big Hack”.

Sponsored by:

  • Squarespace: Make your next move. Use code talkshow for 10% off your first order.
  • Linode: Instantly deploy and manage an SSD server in the Linode Cloud. New accounts can get $100 credit.
  • Flatfile: Spend less time formatting spreadsheet data, and more time using it.

Tim Berners-Lee Worries Australian Law Could Make the Web ‘Unworkable’ 

Anthony Cuthbertson, reporting for The Independent:

“Specifically, I am concerned that that code risks breaching a fundamental principle of the web by requiring payment for linking between certain content online,” Berners-Lee told a Senate committee scrutinizing a bill that would create the New Media Bargaining Code.

If the code is deployed globally, it could “make the web unworkable around the world”, he said.

It’s a question dividing proponents and critics of the proposed Australian law: does it effectively make Google and Facebook “pay for clicks” and might it be the beginning of the end of free access?

I don’t know what this Berners-Lee guy knows about the web, but I agree.

Rich Mogull on Apple’s Updated 2021 Platform Security Guide 

Rich Mogull, writing at TidBITS, on Apple’s 2021 Platform Security Guide:

As wonderful as the Apple Platform Security guide is as a resource, writing about it is about as easy as writing a hot take on the latest updates to the dictionary. Sure, the guide has numerous updates and lots of new content, but the real story isn’t in the details, but in the larger directions of Apple’s security program, how that impacts Apple’s customers, and what it means to the technology industry at large.

From that broader perspective, the writing is on the wall. The future of cybersecurity is vertical integration. By vertical integration, I mean the combination of hardware, software, and cloud-based services to build a comprehensive ecosystem. Vertical integration for increased security isn’t merely a trend at Apple, it’s one we see in wide swaths of the industry, including such key players as Amazon Web Services. When security really matters, it’s hard to compete if you don’t have complete control of the stack: hardware, software, and services.

Apple Cracks Down on Apps With ‘Irrationally High Prices’ as App Store Scams Are Exposed 

Guilherme Rambo, writing for 9to5Mac:

App Store scams have recently resurfaced as a developer exposed several scam apps in the App Store making millions of dollars per year. Most of these apps exploit fake ratings and reviews to show up in search results and look legit, but trick users into getting subscriptions at irrationally high prices.

It looks like Apple has started to crack down on scam attempts by rejecting apps that look like they have subscriptions or other in-app purchases with prices that don’t seem reasonable to the App Review team.

From the rejection letter sent by the App Store review team:

Customers expect the App Store to be a safe and trusted marketplace for purchasing digital goods. Apps should never betray this trust by attempting to rip-off or cheat users in any way.

Unfortunately, the prices you’ve selected for your app or in-app purchase products in your app do not reflect the value of the features and content offered to the user. Charging irrationally high prices for content or services with limited value is a rip-off to customers and is not appropriate for the App Store.

Specifically, the prices for the following items are irrationally high:

This is exactly the sort of crackdown I’ve been advocating for years. A bunco squad that looks for scams, starting with apps that (a) have high-priced in-app purchases and subscriptions, and (b) are generating a lot of money. Ideally Apple will crack down on all scams, but practically speaking, all that matters is that they identify and eliminate successful scams — and identify the scammers behind them and keep them out of the store.

Developer Kosta Eleftheriou has been righteously leading a sort of indie bunco squad for a few weeks, identifying a slew of scams (usually involving apps with clearly fraudulent ratings, too).

Nomination for Lede of the Year 

Ashley Parker, reporting for The Washington Post:

Usually, it takes at least one full day in Cancun to do something embarrassing you’ll never live down.

But for Ted Cruz (R-Tex.), it took just 10 hours — from when his United plane touched down at Cancun International Airport at 7:52 p.m. Wednesday to when he booked a return flight back to Houston around 6 a.m. Thursday — for the state’s junior senator to apparently realize he had made a horrible mistake.

Give Cruz credit for this: he’s brought the whole nation together in unity.

Pfizer’s Vaccine Works Well With One Dose 

The New York Times:

A study in Israel showed that the vaccine is robustly effective after the first shot, echoing what other research has shown for the AstraZeneca vaccine and raising the possibility that regulators in some countries could authorize delaying a second dose instead of giving both on the strict schedule of three weeks apart as tested in clinical trials. […]

Published in The Lancet on Thursday and drawing from a group of 9,100 Israeli health care workers, the study showed that Pfizer’s vaccine was 85 percent effective 15 to 28 days after receiving the first dose. Pfizer and BioNTech’s late-stage clinical trials, which enrolled 44,000 people, showed that the vaccine was 95 percent effective if two doses were given three weeks apart. […]

Pfizer and BioNTech also announced on Friday that their vaccine can be stored at standard freezer temperatures for up to two weeks, potentially expanding the number of smaller pharmacies and doctors’ offices that could administer the vaccine, which now must be stored at ultracold temperatures.

The U.S. needs to change its policy and get more shots into more arms as quickly as possible. Administer the second doses in the summer, after a majority of Americans have gotten their first. The current policy is simply wrong, given the data, and is halving the rate at which we can achieve herd immunity.

Tucker Carlson Detects Other Suspicious Behaviors 

If we were to debate which newspaper is better, The New York Times or The Washington Post, Alexandra Petri would be one of my top arguments in favor of the Post.

Bruce Blackburn, Designer of Ubiquitous NASA Logo, Dies at 82 

A bit of sad NASA-related news today, too:

Bruce Blackburn, a graphic designer whose modern and minimalist logos became ingrained in the nation’s consciousness, including the four bold red letters for NASA known as the “worm” and the 1976 American Revolution Bicentennial star, died on Feb. 1 in Arvada, Colo., near Denver. He was 82. […]

In a design career of more than 40 years, Mr. Blackburn developed brand imagery for clients like IBM, Mobil and the Museum of Modern Art. But he is best known for the NASA worm, which has become synonymous with space exploration and the concept of the technological future itself.

I’m glad he lived long enough to see NASA re-embrace his wonderful logo. It’s such a perfect mark — one that will always feel like a symbol of the future.

Update: NASA’s 1976 “Graphics Standards Manual” — a 60-page document on how to use the logo. This is how you do it.

NASA’s Perseverance Rover Lands on Mars 

Kenneth Chang, reporting for The New York Times:

NASA safely landed a new robotic rover on Mars on Thursday, beginning its most ambitious effort in decades to directly study whether there was ever life on the now barren red planet.

While the agency has completed other missions to Mars, the $2.7 billion robotic explorer, named Perseverance, carries scientific tools that will bring advanced capabilities to the search for life beyond Earth. The rover, about the size of a car, can use its sophisticated cameras, lasers that can analyze the chemical makeup of Martian rocks and ground-penetrating radar to identify the chemical signatures of fossilized microbial life that may have thrived on Mars when it was a planet full of flowing water.

Great landing, and a great day for science.

More here, from NASA’s own website.