By John Gruber
Joe Rossignol, MacRumors:
Apple plans to stop selling the iPhone 14, iPhone 14 Plus, and third-generation iPhone SE in European Union countries later this month, to comply with a regulation that will soon require newly-sold smartphones with wired charging to be equipped with a USB-C port in those countries, according to French blog iGeneration. All three of these iPhone models are still equipped with a Lightning port for wired charging.
In a paywalled report today, the website said the iPhone models will no longer be sold through Apple’s online store and retail stores in the European Union as of December 28, which is when the regulation goes into force.
It was never clear to me whether this regulation only applied to new devices, or to existing ones. But I guess it applies to existing ones. Until the expected next-gen iPhone SE ships early next year, the lowest-priced new iPhone in the EU will be the iPhone 15, which starts at $700 in the U.S. and around €860 in Europe. (Apple’s prices vary slightly between EU countries.)
Nifty new convert-to-Markdown library from a small indie development shop named Microsoft:
The MarkItDown library is a utility tool for converting various files to Markdown (e.g., for indexing, text analysis, etc.)
It presently supports:
- PDF (.pdf)
- PowerPoint (.pptx)
- Word (.docx)
- Excel (.xlsx)
- Images (EXIF metadata, and OCR)
- Audio (EXIF metadata, and speech transcription)
- HTML (special handling of Wikipedia, etc.)
- Various other text-based formats (csv, json, xml, etc.)
The API is simple:
from markitdown import MarkItDown

markitdown = MarkItDown()
result = markitdown.convert("test.xlsx")
print(result.text_content)
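Extrapolating from that example, a batch run is about as simple — here’s a minimal sketch of my own, assuming the same convert() call shown above and a made-up “docs” folder:

```python
from pathlib import Path

from markitdown import MarkItDown

md = MarkItDown()

# Hypothetical batch run: convert every .docx and .pdf in a "docs" folder
# and write the Markdown alongside each source file.
for source in Path("docs").iterdir():
    if source.suffix.lower() in {".docx", ".pdf"}:
        result = md.convert(str(source))
        source.with_suffix(".md").write_text(result.text_content, encoding="utf-8")
```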
Via Stephan Ango (CEO of the excellent, popular Markdown writing and note-taking app Obsidian), who also points out that Google Docs added Markdown export a few months ago. I’ve never used Google Docs other than to read documents created by others, but MarkItDown seems like a library I might make great use of. “MarkItDown” is even a great name. What a world.
Not bad for a 20-year-old syntax.
Ryan Christoffel, also at 9to5Mac:
There are two key features that are part of iOS 18.2, but aren’t yet ready for the Mac:

- Genmoji
- Mail redesign

Genmoji are an especially unfortunate omission, as they’re available on both iPhone and iPad with iOS and iPadOS 18.2. Meanwhile the Mail app redesign is currently iPhone-exclusive, so it’s missing from both the Mac and iPad in these next software updates.
The omission of Genmoji creation in MacOS 15.2, and the omission of the new AI inbox categorization features in Mail on both iPad and Mac, aren’t surprises — they weren’t in any of the betas for these .2 OS updates. But it is a weirdly glaring omission. Apple itself started promoting screenshots of the Apple Intelligence inbox categorization in Mail for Mac back in October, when the .1 OS updates shipped with the initial round of Apple Intelligence features.
I am reliably informed that the new Mail categorization features are coming soon to iPad and Mac, which I suspect means in the .3 updates. But the first .3 betas aren’t out yet.
Chance Miller has a good rundown for 9to5Mac:
The update includes major new Apple Intelligence features, upgrades to the Camera Control on iPhone 16, a redesign for the Mail app, and much more.
The new Apple Intelligence features lead the list, and certainly lead Apple’s marketing, but there’s quite a bit else new in 18.2 too.
Federico Viticci, writing at MacStories, “Apple Intelligence in iOS 18.2: A Deep Dive into Working with Siri and ChatGPT, Together”:
In testing the updated Writing Tools with ChatGPT integration, I’ve run into some limitations that I will cover below, but I also had two very positive experiences with the Notes app that I want to mention here since they should give you an idea of what’s possible.
In my first test, I was working with a note that contained a list of payments for my work at MacStories and Relay FM, plus the amount of taxes I was setting aside each month. The note originated in Obsidian, and after I pasted it into Apple Notes, it lost all its formatting.
There were no proper section headings, the formatting was inconsistent between paragraphs, and the monetary amounts had been entered with different currency symbols for EUR. I wanted to make the note look prettier with consistent formatting, so I opened the “Compose” field of Writing Tools and sent ChatGPT the following request:
This is a document that describes payments I sent to myself each month from two sources: Relay FM and MacStories. The currency is always EUR. When I mention “set aside”, it means I set aside a percentage of those combined payments for tax purposes. Can you reformat this note in a way that makes more sense?
I hit Return, and after a few seconds, ChatGPT reworked my text with a consistent structure organized into sections with bullet points and proper currency formatting. I was immediately impressed, so I accepted the suggested result, and I ended up with the same note, elegantly formatted just like I asked.
The other day a friend pointed out that using ChatGPT (and the like) for automation purposes is making real the original promise of AppleScript — being able to describe automation tasks using natural language. As I wrote long ago, the idea behind AppleScript was noble, but the truth is that it is a programming language, and in practice it has ultimately frustrated everyone. Programmers find it weird and clumsy compared to scripting languages that don’t attempt to hide that they’re programming languages, and non-programmers find it confusing because it doesn’t really parse natural language at all — it only parses a very specific syntax that happens to look like natural language, but isn’t like natural language is used or understood at all.
Here’s the nut of my aforementioned 2005 piece, “The English-Likeness Monster”:
In English, these two statements ought to be considered synonymous:
path of fonts folder of user domain
path to fonts folder from user domain
But in AppleScript, they are not, and rather are brittlely dependent on the current context. In the global scope, the StandardAdditions OSAX wants “path to” and “from user domain”; in a System Events tell block, System Events wants “path of” and “of user domain”.

The idea was, and I suppose still is, that AppleScript’s English-like facade frees you from worrying about computer-science-y jargon like classes and objects and properties and commands, and allows you to just say what you mean and have it just work.
But saying what you mean, in English, almost never “just works” and compiles successfully as AppleScript, and so to be productive you still have to understand all of the ways that AppleScript actually works. But this is difficult, because the language syntax is optimized for English-likeness, rather than being optimized for making it clear just what the fuck is actually going on. [...]
These prepositional differences are even more exasperating when you consider that “of” and “in” are interchangeable in AppleScript. If you can say either of the following to mean the same thing within a System Events tell block:

path of fonts folder of user domain
path in fonts folder in user domain
and you can say this using StandardAdditions:
path to fonts folder from user domain
then it seems rather natural to assume that the “to” and “from” might be interchangeable with other prepositions as well. But you can’t, and if you’re not aware that StandardAdditions’s “path to” is a single token of two words, it seems rather arbitrary, if not downright random, which prepositions are allowed where.
But LLMs really do just parse natural language. None of that seeming nonsense with some common prepositions working in some contexts, but other common prepositions being required in others. That doesn’t mean LLM agents are always capable of doing what you want — far from it — but the best way to try to get them to do what you want is the same, whether you have a computer science degree or have never written a program in your life: describe what you want as clearly as possible in plain natural language. Just try to ask in the most obvious way possible, and that’s the most likely way that it will work, if it can work. That’s remarkable.
Here’s Viticci’s second example:
The second example of ChatGPT and Writing Tools applied to regular MacStories work involves our annual MacStories Selects awards. Before getting together with the MacStories team on a Zoom call to discuss our nominees and pick winners, we created a shared note in Apple Notes where different writers entered their picks. When I opened the note, I realized that I was behind others and forgot to enter the different categories of awards in my section of the document. So I invoked ChatGPT’s Compose menu under a section heading with my name and asked:
Can you add a section with the names of the same categories that John used? Just the names of those categories.
That worked too, leading Viticci to observe:
Years ago, I would have had to do a lot of copying and pasting, type it all out manually, or write a shortcut with regular expressions to automate this process. Now, the “automation” takes place as a natural language command that has access to the contents of a note and can reformat it accordingly.
Like Viticci, I remain largely skeptical and uncomfortable with AI for purposes of generating original new stuff — writing, imagery, whatever. But as an assistive agent, it’s quite remarkable today and improving at a fast clip.
Not only is using Apple Intelligence for automation more accessible (in every sense) than writing a programming script or creating a Shortcut, it’s also something we’re all much more likely to do for a one-time task. I often create scripts, shortcuts, and macros to automate tasks that recur with some frequency; I seldom do for tasks that I’m only going to do once. But why not use Apple Intelligence and ChatGPT to save a few minutes of tedium? ★
Katie Robinson, reporting for The New York Times:
After President-elect Donald J. Trump announced a cascade of cabinet picks last month, the editorial board of The Los Angeles Times decided it would weigh in. One writer prepared an editorial arguing that the Senate should follow its traditional process for confirming nominees, particularly given the board’s concerns about some of his picks, and ignore Mr. Trump’s call for so-called recess appointments.
The paper’s owner, the billionaire medical entrepreneur Dr. Patrick Soon-Shiong, had other ideas.
Hours before the editorial was set to be sent to the printer for the next day’s newspaper, Dr. Soon-Shiong told the opinion department’s leaders that the editorial could not be published unless the paper also published an editorial with an opposing view.
Baffled by his order and with the print deadline approaching, editors removed the editorial, headlined “Donald Trump’s cabinet choices are not normal. The Senate’s confirmation process should be.” It never ran.
I’m not going to keep pointing to the ways Soon-Shiong is debasing the once-great LA Times. Until and if he sells it, which I don’t expect him to do, it’s over. What the LA Times was is gone. That sounds like hyperbole but it’s the obvious truth. One jackass columnist or even a fabulist reporter won’t sink an entire newspaper’s credibility. The Judith Miller reporting on “weapons of mass destruction” in Iraq was a disaster for the New York Times 20 years ago, but while that saga did lasting damage to the NYT’s credibility, it didn’t sink the ship. But an owner like Soon-Shiong can sink the ship. The LA Times isn’t really a newspaper anymore — it’s a vanity rag.
I’m just fantasizing here, but someone with money should consider sweeping into Los Angeles and setting up a rival publication, and poaching all the talent from the Times. I’d have suggested Jeff Bezos until recently, but, well, not anymore. Off the top of my head: Marc Benioff (who now owns Time magazine) or Laurene Powell Jobs (whose Emerson Collective is the majority owner of The Atlantic), perhaps?
The newspaper business, alas, isn’t what it used to be. When it was thriving, local competition would have already been in place. Even small cities had at least two rival papers. Now, New York might be the only city in America left with any true competition between newspapers.
Ev Williams, writing the backstory of, and raison d’être for, Mozi:
And here we are, 20+ years later, with address books full of partial, duplicate, and outdated information. Perhaps the reason for this is that social networks (or the social network) solved this problem — for a while. When Facebook was ubiquitous it was probably a pretty good reflection of many people’s real-life relationships. It told you where they lived, who you knew in common, and all kinds of other details.
Another idea that seemed obvious was that, given how deeply social humans are, social products would dominate the internet. Ten to fifteen years ago, this seemed inevitable.
But something else happened instead.
Social networks became “social media,” which, at first, meant receiving content from people you chose to hear from. But in the quest to maximize engagement, the timeline of friends and people you picked to follow turned into a free-for-all battle for attention. And it turns out, for most people, your friends aren’t as entertaining as (god forbid) influencers who spend their waking hours making “content.”
In other words, social media became … media.
To tell you the truth, I think there are positive aspects of this evolution (perhaps I’ll get into that in another post). But we clearly lost something.
This whole piece is so good, so clear. This distinction between social networking and social media is obvious in hindsight, but only in hindsight. Williams posted it on Medium (natch), but Mozi’s website links directly to it for their “About” page. I’m excited about this. I think they’re on to something here. It’s even a great name.
New app, spearheaded by Ev Williams:
Mozi is a private social network for seeing your people more, IRL. Add your plans, check who’s in town, and know when you overlap.
iOS only at the moment, with “Sign in with Apple” as the only supported authentication method. One clever idea is that you can share travel plans and your location, and Mozi will coordinate when you might be in the same area as a friend. From their FAQ:
Why do you need access to my contacts? Will you ever contact people in my phone book?
Never. We ask for access to your contacts so that you can connect with the people you already know on Mozi. In order to see someone on Mozi, you have to both be on Mozi and both have one another saved as iOS contacts. We never send, sell or share any of your information, and we will never contact your people on your behalf. And instead of storing any actual phone numbers, we hash (encrypt) them. This ensures both your number and all your contacts remain anonymized and protected.
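Mozi doesn’t document its exact scheme (and, strictly speaking, hashing is one-way rather than encryption), but the general idea of matching hashed contacts instead of storing raw numbers looks something like this minimal sketch — the normalization and the use of SHA-256 here are my assumptions, not Mozi’s actual implementation:

```python
import hashlib


def hashed_contact(raw_number: str) -> str:
    """Return a one-way SHA-256 digest of a (hypothetically) normalized phone number."""
    digits = "".join(ch for ch in raw_number if ch.isdigit())
    # Assumption: normalize 10-digit U.S. numbers to an E.164-style form.
    normalized = "+1" + digits if len(digits) == 10 else "+" + digits
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


# Two differently formatted copies of the same contact produce the same digest,
# so two users can be matched without either side storing the raw number.
print(hashed_contact("(215) 555-0199") == hashed_contact("+1 215-555-0199"))  # True
```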
I’m in, and so far only have three mutuals. But — all three of them are people whose in-person company I truly enjoy. We’ve all, correctly, got our guards up regarding new “social” platforms that want our personal information, but we’ve collectively become so cynical that I worry people don’t even want to try fun new things like Mozi. Ev Williams is uniquely placed to make something like this happen in a trustworthy way.
I’m mostly rooting for Mozi to succeed because I think something like this could work in a way that has nothing but upsides, and there’s nothing like it today. But I’m also rooting for Mozi to take off just to burst the absurdity of Kevin Roose’s October piece in the New York Times trying to make the case that Apple “killed social apps” by increasing the privacy controls for our iOS contacts. I gladly shared my whole contacts list with Mozi, based on the track record of the team and the FAQ quoted above.
Speaking of the NYT, Erin Griffith wrote a profile of Williams for the launch of Mozi:
“The internet did make us more connected,” he said in an interview in Menlo Park, Calif. “It just also made us more divided. It made us more everything.”
Mozi is meant to be a utility. If a user wants to message a friend in the app to make plans, the app directs them to the phone’s texting app.
One last Letterman link: a new half-hour interview about interviewing with Zach Baron for GQ. I watched the first minute and I’m saving the rest for tonight:
Baron: If you read pieces about you — pieces of press, profile stuff like that — from the ’80s and ’90s, even a little bit in the 2000s, you were often portrayed as miserable.
Letterman: (laughs uproariously) Yeah, that’s great. I love that.
The New York Times:
George J. Kresge, who as the entertainer the Amazing Kreskin used mentalist tricks to dazzle audiences as he rose to fame on late-night television in the 1970s, died on Tuesday in Wayne, N.J. He was 89. A close friend, Meir Yedid, said the death, at an assisted living facility, was from complications of dementia.
Kreskin’s feats included divining details of strangers’ personal lives and guessing at playing cards chosen randomly from a deck. And he had a classic trick at live shows: entrusting audience members to hide his paycheck in the auditorium, and then relying on his instincts to find it — or else going without payment for a night.
Somehow his first appearance with Letterman wasn’t until 1990, but after that he was a regular. Just a canonical “late night talk show guest” of that era. He was good at the mentalist tricks, but what made Kreskin great — amazing even — was that he was just such a weird, fun, and funny guy.
In addition to two choices for t-shirts, the new DF Paraphernalia store also has the above hoodies, which are pretty nice, I have to say. I particularly like the drawstrings, which are much more substantial, almost rope-like, than the shoelace-like strings on most hoodies. I wear mine a lot, especially in the winter, as an extra layer. You’d look good in one.
Here’s the thing. The store will not be open year-round. We’re taking orders now, printing to meet demand, and then we’re going to close it down. Order tonight or tomorrow, and if you’re in the U.S., yours should arrive before Christmas. International orders — even those ordered by our good neighbors in Canada — most likely will not.
Wayne Ma and Qianer Liu, in a piece today for The Information (paywalled up the wazoo, sadly), “Apple Is Working on AI Chip With Broadcom”:
Apple is developing its first server chip specially designed for artificial intelligence, according to three people with direct knowledge of the project, as the iPhone maker prepares to deal with the intense computing demands of its new AI features. Apple is working with Broadcom on the chip’s networking technology, which is crucial for AI processing, according to one of the people. If Apple succeeds with the AI chip — internally code-named Baltra and expected to be ready for mass production by 2026 — it would mark a significant milestone for the company’s silicon team. [...]
Broadcom typically doesn’t license its intellectual property, choosing instead to sell chips directly to customers. In its arrangement with Google, for instance, Broadcom translates Google’s AI chip blueprints into designs that can be manufactured, oversees its production with TSMC and sells the finished chips to Google at a markup.
But Broadcom appears to be taking a different tack with Apple. Broadcom is providing a more limited scope of design services to Apple while still providing the iPhone maker with its networking technology, one of the people said. Apple is still managing the chip’s production, which TSMC will handle, another person said. Additional details of the business arrangement couldn’t be learned [sic]1
I’ll go out on a limb and say that it’s Apple choosing to take a different tack with Broadcom than Google did, rather than a choice in any way driven by Broadcom. The Information’s own “arrangement with Google” link above points to this year-ago report that opens: “Google executives have extensively discussed dropping Broadcom as a supplier of artificial intelligence chips as early as 2027, according to a person with direct knowledge of the effort. In that scenario, Google would fully design the chips, known as tensor processing units, in-house, the person said. The move could help Google save billions of dollars in costs annually as it invests heavily in AI development, which is especially pricey compared to other types of computing.” Why would Apple ever agree to an arrangement like that?
The hint of obsequiousness to Broadcom suggests to me, pretty clearly, that it’s sources from Broadcom who provided the leaks for this story.
Anyway, what really caught my eye in this report wasn’t the AI server chips, but rather the following (emphasis to key paragraph added), included seemingly only as an aside even though I thought it was the most interesting nugget in the report (vague shades of Fermat’s Last Theorem):
Apple’s silicon design team in Israel is leading development of the AI chip, according to two of the people. That team was instrumental in designing the processors Apple introduced in 2020 to replace Intel chips in Macs.
Apple this past summer canceled the development of a high-performance chip for Macs — consisting of four smaller chips stitched together — to free up some of its engineers in Israel to work on the AI chip, one of the people said, highlighting the company’s shifting priorities.
To make the chip, Apple is planning to use one of TSMC’s most advanced manufacturing processes, known as N3P, said three people with direct knowledge. That would be an improvement over the manufacturing process used for Apple’s latest computer processor, the M4.
What they’re talking about regarding a cancelled high-end Mac chip would be a hypothetical M-series chip with (effectively) double the specs of an Ultra, which I presume would only be available in a future Mac Pro, and, just pulling adjectives from Apple’s marketing dictionary, I’d bet would be called the “M# Extreme” (where “#” is the M-series generation number). The M1 and M2 Ultra chips are, effectively, two M1/M2 Max chips fused together with something called a silicon interposer that offers extremely high-speed I/O between the fused chips. Performance doesn’t exactly double, but it comes close. A hypothetical quad-Max “Extreme” would effectively double the performance of the same-generation Ultra chips. Such a chip, available exclusively in the Mac Pro, would give the Mac Pro a much more obvious reason to exist alongside the Mac Studio (which, to date, has offered Max and Ultra chip configurations).
But if Apple’s work on that quad-interposed M-series chip was cancelled only “this past summer”, and was for a generation of chips using TSMC’s next-generation N3P process, that would mean it was slated for the M5 or M6 generation, not the M4.2 The M4 generation is fabbed using TSMC’s N3E process, and any additional variants beyond the M4 Max, slated for updated Mac Studios and Mac Pros next year, were designed long before this past summer.
I feel like it’s a lock that there will be an M4 Ultra chip next year, with the performance of two M4 Max chips fused together. Or, perhaps the M4 Ultra will be a standalone design, not two Max chips fused. The M-series Max chips have always been their own designs — not two Pro chips fused together. The same could be true for Ultra chips, starting next year, or some generation further into the future.
But I’ve had my fingers crossed that we’ll also see an “M4 Extreme” — or whatever Apple would decide to call a tier above “Ultra” — sooner rather than later. If The Information’s reporting is correct, however, either we’ll see a quad-Max M4 chip next year, and then it will skip a generation because the engineering team was redirected to work on these AI server chips, or, those engineers were working on the first quad-Max M-series chips, and now the first such M-series chips have been punted even further into the future, if ever. Today’s report has me thinking, sadly, that could be a few years off, at the soonest. ★
That sic is for the missing sentence-ending period. I expect better copy editing from a $400/year subscription (soon going to $500) that keeps badgering me, every time I visit the site, to upgrade to a $1,000/year “Pro” subscription tier. But while I’m slagging on The Information for this sentence, the missing period is the least of its problems. “Additional details of the business arrangement couldn’t be learned” is some passive voice bullshit. What they mean is that Wayne Ma and Qianer Liu were unable to learn any additional details, not that additional details of the business arrangement between Apple and Broadcom are some sort of unknowable information — you know, like the answer to why I continue paying so much money to subscribe to a publication that annoys me. ↩︎
Or even the M7 generation. The lead times on chip designs are measured in years, plural. Back in July 2023, just after the release of the M2-generation Mac Studio models (offering the M2 Max and M2 Ultra) and the first — and so far only — Apple silicon Mac Pro (M2 Ultra), Jason Snell and Myke Hurley got the following tidbit from an anonymous listener of their podcast Upgrade (episode 468; transcript). Hurley read it on air, right up front around the 4:00 mark:
I am an Apple engineer working on the GPU team.
It pains me to say that Jason’s speculation is correct. The quad chip has been canned with no plans to return. For context, we are actively developing what will presumably be the M5 chip. And the quad chip was only ever specced for the M1 and removed late in the project. There are no plans to create a quad chip through at least the M7 generation. My understanding is that the quad required too much effort for too small of a market. Something interesting that may come in the M8 and future generations is called multi-die packaging. This allows the CPU and GPU parts of the chip to be fabricated on different dies and packaged together much like how two max chips make an ultra. With this design, it is conceivable that we could have three, four, or five or more GPU dies with one or two CPU for a graphics powerhouse or vice versa for a CPU workstation that doesn’t need as much GPU grunt. However, as far as I know, no such plans exist yet.
Take that with however many grains of salt you think necessary to season a comment from an anonymous person, but it doesn’t hit a single false note to my ears. And if this little Upgrade birdie was legit, that would suggest that the Israeli chip engineers reassigned from an advanced 4× Mac chip this past summer to work on a new AI server chip would have been working on the M6 generation of Apple silicon, for products launching in 2026–2027. ↩︎︎
Erik Hayden, reporting for The Hollywood Reporter:
For his next move, David Letterman is jumping in to the increasingly crowded free, ad-supported TV channel (FAST) space.
The late-night great’s production company Worldwide Pants has inked a deal with Samsung TV Plus to bring around 4,000 hours of original video to the company’s streaming service, the firms said Wednesday. “I’m very excited about this,” stated Letterman, who glibly added, “Now I can watch myself age without looking in the mirror!”
The output for the 24/7 on-demand channel titled Letterman TV appears to rely heavily on archival clips from his nearly 33-year late-night run, including his CBS Late Show Top Ten lists, “Stupid Tricks” segments, interviews with stars, holiday specials and behind-the-scenes clips along with fresh commentary from Letterman, presumably on all the above.
I don’t know how different this will be from Letterman’s excellent YouTube channel, but honest to god I’d never even heard of “Samsung TV Plus” until reading this.
Juli Clover at MacRumors:
Apple today made a mistake with its macOS Sequoia 15.2 update, releasing the software for two Macs that have yet to be launched. There is a software file for “Mac16,12” and “Mac16,13,” which are upcoming MacBook Air models.
The leaked software references the “MacBook Air (13-inch, M4, 2025)” and the “MacBook Air (15-inch, M4, 2025),” confirming that new M4 MacBook Air models are in development and are likely not too far off from launching.
It’s been widely rumored that Apple is working to bring the M4 chips to its entire Mac lineup, and the MacBook Air is expected to get an M4 refresh in the spring of 2025, so sometime between March and June.
Were these references not in the 15.2 betas? If not, what a weird mistake to happen only in the release builds. But regardless, even inside Apple, I’d file this under “no big whoop”. Of course there are going to be M4-based MacBook Airs next year. The only question is when. My guess is March, just like last year.
Update: Via Mr. Macintosh, it appears the leak came from IPSW builds, which contain a list of Mac models the IPSW can be used to restore.
David Ingram, reporting for NBC News:
U.S. Bankruptcy Judge Christopher Lopez said after a two-day hearing that The Onion’s parent company, Global Tetrahedron, had not submitted the best bid and was wrongly named the winner of an auction last month by a court-appointed trustee.
“I don’t think it’s enough money,” Lopez said in a late-night ruling from the bench in a Houston court. “I’m going to not approve the sale.”
It’s not over ’til it’s over.
Brandon Silverman:
It was September of 2011 and I saw a link on kottke.org to a small collection of incredible typography from something called the Sanborn Fire Insurance Maps. I had never seen them before and they blew my mind. I immediately became a massive fan and in fact, when I got married, my wife and I designed our wedding invitation based off of them.
However, there has never been a place to see all of the art from the maps in one place. Until now.
This website is a free archive dedicated exclusively to creating a one-stop shop for all the incredible typography and art of the Sanborn maps. It includes almost 3,500 unique decorative titles, all drawn before 1923. While large portions of the original maps have been digitized and archived in various places both online and offline, there has never been a comprehensive collection of all of the decorative titles from the Sanborn maps. I hope you enjoy!
I just love this style of turn-of-the-century typography and graphic design. (The last turn of the century, that is.) In our era, this style has been used to wonderful effect by the great Chris Ware.
Via, no surprise, Kottke. What goes around comes around.
Finally, Daring Fireball t-shirts and hoodies are back. Order now, and we’ll start printing shirts at the end of this week. U.S. domestic orders placed by the end of the day Wednesday should arrive before Christmas. International orders — even those ordered by our good neighbors in Canada — most likely will not.
Mark Gurman, in his Power On column for Bloomberg:
Apple is now working on a major effort to support third-party hand controllers in the device’s visionOS software and has teamed up with Sony Group Corp. to make it happen. Apple approached Sony earlier this year, and the duo agreed to work together on launching support for the PlayStation VR2’s hand controllers on the Vision Pro. Inside Sony, the work has been a monthslong undertaking, I’m told. And Apple has discussed the plan with third-party developers, asking them if they’d integrate support into their games. [...]
One hiccup is that Sony doesn’t currently sell its VR hand controllers as a standalone accessory. The company would need to decouple the equipment from its own headset and kick off operations to produce and ship the accessory on its own. As part of the arrangement, Sony would sell the controllers at Apple’s online and retail stores, which already offer PS5 versions.
My thanks to 1Password — which, earlier this year, acquired frequent DF sponsor Kolide — for sponsoring last week at DF. Imagine if you went to the movies and they charged $8,000 for popcorn. Or, imagine you got on a plane and they told you that seatbelts were only available in first class. Your sense of outraged injustice would probably be something like what IT and security professionals feel when a software vendor hits them with the dreaded SSO tax — the practice of charging an outrageous premium for Single Sign-On, often by making it part of a product’s “enterprise tier”. The jump in price can be astonishing — one CRM charges over 5000% more for the tier with SSO. At those prices, only very large companies can afford to pay for SSO. But the problem is that companies of all sizes need it.
Until outraged customers can shame vendors into getting rid of the tax, many businesses have to figure out how to live without SSO. For them, the best route is likely to be a password manager, which also reduces weak and re-used credentials, and enables secure sharing across teams. And a password manager is likely a good investment anyway, for apps that aren’t integrated with SSO. To learn more about the past, present, and future of the SSO tax, read 1Password’s full blog post.
While there is no subscription offering for Daring Fireball (never say never again), I am reminded this week to remind you that, if you enjoy podcasts, you should subscribe to Dithering, the twice-weekly 15-minutes-on-the-button podcast I do with Ben Thompson. Dithering as a standalone subscription costs just $7/month or $70/year. People who try Dithering seem to love it, too — we have remarkably little churn.
Recording the show often helps me coalesce loose ideas into fully-formed thoughts. Both my Tuesday column on Intel’s decline and today’s on using generative AI for research were inspired by our discussion on the show the night before. I toss a lot of takes out on Dithering that never make it here, though. If you’re on the fence, subscribe for a month and you’re only out $7 — but I bet you’ll stick around. Trust me. And thanks to everyone who’s already subscribed.
Late-breaking candidate for best new font of 2024.
Elizabeth Lopatto, writing for The Verge, “Stop Using Generative AI as a Search Engine”:
Now, a defender of AI might — rightly — say that a real journalist should check the answers provided by ChatGPT; that fact-checking is a critical part of our job. I agree, which is why I’ve walked you through my own checking in this article. But these are only the public and embarrassing examples of something I think is happening much more often in private: a normal person is using ChatGPT and trusting the information it gives them.
A mistake, obviously.
One advantage old-school Google Search has over the so-called answer engines is that it links directly to primary sources. Answer engines just give you an answer, and it’s often unclear what the source is. For me, using ChatGPT or Google’s AI function creates extra work — I have to go check the answer against a primary source; old Google Search just gave me that source directly.
Lopatto’s piece was prompted by a spate of historical bullshit people have been inadvertently propagating, after asking generative AI systems for historical examples of presidents granting pardons to family members. Most notably, a column by Charles P. Pierce at Esquire this week — now fully retracted — the entire premise of which was a supposed pardon granted by George H.W. Bush to his black-sheep son Neil Bush. No such pardon was granted.1
Lopatto’s piece is excellent, particularly the way she shows her own work. And the entire premise of her piece is that people are, in fact, embarrassing themselves (in Pierce’s case, spectacularly) and inadvertently spreading misinformation by blindly trusting the answers they’re getting from generative AI models. But I think it’s wrong to argue flatly against the use of generative AI for research, as she does right in her headline. I’ve been late to using generative AI as anything other than a toy curiosity, but in recent months I’ve started using it for work-related research. And now that I’ve started, I’m using it more and more. My basic rule of thumb is that if I’m looking for an article or web page, I use web search (Kagi); if I’m looking for an answer to a question, though, I use ChatGPT (4o). I direct (and trust) ChatGPT as I would a college intern working as a research assistant. I expect accuracy, but assume that I need to double-check everything.
Here’s how I prompted ChatGPT, pretending I intended to write about this week’s political controversy du jour:
Give me a list of U.S. presidential pardons granted to family members, friends, administration officials, and cronies. Basically I’m looking for a list of controversial pardons. I’m interested in the totality of U.S. history, but particularly in recent history, let’s say the last 100 years.
ChatGPT 4o’s response was good: here’s a link to my chat, and an HTML transcript and a screenshot. (Only the screenshot shows where ChatGPT included sources.) I’m quite certain ChatGPT’s response is completely true, and it strikes me as a fair summary of the most controversial pardons in my lifetime. My biggest quibble is that it omits Trump’s pardon of Steve Bannon, a truly outrageous pardon of a genuine scumbag who was an official White House advisor. (Bannon was indicted for a multi-million dollar scheme in which he scammed thousands of political donors into believing they were contributing funds to help build Trump’s fantasy “border wall”.) However, my asking “Any more from Trump?” as a follow-up resulted in a longer list of 13 pardons, all factual, that included Bannon.2
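(For what it’s worth, if you’d rather run this sort of query from a script than from the ChatGPT app, a minimal sketch using OpenAI’s Python client would look something like the following — the model name is an assumption on my part, and the answer needs the same double-checking as anything from the app.)

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

prompt = (
    "Give me a list of U.S. presidential pardons granted to family members, "
    "friends, administration officials, and cronies. Basically I'm looking for "
    "a list of controversial pardons, particularly from the last 100 years."
)

# Same research question as above, just sent through the API instead of the app.
response = client.chat.completions.create(
    model="gpt-4o",  # assumption; substitute whatever current model you prefer
    messages=[{"role": "user", "content": prompt}],
)

# Treat the answer as a starting point, not a source — verify before citing.
print(response.choices[0].message.content)
```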
I want to make clear that I don’t think Lopatto is in any way a head-in-the-sand Luddite. But all of the arguments being made today against using generative AI to answer questions sound exactly like the arguments against citing web pages as sources in the 1990s. The argument then was basically “Anyone can publish anything on the web, and even if a web page is accurate today, it can be changed at any time” — which was true then and remains true today.3 But it’s just a new technology — one that isn’t going anywhere because it’s incredibly useful in ways nothing else is, but its inherent downsides will force us to adapt and learn new ways of sourcing, citing, and verifying information. The rise of the web didn’t make libraries go away. Generative AI won’t make web search go away.
If I had wanted to write a column about presidential pardons, I’d find ChatGPT’s assistance a far better starting point than I’d have gotten through any general web search. But to quote an adage Reagan was fond of: “Trust, but verify.” ★
Worth noting this from Lopatto: “I emailed Hearst to ask if Esquire writer Charles P. Pierce had used ChatGPT as a source for his article. Spokesperson Allison Keane said he hadn’t and declined to say anything further about how the error might have occurred.” I find it unlikely that generative AI wasn’t involved somewhere in the chain of this falsehood that Bush pardoned his son, but whatever Pierce referenced to come upon it, he fucked up good. ↩︎
One small curiosity is that ChatGPT’s list, while mostly chronological, swapped Carter and Ford. One small amusement is that the only supposedly controversial pardon ChatGPT came up with for Ronald Reagan was New York Yankees owner George Steinbrenner. A complicated man, The Boss was. ↩︎︎
Who’s to say a dog doesn’t have useful information to provide? ↩︎︎
Purely fun, pay-whatever-you-think-fair app for the Mac from Simon Støvring (developer of numerous fine apps such as Runestone and Scriptable):
Festivitas automatically adds festive lights to your menu bar and dock upon launch and you can tweak their appearance to match your preferences.
There is something very core to the Mac’s origins about not just making a software toy like this, but putting effort into making everything about it really nice. Harks back to Steven Halls’s The Talking Moose and, of course, the undisputed king of the genre, Eric Shapiro’s The Grouch. Oh, and of course (thanks to Stephen Hackett for the reminder), Holiday Lights.
Update, Friday 6 December: Today’s 1.1 update brings several improvements, including making the lights look way cooler if your Dock is on the left or right (as god intended).
David Frum, writing at The Atlantic, regarding his jarring appearance as a guest on MSNBC’s Morning Joe:
Before getting to the article, I was asked about the nomination of Pete Hegseth as secretary of defense — specifically about an NBC News report that his heavy drinking worried colleagues at Fox News and at the veterans organizations he’d headed. [...] I answered by reminding viewers of some history:
In 1989, President George H. W. Bush nominated John Tower, senator from Texas, for secretary of defense. Tower was a very considerable person, a real defense intellectual, someone who deeply understood defense, unlike the current nominee. It emerged that Tower had a drinking problem, and when he was drinking too much he would make himself a nuisance or worse to women around him. And for that reason, his nomination collapsed in 1989. You don’t want to think that our moral standards have declined so much that you can say: Let’s take all the drinking, all the sex-pesting, subtract any knowledge of defense, subtract any leadership, and there is your next secretary of defense for the 21st century.
I told this story in pungent terms. It’s cable TV, after all. And I introduced the discussion with a joke: “If you’re too drunk for Fox News, you’re very, very drunk indeed.”
At the next ad break, a producer spoke into my ear. He objected to my comments about Fox and warned me not to repeat them. I said something noncommittal and got another round of warning. After the break, I was asked a follow-up question on a different topic, about President Joe Biden’s pardon of his son. I did not revert to the earlier discussion, not because I had been warned, but because I had said my piece. I was then told that I was excused from the studio chair. Shortly afterward, co-host Mika Brzezinski read an apology for my remarks.
Jesus. The abject obsequiousness is staggering. Yes, it’s a joke at Fox News’s expense. But Fox News — on-air — has indeed been backing Hegseth’s nomination, even though it’s quite obvious that everyone who works there knows he has an alcohol problem. From that NBC News report (note that despite their names, the MSNBC and NBC News newsrooms are no longer associated):
Pete Hegseth, President-elect Donald Trump’s pick for defense secretary, drank in ways that concerned his colleagues at Fox News, according to 10 current and former Fox employees who spoke with NBC News. Two of those people said that on more than a dozen occasions during Hegseth’s time as a co-host of Fox & Friends Weekend, which began in 2017, they smelled alcohol on him before he went on air. Those same two people, plus another, said that during his time there he appeared on television after they’d heard him talk about being hungover as he was getting ready or on set.
One of the sources said they smelled alcohol on him as recently as last month and heard him complain about being hungover this fall. None of the sources with whom NBC News has spoken could recall an instance when Hegseth missed a scheduled appearance because he’d been drinking. “Everyone would be talking about it behind the scenes before he went on the air,” one of the former Fox employees said.
Note too that Fox & Friends Weekend airs at 6:00 in the morning.
Oliver Darcy, in a well-sourced report at Status (paywalled, alas, but with a preview of the article if you sign up for the free version of his newsletter, which I agree is sort of a “Yeah, no thanks” offer):
Patrick Soon-Shiong is tightening his grip over the Los Angeles Times. The MAGA-curious owner, who drew controversy when he blocked the newspaper’s planned endorsement of Kamala Harris, has waded further into its operations since the November election, according to new information I have learned and public remarks the billionaire made Wednesday during a media appearance with right-wing personality Scott Jennings. [...] Several veteran staffers told me that morale has never been lower, with some people even wondering whether the newspaper will be disfigured beyond recognition under this new era of Soon-Shiong’s reign. [...]
One disturbing example came after the newspaper published an opinion piece in November about Elon Musk that Soon-Shiong did not care for, people familiar with the matter told me. The piece, written by Times opinion contributor Virginia Heffernan, carried the headline, “Elon Musk bought himself a starring role in Trump’s second term. What could go wrong?”
While the headline seemed innocuous, Soon-Shiong expressed dismay over it, according to the people familiar with the matter. The headline was allowed to remain unchanged. But, as a result, the people said, a new rule was put into place: Prior to publishing opinion stories, the headlines must be emailed over to Soon-Shiong, where he can then choose to weigh in. While it is normal for newspaper owners to influence the opinion wing of a newspaper, it is highly unusual for an owner to have article headlines sent to them ahead of publication for review.
That also seems like a lot of work for a busy billionaire. Wonder how he might handle that?
Speaking to Jennings as the latter hosted a radio show Wednesday, the billionaire revealed that, behind the scenes, he is working on developing a “bias meter” powered by artificial intelligence that will be placed on both opinion and news stories. Soon-Shiong said that the hope is to roll out the new feature, which will use the technology to seemingly warn readers that his own reporters are biased, as early as next month. [...]
Suffice to say, but when the journalists at the Times heard the “breaking news” that Soon-Shiong delivered to Jennings, they spiraled even further. “People are now deeply fucking concerned,” one staffer bluntly told me Wednesday night.
What could go wrong?
In response, the LAT Guild issued a statement, concluding:
The statements of Dr. Soon-Shiong in the press and on social media reflect his own opinions and do not shape reporting by our member-journalists.
Our members — and all Times staffers — abide by a strict set of ethics guidelines, which call for fairness, precision, transparency, vigilance against bias, and an earnest search to understand all sides of an issue. Those longstanding principles will continue guiding our work.
The Guild has secured strong ethics protections for our members, including the right to withhold one’s byline, and we will firmly guard against any effort to improperly or unfairly alter our reporting.
Stephanie Palazzolo, writing for The Information (paywalled, alas):
Researchers at OpenAI believe that some rival AI developers are training their reasoning models by using OpenAI’s o1 reasoning models to generate training data, according to a person who has spoken to the company’s researchers about it. In short, the rivals can ask the o1 models to solve various problems and then use the models’ chain of thought — the “thought process” the models use to solve those problems — as training data, the person said.
You might be wondering how rival developers can do that. OpenAI has explicitly said it hides its reasoning models’ raw chains of thought due in part to competitive concerns.
But in answering questions, o1 models include a summarized version of the chain of thought to help the customer understand how the models arrived at the answer. Rivals can simply ask another LLM to take that summarized chain of thought and predict what the raw chain of thought might have been, the person who spoke with the researchers said.
And I’m sure these OpenAI researchers are happy to provide this training data to competitors, without having granted permission, in the same way they trained (and continue to train) their own models on publicly available web pages, without having been granted permission. Right?
From The Stanford Review editor-in-chief Julia Steinberg’s interview with university president Jonathan Levin:
Stanford Review: What is the most important problem in the world right now?
President Levin: There’s no answer to that question. There are too many important problems to give you a single answer.
Stanford Review: That is an application question that we have to answer to apply here.
Alex Heath, writing at The Verge:
“I’m actually very optimistic this time around,” Bezos said of Trump during a rare public appearance at The New York Times DealBook Summit on Wednesday. “He seems to have a lot of energy around reducing regulation. If I can help him do that, I’m going to help him.”
Trump railed against Bezos and his companies — Amazon, Blue Origin, and The Washington Post — during his 2016 term. Bezos defended himself but it did little to help his reputation with Trump. Now, his companies have a lot at stake in the coming administration, from the FTC’s antitrust lawsuit against Amazon to Blue Origin’s efforts to compete with SpaceX for government contracts.
Onstage at the DealBook Summit on Wednesday, Bezos called Trump “calmer this time” and “more settled.” He said he will try to “talk him out of” the idea that the press, which includes The Washington Post, is an enemy of the people.
“You’ve probably grown in the last eight years,” he said to DealBook’s Andrew Ross Sorkin. “He has, too.”
Next up after Bezos at DealBook Summit was Charlie Brown, who professed optimism regarding his next attempt at kicking a football held by Lucy Van Pelt. What the fuck did they put in the water at this conference?
Or, perhaps, these very smart guys are also craven, and these nonsensical remarks, which are quite obviously contrary to reality, are simply additional exhibits of shameful cowardly compliance.
While writing the previous item regarding the FBI encouraging the use of E2EE text and call protocols, I wound up at the Play Store page for Google Messages. It’s shamefully misleading regarding Google Messages’s support for end-to-end encryption. As I wrote in the previous post, Google Messages does support E2EE, but only over RCS and only if all participants in the chat are using a recent version of Google Messages. But the second screenshot in the Play Store listing flatly declares “Conversations are end-to-end encrypted”, full stop. That is some serious bullshit.
I realize that “Some conversations are end-to-end encrypted” will naturally spur curiosity regarding which conversations are encrypted and which aren’t, but that’s the truth. And users of the app should be aware of that. “RCS conversations with other Google Messages users are encrypted” would work.
Then, in the “report card” section of the listing, it states the following:
Data is encrypted in transit
Your data is transferred over a secure connection
Which, again, is only true sometimes. It’s downright fraudulent to describe Google Messages’s transit security this way. Imagine a typical Android user without technical expertise who takes the advice (now coming from the FBI) to use end-to-end encryption for their messaging. A reasonable person who trusts Google would look at Google’s own description of Google Messages and conclude that if you use Google Messages, all your messages will be secure. That’s false. And depending who you communicate with — iPhone users, Android users with old devices, Android users who use other text messaging apps — it’s quite likely most of your messages won’t be secure.
Just be honest! The E2EE between Google Messages users using Android phones that support RCS is completely seamless and automatic (I just tried it myself using my Android burner), but E2EE is never available for SMS, and never available if a participant in the chat is using any RCS client (on Android or Apple Messages) other than Google Messages. That’s an essential distinction that should be made clear, not obfuscated.
While I’m at it, it’s also embarrassing that Google Voice has no support for RCS at all. It’s Google’s own app and service, and Google has been the world’s most vocal proponent of RCS messaging.
Lastly, I think it’s a bad idea that Google Messages colors all RCS message bubbles with the exact same colors (dark blue bubbles with white text, natch). SMS messages, at least on my Pixel 4, are pale blue with black text. Google Messages does put a tiny lock in the timeline to indicate when an RCS chat is secure, and they also put a lock badge on the Send button’s paper airplane icon, so there are visual indications of whether an RCS chat is encrypted, but because the message bubble colors are the same for all RCS chats, it’s subtle, not instantly obvious like it is with Apple Messages, where green means “SMS or RCS, never encrypted” and blue means “iMessage, always encrypted”.
Kevin Collier, reporting for NBC News:
Amid an unprecedented cyberattack on telecommunications companies such as AT&T and Verizon, U.S. officials have recommended that Americans use encrypted messaging apps to ensure their communications stay hidden from foreign hackers.
The hacking campaign, nicknamed Salt Typhoon by Microsoft, is one of the largest intelligence compromises in U.S. history, and it has not yet been fully remediated. Officials on a news call Tuesday refused to set a timetable for declaring the country’s telecommunications systems free of interlopers. Officials had told NBC News that China hacked AT&T, Verizon and Lumen Technologies to spy on customers.
A spokesperson for the Chinese Embassy in Washington did not immediately respond to a request for comment.
In the call Tuesday, two officials — a senior FBI official who asked not to be named and Jeff Greene, executive assistant director for cybersecurity at the Cybersecurity and Infrastructure Security Agency — both recommended using encrypted messaging apps to Americans who want to minimize the chances of China’s intercepting their communications.
“Our suggestion, what we have told folks internally, is not new here: Encryption is your friend, whether it’s on text messaging or if you have the capacity to use encrypted voice communication. Even if the adversary is able to intercept the data, if it is encrypted, it will make it impossible,” Greene said.
It seems kind of new for the FBI to call encryption “our friend”, but now that I think about it, their beef over the years has primarily been about gaining access to locked devices, not eavesdropping on communication protocols. Their advocacy stance on device encryption has not changed — they still want a “back door for good guys” there. Their thinking, I think, is that E2EE communications are a good thing because they protect against remote eavesdropping from foreign adversaries — exactly like this campaign waged by China. The FBI doesn’t need to intercept communications over the wire. When the FBI wants to see someone’s communications, they get a warrant to seize their devices. That’s why the FBI wants device back doors, but is now encouraging the use of protocols that are truly E2EE. But that’s not to say that law enforcement agencies worldwide don’t still fantasize about mandatory “back doors for good guys”.
Here’s a clunker of a paragraph from this NBC News story, though:
Privacy advocates have long advocated using end-to-end encrypted apps. Signal and WhatsApp automatically implement end-to-end encryption in both calls and messages. Google Messages and iMessage also can encrypt calls and texts end to end.
It’s true that both voice and text communications over Signal and WhatsApp are always secured with end-to-end encryption. But Google Messages is an Android app that only handles text messaging via SMS and RCS, not voice. There’s a “Call” button in Google Messages but that just dials the contact using the Phone app — just a plain old-fashioned unencrypted phone call. (There’s a Video Call button in Google Messages, but that button tries to launch Google Meet.) Some text chats in Google Messages are encrypted, but only those using RCS in which all participants are using a recent version of Google Messages. Google Messages does provide visual indicators of the encryption status of a chat. The RCS standard has no encryption; E2EE RCS chats in Google Messages use Google’s proprietary extension and are exclusive to the Google Messages app, so RCS chats between Google Messages and other apps, most conspicuously Apple Messages, are not encrypted.
iMessage is not an app. It is Apple’s proprietary protocol, available within its Messages app. The entire iMessage protocol was built upon end-to-end encryption — all iMessage messages have been E2EE from the start. Apple also offers FaceTime for voice and video calls, and FaceTime calls are always secured by E2EE.
A few nuggets of wisdom from Andy Grove, in an interview with Esquire after he retired as Intel’s CEO, but still served as chairman:
Profits are the lifeblood of enterprise. Don’t let anyone tell you different.
You must understand your mistakes. Study the hell out of them. You’re not going to have the chance of making the same mistake again — you can’t step into the river again at the same place and the same time — but you will have the chance of making a similar mistake.
Status is a very dangerous thing. I’ve met too many people who make it a point of pride that they never take money out of a cash machine, people who are too good to have their own e-mail address, because that’s for everybody else but not them. It’s hard to fight the temptation to set yourself apart from the rest of the world.
Grove, still serving as CEO during Intel’s zenith in 1997, didn’t even have an office. He worked out of an 8x9-foot cubicle.
What you’re seeing today is a very, very rapid evolution of an industry where the milieu is better understood by people who grew up in the same time frame as the industry. A lot of the years that many of us have spent in business before this time are of only limited relevance.
This industry is not like any other. Computers don’t get incrementally more powerful; they get exponentially more powerful.
The Verge’s Sean Hollister penned an excellent high-level summary of Pat Gelsinger’s ignominious ouster from Intel, under the headline “What Happened to Intel?” A wee bit of pussyfooting here, though, caught my eye:
Just how bad was it before Gelsinger took the top job?
Not great! There were bad bets, multiple generations of delayed chips, quality assurance issues, and then Apple decided to abandon Intel in favor of its homegrown Arm-based chips — which turned out to be good, seriously showing up Intel in the laptop performance and battery life realms. We wrote all about it in “The summer Intel fell behind.”
Intel had earlier misses, too: the company long regretted its decision not to put Intel inside the iPhone, and it failed to execute on phone chips for Android handsets as well. It arguably missed the boat on the entire mobile revolution.
There’s no argument about it. Intel completely missed mobile. iPhones never used Intel chips and Apple Silicon chips are all fabbed by TSMC. Apple’s chips are the best in the industry, also without argument, and the only mobile chips that can be seen as reasonable competition are from Qualcomm (and maybe Samsung). Intel has never been a player in that game, and it’s a game Intel needed not only to be a player in, but to dominate.
It’s not just that smartphones are now a bigger industry than the PC industry ever was, and that Intel has missed out on becoming a dominant supplier to phone makers. That’s bad, but it’s not the worst of it. It’s that those ARM-based mobile chips — Apple Silicon and Qualcomm’s Snapdragon lineup — got so good that they’re now taking over large swaths of the high end of the PC market. Partly from an obsessive focus on performance-per-watt efficiency, partly from the inherent advantages of ARM’s architecture, partly from engineering talent and strategy, and partly from the profound benefits of economies of scale as the mobile market exploded. Apple, as we all know, moved the entire Mac platform from Intel chips to Apple Silicon starting in 2020. The Mac “only” has 15 percent of the worldwide PC market, but the entirety of the Mac’s market share is at the premium end of the market. Losing the Mac was a huge loss for Intel. And now Qualcomm and Microsoft are pushing Windows laptops to ARM chips too, for the same reasons: not just performance-per-watt, but sheer performance. x86 CPUs are still dominant on gaming PCs, but even there, AMD is considered the cream of the crop.
Of all companies, Intel should have seen the potential for this to happen. Intel did not take “phone chips” seriously, but within a decade, those ostensibly toy “phone chips” were the best CPUs in the world for premium PC laptops, and their efficiency advantages carry over to data centers too. And Apple has shown that they’re even superior for workstation-class desktops. That’s exactly how Intel became Intel back at the outset of the personal computing revolution. PCs were seen as mere toys by the “real” computer makers of the 1970s and early 1980s. IBM was caught so flatfooted that when they saw the need to enter the PC market, they went to Intel for the chips and Microsoft for DOS — decisions that both Intel and Microsoft capitalized upon, resulting in a tag-team hardware/software dominance of the entire computing industry that lasted a full quarter century, while IBM was left sidelined as just another maker of PCs. From Intel’s perspective, the x86 platform went from being a “toy” to being the dominant architecture for everything from cheap laptops all the way up to data-center-class servers.
ARM-based “phone chips” did the same thing to x86 that Intel’s x86 “PC chips” had done, decades earlier, to mainframes. Likewise, Nvidia turned “graphics cards for video game enthusiasts” — also once considered mere toys — into what is now, depending on stock market fluctuations, the most valuable company in the world. They’re neck and neck with the other company that pantsed Intel for silicon design leadership: Apple. Creating “the world’s best chips” remains an incredible, almost unfathomably profitable place to be as a business. Apple and Nvidia can both say that about the very different segments of the market in which their chips dominate. Intel can’t say that today about any of the segments for which it produces chips. TSMC, the company that fabs all chips for Apple Silicon and most of Nvidia’s leading chips, is 9th on the list of companies ranked by market cap, with a spot in the top 10 that Intel used to occupy. Today, Intel is 180th — and on a trajectory to fall out of the top 200.
Intel never should have been blithe about the threat. The company’s longtime CEO and chairman (and employee #3) Andy Grove titled his book Only the Paranoid Survive. The full passage from which he drew the title:
Business success contains the seeds of its own destruction. Success breeds complacency. Complacency breeds failure. Only the paranoid survive.
Grove retired as CEO in 1998 and as chairman in 2005. It’s as though no one at Intel after him listened to a word he said. Grove’s words don’t read merely as advice — they read today as a postmortem synopsis for Intel’s own precipitous decline over the last 20 years. ★
Nilay Patel:
So many of you like The Verge that we’ve actually gotten a shocking number of notes from people asking how they can pay to support our work. It’s no secret that lots of great websites and publications have gone under over the past few years as the open web falls apart, and it’s clear that directly supporting the creators you love is a big part of how everyone gets to stay working on the modern internet.
At the same time, we didn’t want to simply paywall the entire site — it’s a tragedy that traditional journalism is retreating behind paywalls while nonsense spreads across platforms for free. We also think our big, popular homepage is a resource worth investing in. So we’re rethinking The Verge in a freemium model: our homepage, core news posts, Decoder interview transcripts, Quick Posts, Storystreams, and live blogs will remain free. We know so many of you depend on us to curate the news every day, and we’re going to stay focused on making a great homepage that’s worth checking out regularly, whether you pay us or not.
Our original reporting, reviews, and features will be behind a dynamic metered paywall — many of you will never hit the paywall, but if you read us a lot, we’ll ask you to pay.
This sounds like an extremely well-considered balance between keeping much of the site open to all, allowing metered access to a limited number of premium articles free of charge, and creating a new sustainable revenue stream from subscribers. Bravo.
Count me in as a day one subscriber.
Christopher Mims, writing for The Wall Street Journal (News+):
The company’s core business is under siege. People are increasingly getting answers from artificial intelligence. Younger generations are using other platforms to gather information. And the quality of the results delivered by its search engine is deteriorating as the web is flooded with AI-generated content. Taken together, these forces could lead to long-term decline in Google search traffic, and the outsize profits generated from it, which prop up its parent company Alphabet’s money-losing bets on things like its Waymo self-driving unit.
The first danger facing Google is clear and present: When people want to search for information or go shopping on the internet, they are shifting to Google’s competitors, and advertising dollars are following them. In 2025, eMarketer projects, Google’s share of the U.S. search-advertising market will fall below 50% for the first time since the company began tracking it.
The accompanying chart (“Estimated share of U.S. search advertising revenue”) suggests Google’s decline has been Amazon’s gain. Basically, Google may still dominate the market for general web search, but people more and more are searching using apps and services that aren’t (or aren’t only) general web search engines. And the reason why is that Google web search has gotten worse.
Special guest Allen Pike joins the show to talk about the state of generative AI and how Apple Intelligence measures up (so far). Also: some speculation on Apple’s pending acquisition of the ever-difficult-to-pronounce Pixelmator.
Amazon is running a holiday discount on M3 MacBook Airs, but it’s tricky — you need to click around through various color choices and watch the prices and ship dates. My main link on this post goes to the config that looks like their best deal for price-conscious gift buyers: the 13-inch M3 MacBook Air in space gray, with 24 GB RAM and 512 GB of storage for $1,299, a $200 discount from the list price, with delivery in a few days. They’ve also got the same configuration, at the same price, with the same delivery window in silver. Starlight only has “5 remaining in stock” (and that was at 8 just a few minutes ago, so they’ll likely be gone by the time you read this), and midnight is already out of stock.
The 13-inch configuration with 16 GB RAM and 512 GB storage is just $1,099, but delivery dates are in early January. They’ve got the configuration with 16 GB RAM and 256 GB storage for just $899, but only in midnight and starlight, and with delivery windows of “1 to 2 months”.
The best option for 15-inch M3 MacBook Airs is the configuration with 24 GB RAM and 512 GB storage for $1,424 — a $275 discount from the regular price of $1,699. That’s available at that price, with next-week delivery, in all four colors. They’ve also got $200 discounts on various configurations with 16 GB RAM, but delivery on those models is out in January.
Needless to say, all of these links are using my make-me-rich affiliate code. And Amazon still has USB-C AirPods Pro 2 for just $154, almost $100 off the regular price.
Ian King, Liana Baker, and Ryan Gould, reporting for Bloomberg:*
Intel Corp. Chief Executive Officer Pat Gelsinger was forced out after the board lost confidence in his plans to turn around the iconic chipmaker, adding to turmoil at one of the pioneers of the technology industry.
The clash came to a head last week when Gelsinger met with the board about the company’s progress on winning back market share and narrowing the gap with Nvidia Corp., according to people familiar with the matter. He was given the option to retire or be removed, and chose to announce the end of his career at Intel, said the people, who declined to be identified discussing proceedings that were not made public.
Intel Chief Financial Officer David Zinsner and Michelle Johnston Holthaus are serving as interim co-CEOs while the board searches for Gelsinger’s replacement, the company said in a statement. Frank Yeary, independent chair of the board of Intel, will serve as interim executive chair.
See also: Techmeme’s roundup.
* Bloomberg, of course, is the publication that published “The Big Hack” in October 2018 — a sensational story alleging that data centers of Apple, Amazon, and dozens of other companies were compromised by China’s intelligence services. The story presented no confirmable evidence at all, was vehemently denied by all companies involved, has not been confirmed by a single other publication (despite much effort to do so), and has been largely discredited by one of Bloomberg’s own sources. By all appearances “The Big Hack” was complete bullshit. Yet Bloomberg has issued no correction or retraction, and their only ostensibly substantial follow-up contained not one shred of evidence to back up their allegations. Bloomberg seemingly hopes we’ll all just forget about it. I say we do not just forget about it. Everything they publish should be treated with skepticism until they retract “The Big Hack” or provide evidence that any of it was true.
My thanks to Crunchy Bagel — the company of developer Quentin Zervaas — for sponsoring this week’s DF RSS feed to promote Streaks, their excellent app for iPhone and Apple Watch. Streaks is a to-do list that helps you form good habits. The point is to motivate you to tackle the things you want to do: anything from daily exercise goals to learning a new language, taking your vitamins, or quitting a bad habit. Anything. I’ve brushed my teeth daily since I was a child but I’ve never been good about flossing — until, generally, a few days before a scheduled dental cleaning. I’ve been using Streaks lately to groove a daily flossing habit. (I expect a pat on the back the next time I’m at the dentist.)
Streaks first sponsored DF back in 2016 and everything I wrote about it then remains true today. It’s a brilliant design, both visually and conceptually. I’ve tried a few apps like this over the years — including a few new ones in recent years — and what kills most of them is friction. If it takes too many fiddly steps to mark off the things you do, you stop using the app. Streaks makes it incredibly simple and fast to mark things done. For anything activity-related, you don’t have to do anything at all — it just tracks information from HealthKit (with your permission, of course) automatically. And in terms of the visual design, Streaks is both highly distinctive and very iOS-y — it doesn’t look like a stock iOS app, but it very much looks and feels like a good native iOS app. That’s a combination that takes a great eye to pull off. (Unsurprisingly, Streaks won an Apple Design Award a few years ago, and has often been featured by Apple in the App Store.)
iOS has not been standing still over the last 8 years and neither has Zervaas. Streaks supports all the latest stuff you’d hope for in an iOS app, including interactive widgets. Streaks’s interactive widgets reduce even further the friction of marking things done — interactive widgets were practically made for apps like Streaks. Streaks also has a great Apple Watch companion app.
I only accept sponsorships for products or services that I’m proud to support. But Streaks is so good that I want to go out of my way to draw attention to it (again). I’m not praising it with superlatives because it’s my sponsor; I’m doing so because it’s superlatively good. It’s a one-time purchase, and the latest update has added seasonal themes, just in time for Christmas (and your New Year’s resolutions).
If you have any sort of interest in an app to help reinforce daily habits (or an interest in great UI design), go check Streaks out.
If you have young children, be sure to also try Little Streaks. It’s a great way to help kids focus on routines: meal time, bedtime, learning to ride a bike, brushing their teeth (and flossing!) — anything. Little Streaks is free for one routine, or use code “DARING” for 50% off the first year of a subscription for unlimited routines.
Cal Paterson:
Large language models (LLMs) like Chat-GPT and Claude.ai are whizzy and cool. A lot of people think that they are going to be The Future. Maybe they are — but that doesn’t mean that building them is going to be a profitable business.
In the 1960s, airlines were The Future. That is why old films have so many swish shots of airports in them. Airlines though, turned out to be an unavoidably rubbish business. I’ve flown on loads of airlines that have gone bust: Monarch, WOW Air, Thomas Cook, Flybmi, Zoom. And those are all busts from before coronavirus - times change but being an airline is always a bad idea.
That’s odd, because other businesses, even ones which seem really stupid, are much more profitable. Selling fizzy drinks is, surprisingly, an amazing business. Perhaps the best. Coca-Cola’s return on equity has rarely fallen below 30% in any given year. That seems very unfair because being an airline is hard work but making Coke is pretty easy. It’s even more galling because Coca-Cola don’t actually make the Coke themselves - that is outsourced to “bottling companies”. They literally just sell it.
This is such a crackerjack essay. Clear, concise, and uncomplicated. I find it hard to argue with. I’ve repeatedly mentioned an internal paper that leaked out of Google last year, titled “We Have No Moat, and Neither Does OpenAI”. The fact that OpenAI has lobbied for stringent AI regulation around the globe suggests that they fear this too — their encouragement of regulation could be explained as the pursuit of a regulatory moat, because there is no technical or business-model moat to be had.
Paterson, expounding on his comparison to the airline industry, observes that commercial airlines have only two suppliers: Boeing and Airbus. He continues:
LLM makers sometimes imply that their suppliers are cloud companies like Amazon Web Services, Google Cloud, etc. That wouldn’t be so bad because you could shop around and make them compete to cut the huge cost of model training.
Really though, LLM makers have only one true supplier: NVIDIA. NVIDIA make the chips that all models are trained on — regardless of cloud vendor. And that gives NVIDIA colossal, near total pricing power. NVIDIA are more powerful relative to Anthropic or OpenAI than Airbus or Boeing could ever dream of being.
At this moment, there are three companies in the world with market caps in excess of $3 trillion: Apple, Nvidia, and Microsoft. There are only two more with market caps in excess of $2 trillion: Amazon and Google. Engineering, training, and providing LLMs isn’t the business with a moat. The business with a moat is making the cutting-edge computer hardware that trains LLMs, and that belongs to Nvidia.
I have more to say about Paterson’s essay, but I really just want you to read it for now.
Kind of wild that this entire sub-site is still standing on Apple.com, including working video. (Fingers crossed that my linking to it doesn’t bring it to the attention of someone who decides to 404 it.)
From Nathan Edwards’s 6/10 review of the M4 iMac for The Verge:
I also do not love that the stand has no height adjustment, and you can’t swap it for a more ergonomic option without buying an entirely different computer. Apple sells a version of the iMac with a VESA mount, but it doesn’t come with a stand at all, and most height-adjustable VESA mounts are not as pretty as the iMac. The Studio Display has a height-adjustable stand option, so we know Apple can make one it’s willing to put out into the world. It just hasn’t done so here. But whatever. I have hardcover books. It’s fine.
It was Nilay Patel, not Edwards, who reviewed the Studio Display for The Verge, but in that review the $1,600 cost — including the $400 surcharge for the optional adjustable stand — was one of the three bullet items under “The Bad”. So it’s not hard to guess that if the M4 iMac had an optional adjustable stand, it would still be listed as a con, because surely that option, from Apple, would cost at least $300.
(I’ve used a Studio Display with the pricey options for nano-texture and adjustable height ever since it came out, and consider both options well worth the cost.)
But the weird thing about Edwards’s review is that the whole thing is predicated on his not seeing the appeal of an all-in-one computer. I feel the same way, personally. My primary computer is a MacBook Pro that I connect, lid-closed, to the aforementioned-in-parenthetical-aside Studio Display most of the time. If I were to buy a dedicated desktop Mac I’d get either a Mac Mini or Mac Studio and connect that to a Studio Display. But the iMac is obviously intended for people who want an all-in-one.
It makes for a very strange, dare I say pointless, review. It’s like a bicycle review from someone who admits that they only ever walk or drive, and doesn’t see why anyone would ride a bike instead. In theory, someone who doesn’t care for genre X can write a review of something from genre X, and their dislike of the genre might provide a unique perspective. (David Foster Wallace wrote a masterpiece of the genre with the title essay in A Supposedly Fun Thing I’ll Never Do Again regarding a weeklong Caribbean cruise.) But the review still needs to gauge the product for what it is. Does anyone make a better all-in-one PC than the iMac? If so, who? If not, why is this a 6/10?
Holiday shopping bundle of 13 excellent Mac apps, with two ways to buy. Get the whole bundle of 13 apps for $74 (a 76 percent discount from the combined regular prices), or pick and choose à la carte and buy apps at 50 percent off.
Included in the promotion is Stairways Software’s astonishingly powerful and useful Keyboard Maestro, which almost never goes on sale. There are many longstanding Mac apps and utilities that I enjoy, appreciate, and recommend. There are very few that I can say I’d feel lost without. Keyboard Maestro is one of those.
Other apps in the Space/Time bundle that I use: TextSniper (instantly OCR any text you see on screen), DaisyDisk (disk space visualizer/cleanup), CleanShot X (advanced screenshot utility), and Bartender (menu bar item manager).
Fun interaction design treatise from George Cave.
Happy Thanksgiving, everyone.
Borderline incredible discount on AirPods Pro 2 at Amazon. This is just short of $100 off the retail list price of $249. (Buy through this link and I’ll get rich on the affiliate commission.)