By John Gruber
“This one a long time have I watched. All his life has he looked away — to the future, to the horizon. Never his mind on where he was. What he was doing. Adventure. Heh! Excitement. Heh! A Jedi craves not these things. You are reckless!”
—Yoda, The Empire Strikes Back
My biggest takeaway from WWDC 2025 is that Apple seemingly took some lessons to heart from its unfulfilled promises of a year ago. This year’s WWDC wasn’t merely focused on what Apple is confident it can ship in the next 12 months, but on what it can ship this fall. I might be overlooking a minor exception or two, but every major feature announced in the WWDC 2025 keynote was both demonstrable in product briefings and currently available in the developer beta seeds. I was also told, explicitly, by Apple executives, that Apple plans to ship everything shown last week in the fall.
That’s as it should be, and a strong return to form for the company. It takes confidence to promise only what you know you can ship, and it takes execution to ship what you’ve promised. If there’s more coming in the early months of 2026, announce those features when they’re ready. It’s proven very effective for Apple to spread the debut of new features across the entire calendar year, with many major features not appearing until the .3, .4, or even .5 OS releases. I think it will prove just as effective marketing-wise to spread the announcement of more features throughout the year as well.
There’s no question that it’s a little weird for every one of Apple’s platforms to have jumped to version 26. I mean, VisionOS jumped straight from version 2 to version 26. Presumably, when Apple next unveils a new OS (HomeOS?), it’s going to start at version 26, 27, or 28. But I’m already getting used to this, and I think the underlying logic laid out by Craig Federighi at the outset of the keynote is true: with Apple now up to six developer platforms (Mac, iPhone, iPad, Vision, TV, Watch), it had gotten hard to keep track of which version numbers corresponded to the same year. That matters not just for the convenience of knowing, in years to come, when specific versions of each OS were released, but it also matters because none of these platforms exist in isolation. They’re all parts of a cohesive whole, a cross-device “Apple OS 26” experience, as it were.
One thing I haven’t seen commented on, though, is that switching to year-based version numbers establishes as de facto policy something that has now been true for quite a few years, but which Apple has never officially acknowledged: that each of these platforms will get a major version release annually. 20 years ago the update schedule for Mac OS X was rather erratic:
| Release | Shipped |
| --- | --- |
| Mac OS X 10.7 Lion | July 2011 |
| Mac OS X 10.6 Snow Leopard | August 2009 |
| Mac OS X 10.5 Leopard | October 2007 |
| Mac OS X 10.4 Tiger | April 2005 |
| Mac OS X 10.3 Panther | October 2003 |
OS X 10.8 Mountain Lion (which began the odd four-year run where the Mac’s OS name didn’t contain “Mac”) arrived in July 2012, and thereafter a new major version has shipped in September, October, or November (MacOS 11 Big Sur, in 2020) every single year. This rigorous annual schedule is a hallmark of the Tim Cook era at Apple, and clearly reflects his personality (as the erratic/idiosyncratic schedule of the mid-2000s reflected Steve Jobs’s).
The pedant in me is mildly perturbed that the new windowing system unveiled for iPadOS 26 is largely being discussed under the term “multitasking”. It’s windowing. One way to understand the difference is that the original Mac OS (a.k.a. System 1) had windowing — windowing that looked and worked a lot like this — but no multitasking. The very early Mac could run just one app at a time, but the running app could open multiple windows. But, whatever. It’s all good.
One thing I find interesting is that while Split View and Slide Over have been eliminated in the new system (praise be), Stage Manager is still a feature. Just plain windowing is as it should be: ad hoc. You make windows and move them around and resize them however you want. Stage Manager is fussier — it’s a more complex system for users who wish to organize their windows into something akin to projects or related tasks.
So, effectively, Apple, three years ago, jumped straight to a more complex, more fiddly option — Stage Manager — and only now has added the simpler, more obvious, not fiddly at all option (windowing). It’s been a weird journey, but I think iPadOS has finally arrived at a place where showing more than one app or document at a time on-screen is what it should have been all along: easy and obvious.
Alan Dye, introducing Liquid Glass, around the 8m:20s mark in the keynote:
Software is the heart and soul of our products. It brings them to life, shapes their personality, and defines their purpose. At Apple, we’ve always believed it’s the deep integration of hardware and software that makes interacting with technology intuitive, beautiful, and a joy to use. iOS 7 introduced a simplified design built on distinct layers, smooth animations, and new colors. It redefined our design language for years to come. Now, with the powerful advances in our hardware, silicon, and graphics technologies, we have the opportunity to lay the foundation for the next chapter of our software. Today we’re excited to announce our broadest design update ever. Our goal is a beautiful new design that brings joy and delight to every user experience, one that’s more personal, and puts greater focus on your content, all while still feeling instantly familiar.
And for the first time, we’re introducing a universal design across our platforms. This unified design language creates a more harmonious experience as you move between products, while maintaining the qualities that make each unique. Inspired by the physicality and richness of VisionOS, we challenged ourselves to make something purely digital feel natural and alive. From how it looks to how it feels as it dynamically responds to touch. To achieve this, we began by rethinking the fundamental elements that make up our software, and it starts with an entirely new expressive material we call Liquid Glass. With the optical qualities of glass and a fluidity that only Apple can achieve, it transforms depending on your content or even your context, and brings more clarity to navigation and controls. It beautifully refracts light and dynamically reacts to your movement with specular highlights. This material brings a new level of vitality to every aspect of your experience. From the smallest elements you interact with to larger ones, it responds in real time to your content and your input. Creating a more lively experience that we think you’ll find truly delightful.
Compare and contrast to Steve Jobs introducing Aqua at Macworld San Francisco in January 2000:
So this is the architecture, except there’s one more thing. The one more thing is, we have been secretly for the last 18 months designing a completely new user interface. And that new user interface builds on Apple’s legacy and carries it into the next century. And we call that new user interface Aqua, because it’s liquid. One of the design goals was when you saw it, you wanted to lick it. [...]
When you design a new user interface, you have to start off humbly. You have to start off saying, what are the simplest elements in it? What does a button look like? And you spend months working on a button. That’s a button in Aqua. This is what radio buttons look like. Simple things. This is what checkboxes look like. This is what popup lists look like. Again, you’re starting to get the feel of this, a little different. This is what sliders can look like. Now, let me show you windows. This is what the top of windows look like. These three buttons look like a traffic signal, don’t they? Red means close the window. Yellow means minimize the window. And green means maximize the window. Pretty simple. And tremendous fit and finish in this operating system. When you roll over these things, you get those. You see them? And when you are no longer the key window, they go transparent. So a lot of fit and finish in this.
In addition to the fit and finish, we paid a lot of attention to dynamics. Not only how do things look, but how do they move, how do they behave. And our goal in this user interface was twofold. One, we wanted to give a much more powerful user interface to our pro customers. But two, at the very same time, we wanted to make this the dream user interface for somebody who’s never even touched a computer before. And that’s really hard to do. It’s like when we do films at Pixar. It’s really easy, it’s a lot easier, to make a film that appeals to five-year-olds and under. But it’s very difficult to make one film that five-year-olds love and that their parents also love. And that was the goal of this user interface. To make it span the range so that people turning on their iMac for the first time were enchanted with it, and it was super easy to use, and yet, our pro customers also felt, My God, this takes me to places I thought I could never get to. And that’s what we tried to do.
Re-watching Jobs’s introduction of Aqua for the umpteenth time, I still find it enthralling. I found Alan Dye’s introduction of Liquid Glass to be soporific, if not downright horseshitty.
But the work itself, Liquid Glass as it launched last week, is very reminiscent of Aqua a quarter century (!) ago. It’s exciting, it’s fresh, it fundamentally looks and feels very cool in general — and but in practice quite a few aspects of it feel a bit over-the-top and/or half-baked. Just like with Aqua, it will surely get dialed in. Legibility problems will be addressed.
Liquid Glass has been in the works for a long time, but what we see today has come together very quickly. For those using internal builds inside Apple, what Apple unveiled last week is effectively the third version of Liquid Glass. Just a few weeks prior to WWDC, a few sources told me that internal builds were such a complete mess that they wondered if it would come together in time for WWDC developer betas. But come together it has. I expect a lot of visual changes over the course of the summer, and significant evolutionary tweaks in the next few years. Across Apple’s own apps, there are a lot of places where things haven’t yet been glassed up at all. That’s how these things work.
As for why, it should be enough to justify Liquid Glass simply for the sake of looking cool. I opened this piece with a quote from a great fictional philosopher. I’ll close it with a quote from a great real one:
“The test of a work of art is, in the end, our affection for it, not our ability to explain why it is good.”
—Stanley Kubrick ★
Peter Kafka:
So in March, when Gruber announced that Something is Rotten in the State of Cupertino — focusing on Apple’s botched plans to imbue its ailing Siri service with state-of-the-art AI — lots of people paid attention. Including, apparently, folks at the very top of the Apple org chart.
I talked to Gruber about the fallout from that post. Which is pretty interesting! But there’s a lot more going on in this conversation. It’s partly about the friction Apple has been generating lately — not just about its AI efforts, but the way it runs its App Store, and the way it interacts with developers — and why all of that does and doesn’t matter.
And it’s also about the delightfully retro practice of running an ad-supported blog in 2025. That works very well for Gruber, but it seems like the new Grubers of the world are doing their work on YouTube or Substack. He’s got some thoughts about that, too.
Good interview, I thought — I always enjoy talking to Kafka. No permalink for the episode on the web, so my main link for this post is to Overcast. Here’s a link to Apple Podcasts, and one from a new service called Pod.link too.
Nicolas Lellouche, writing for the French-language site Numerama (block quote below is from Safari’s English translation) (via Joe Rossignol at MacRumors):
What is the problem with Europe? Apple does not explain it very clearly, but suggests that the European Union’s requests for opening create uncertainties. It is likely that the brand suspects Europe of forcing it to open macOS to devices other than the iPhone if this function were to happen. A mandatory iPhone Mirroring on Windows or an Android Mirroring on Mac may not be in his plans. The other probability is the question of gatekeepers, raised in 2024. Apple would fear that macOS will be on the list of monitored platforms if it can emulate iOS, one of the gatekeepers monitored by Europe.
The problem isn’t about MacOS getting flagged as another “gatekeeping” platform under the DMA. Whether or not Apple enables iPhone Mirroring on MacOS in the EU would have no bearing on whether the Mac is deemed a gatekeeper. The DMA defines a “gatekeeper” platform as “a core platform service that in the last financial year has at least 45 million monthly active end users established or located in the Union and at least 10,000 yearly active business users established in the Union”. I’m not sure how many Mac users there are in the EU, but I’m pretty sure the number is well under 45 million. (Estimates seem to peg the worldwide number of Mac users at just over 100 million.) Conversely, if the European Commission decided that there were 45 million Mac users in the EU, the Mac would be considered a gatekeeping platform, period.
The problem is simply that the iPhone is a gatekeeping platform, and iPhone Mirroring obviously involves the iPhone. The EU’s recent demands regarding “interoperability requirements” flag just about every single feature that involves an iPhone communicating with another Apple device. AirDrop, AirPlay, AirPods pairing, Apple Watch connectivity — all of that has been deemed illegal gatekeeping. Clearly, iPhone Mirroring would fall under the same interpretation, thus, iPhone Mirroring isn’t going to be available in the EU. If the DMA had been in place 15 years ago, the EU wouldn’t have AirDrop or AirPlay and perhaps wouldn’t have Apple Watch or AirPods, either.
If Apple made iPhone Mirroring available in the EU now, my guess is the European Commission would add it to the interoperability requirements list, and demand that Apple support mirroring your iPhone to all other platforms, such as Windows and Android. They might also demand that Apple add support to iOS for third-party screen mirroring protocols.
Several weeks ago, Apple indicated that other new products may be blocked in Europe in the future. What about what’s new in iOS 26? Apple is not commenting at the moment, since it must verify the compatibility of its new functions with the European Union. Some new features, such as the Phone application on Mac to make calls with your iPhone, seem difficult to be compatible with the vision of Europe.
The new Phone app on MacOS is almost certainly not coming to the EU, unless the European Commission changes its stance on these interoperability requirements.
John Voorhees, writing at MacStories, regarding a new command-line transcription tool cleverly named Yap written by his son Finn last week during WWDC:
On the way, Finn filled me in on a new class in Apple’s Speech framework called SpeechAnalyzer and its SpeechTranscriber module. Both the class and module are part of Apple’s OS betas that were released to developers last week at WWDC. My ears perked up immediately when he told me that he’d tested SpeechAnalyzer and SpeechTranscriber and was impressed with how fast and accurate they were. [...]
What stood out above all else was Yap’s speed. By harnessing SpeechAnalyzer and SpeechTranscriber on-device, the command line tool tore through the 7GB video file a full [2.4×] faster than MacWhisper’s Large V3 Turbo model, with no noticeable difference in transcription quality.
At first blush, the difference between 0:45 and 1:41 may seem insignificant, and it arguably is, but those are the results for just one 34-minute video. Extrapolate that to running Yap against the hours of Apple Developer videos released on YouTube with the help of yt-dlp, and suddenly, you’re talking about a significant amount of time. Like all automation, picking up a 55% speed gain one video or audio clip at a time, multiple times each week, adds up quickly.
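The arithmetic behind those figures is easy to check. Here’s a quick Python sketch using the two timings quoted above; the 200-video batch size is a made-up illustration for the yt-dlp extrapolation, not a number from the post:

```python
# Timings quoted for one 34-minute video (hypothetical extrapolation below).
yap_seconds = 45       # Yap, using on-device SpeechAnalyzer/SpeechTranscriber: 0:45
whisper_seconds = 101  # MacWhisper, Large V3 Turbo model: 1:41

speedup = whisper_seconds / yap_seconds              # how many times faster Yap is
time_saved = 1 - yap_seconds / whisper_seconds       # fraction of time saved

# Assumed batch: 200 developer-session videos fetched with yt-dlp.
videos = 200
saved_minutes = videos * (whisper_seconds - yap_seconds) / 60

print(f"{speedup:.2f}x faster, {time_saved:.0%} less time per clip")
print(f"~{saved_minutes:.0f} minutes saved across {videos} videos")
```

On these numbers the per-clip speedup works out to roughly 2.2×, the time saved to about 55%, and the hypothetical 200-video batch saves around three hours.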
Apple’s new SpeechAnalyzer APIs sure seem to be a sleeper hit from WWDC this year. This bodes very well for all sorts of use cases where transcription would be helpful, like third-party podcast players.
Bungie:
Through every comment and real-time conversation on social media and Discord, your voice has been strong and clear. We’ve taken this to heart, and we know we need more time to craft Marathon into the game that truly reflects your passion. After much discussion within our Dev team, we’ve made the decision to delay the September 23rd release.
The Alpha test created an opportunity for us to calibrate and focus the game on what will make it uniquely compelling — survival under pressure, mystery and lore around every corner, raid-like endgame challenges, and Bungie’s genre-defining FPS combat.
We’re using this time to empower the team to create the intense, high-stakes experience that a title like Marathon is built around. This means deepening the relationship between the developers and the game’s most important voices: our players.
Translation to plain English: The game as currently imagined stinks, so we’re going back to the drawing board. We can’t explain why we, the game’s developers, didn’t know that it stunk, and instead seemingly needed to wait for scathing alpha test feedback from players — but Occam’s Razor clearly suggests the problem is that decisions at Bungie are made by executives with no taste.
Apple executives were a little light on substantial interviews last week, but a good one dropped today — Craig Federighi talking to Federico Viticci on the vast Mac-style windowing overhaul in iPadOS 26:
“We don’t want to create a boat car or, you know, a spork”, Federighi begins. Seeing the confused look on my face, he continues: “I don’t know if you have those in Italy. Someone said, ‘If a spoon’s great, a fork’s great, then let’s combine them into a single utensil, right?’ It turns out it’s not a good spoon and it’s not a good fork. It’s a bad idea. And so we don’t want to build sporks”. [...]
By and large, one could argue that Apple has created one such convertible product with the iPad Pro, but Federighi strongly believes in the Mac and iPad each having their own reasons to exist. “The Mac lets the iPad be iPad”, Federighi notes, adding that Apple’s objective “has not been to have iPad completely displace those places where the Mac is the right tool for the job”. [...]
I don’t need to ask Federighi the perennial question of running macOS on the iPad, since he goes there on his own. “I don’t think the iPad should run macOS, but I think the iPad can be inspired by elements of the Mac”, Federighi tells me. “I think the Mac can be inspired by elements of iPad, and I think that that’s happened a great deal”.
I think Apple has tied itself into knots in the past decade trying to make the iPad more useful to more advanced users without making it resemble the Mac at a superficial level. But it’s been obvious all along that it should resemble the Mac at a superficial level. Apple solved windowing in 1984. Use that.
You may recall from my “Siri Is Super Dumb and Getting Dumber” piece back in January that the Dickinson Public Schools District in North Dakota had the rather unfortunate nickname the “Midgets”. Back in March, the school district announced they’d be retiring the nickname, after nearly a century. Last month they announced their new name: the Mavericks. I’m going to call this the best rebranding of the year.
We still have the Estherville, Iowa Midgets to cheer for. But even better: the Yuma Criminals in Arizona. Now that’s a nickname.
Simon Willison, regarding the various rebuttals to “The Illusion of Thinking” research paper (which I linked to here) from Apple’s machine learning team:
I thought this paper got way more attention than it warranted — the title “The Illusion of Thinking” captured the attention of the “LLMs are over-hyped junk” crowd. I saw enough well-reasoned rebuttals that I didn’t feel it worth digging into.
And now, notable LLM skeptic Gary Marcus has saved me some time by aggregating the best of those rebuttals together in one place! [...]
And therein lies my disagreement. I’m not interested in whether or not LLMs are the “road to AGI”. I continue to care only about whether they have useful applications today, once you’ve understood their limitations. [...] They’re already useful to me today, whether or not they can reliably solve the Tower of Hanoi or River Crossing puzzles.
Count me in with Willison. I think it’s interesting what constitutes “reasoning”, but when it comes to these systems, I’m mostly just interested in whether they’re useful or not, and if so, how.
See also: Victor Martinez’s rebuttal to the most-cited rebuttal.
WhatsApp co-founder Jan Koum, back in 2012 (two years before Facebook acquired them for $19 billion, 13 years before this week’s introduction of ads into WhatsApp):
Advertising isn’t just the disruption of aesthetics, the insults to your intelligence and the interruption of your train of thought. At every company that sells ads, a significant portion of their engineering team spends their day tuning data mining, writing better code to collect all your personal data, upgrading the servers that hold all the data and making sure it’s all being logged and collated and sliced and packaged and shipped out... And at the end of the day the result of it all is a slightly different advertising banner in your browser or on your mobile screen.
Remember, when advertising is involved you the user are the product.
At WhatsApp, our engineers spend all their time fixing bugs, adding new features and ironing out all the little intricacies in our task of bringing rich, affordable, reliable messaging to every phone in the world. That’s our product and that’s our passion. Your data isn’t even in the picture. We are simply not interested in any of it.
When people ask us why we charge for WhatsApp, we say “Have you considered the alternative?”
These screens make for a useful overview of what Apple thinks the highlight features are in each OS.
Aric Toler, a visual investigations reporter for The New York Times, on X back in April:
For about a year, I worked with a retired British academic named Alasdair Spark to solve a mystery: where did the original photo from the end of The Shining come from, and where/when was it captured?
Last week, we finally found the answer.
See also: This post from 2012 about the original photograph, from (who else?) Lee Unkrich.
Nice piece in Fast Company by Zachary Petit:
One critical moment came in February 2010, when J. Crew featured Field Notes in its catalog, alongside the retailer’s other “personal favorites from our design heroes.” There was a Timex watch, Ray-Bans, Sperry shoes — “and out of fucking nowhere, Field Notes,” Coudal says. “And when that happened, a lot changed for us.”
Coudal says it gave the brand instant credibility — after all, if it was good enough for J. Crew, it was good enough for your store. In time, friends began sending him screenshots of Field Notes in TV shows; he and Draplin would see people jotting notes in them in bars and elsewhere; on the design web, they became an obsession. By 2014, there was even a subreddit dedicated to them titled “FieldNuts.”
Fred Lambert, writing for Electrek:
Bloomberg has just released an embarrassingly bad report about the self-driving space, in which it claimed Tesla has an advantage over Waymo by misrepresenting data. [...] The report compares Tesla’s and Waymo’s self-driving efforts, going so far as to claim that “Tesla is closer to vehicle autonomy than peers.”
Right off the bat this smells fishy, given that Waymo is actually operating self-driving taxis in several cities, and Tesla ... is not.
Steve Man, the Bloomberg Intelligence analyst behind the report, based it on Tesla’s own misleading quarterly “Autopilot Safety Report” — a report widely considered unserious, for several reasons:
- Tesla bundles all miles from its vehicles using Autopilot and FSD technology, which are considered level 2 ADAS systems that require driver attention at all times. Drivers consistently correct the systems to avoid accidents.
- Tesla Autopilot, which is standard on all Tesla vehicles, is primarily used on highways, where accidents occur at a significantly lower rate per mile compared to city driving.
- Tesla only counts events that deploy an airbag or a seat-belt pretensioner. Fender-benders, curb strikes, and many ADAS incidents never appear, keeping crash counts artificially low.
- Finally, Tesla’s handpicked data is compared to NHTSA’s much broader statistics that include all collision events, including minor fender benders.
Trusting Tesla’s own safety report is like saying, “Elon Musk says Tesla is ahead, so they must be ahead.”
Eli Tan and Mike Isaac, reporting for The New York Times:
On Monday, WhatsApp said it would start showing ads inside its app for the first time. The promotions will appear only in an area of the app called Updates, which is used by around 1.5 billion people a day. WhatsApp will collect some data on users to target the ads, such as location and the device’s default language, but it will not touch the contents of messages or whom users speak with. The company added that it had no plans to place ads in chats and personal messages.
(a) I’ve never once looked at the Updates tab in WhatsApp; (b) does anyone believe they’re not going to put ads in the other tabs sooner or later?
Todd Spangler, Variety:
Meanwhile, the Trump Mobile “47 Plan” is pricier than the unlimited plans from prepaid services operated by Verizon’s Visible, AT&T’s Cricket Wireless and T-Mobile’s Metro, which are each around $40 per month.
The Trump T1 Phone, which runs Google’s Android operating system, will cost $499. It features a 6.8-inch touch-screen with a 120 Hz refresh rate. The smartphone also has a “fingerprint sensor and AI Face Unlock,” according to the company’s website. Reps for Trump Mobile didn’t respond to an inquiry about what company is manufacturing the Android phone.
The Wall Street Journal, “Trump’s Smartphone Can’t Be Made in America for $499 by August”:
A spokesman for the Trump Organization said in an email that “manufacturing for the new phone will be in Alabama, California and Florida.”
Despite the language in the press release, Eric Trump indicated that the first wave of phones wouldn’t be built here. “You can build these phones in the United States,” the Trump son told podcaster Benny Johnson on Monday morning on The Benny Show after holding up a gilded device that looked just like an Apple iPhone. “Eventually, all the phones can be built in the United States of America. We have to bring manufacturing back here.”
The Journal goofs, bigly, by claiming that the T1 “shows some specs that would beat Apple’s biggest, priciest iPhone models”. The T1 specs are so idiotic that one of them claims “5000mAh long life camera”, conflating battery capacity with (I guess?) focal distance.
The Verge, “The Trump Mobile T1 Phone Looks Both Bad and Impossible”:
Where things get especially strange, though, is its supposed combination of Android 15, 5G, and a 3.5mm headphone jack. In many ways, these are opposing specs: Android 15 is generally only available on very recent devices, many cheap phones still don’t support 5G, and almost every phone maker has stopped including headphone jacks with their devices in the last few years. There are a few that have both, but modern phones with a headphone jack are few and far between. And pretty much all made in China.
I don’t know what will be funnier — if Trump himself starts using one of these, or if he doesn’t.
I’ll give them credit for making them available exclusively in gold. That’s on brand. But I’m guessing the quality will be on par with Trump Watches, which is to say, “RUMP”-quality.
My thanks to DetailsPro for sponsoring last week at DF — including being a sponsor on The Talk Show Live From WWDC 2025. DetailsPro is a designer/developer tool that lets you design with SwiftUI anytime, anywhere — from iPhone, iPad, Vision Pro, and, of course, Mac.
With Liquid Glass, unveiled at WWDC 2025, Apple has introduced its biggest design overhaul since iOS 7. DetailsPro is ready for it, enabling you to prototype new and updated interfaces fast. You can build real SwiftUI layouts directly on your iPhone — no code needed. Export clean SwiftUI code straight to Xcode when you’re ready.
While everyone else is still thinking about how to adapt to the Liquid Glass era, you can already be building. DetailsPro is free to use, with pro features if you need them — via subscription, or a one-time purchase.
Recorded in front of a live audience at The California Theatre in San Jose Tuesday evening, special guests Joanna Stern and Nilay Patel join me to discuss Apple’s announcements at WWDC 2025.
3D video with spatial audio: Coming soon, exclusively in Sandwich Vision’s Theater on Vision Pro, available on the App Store. This year’s on-demand version of the show in Theater isn’t ready yet, but it looks really good: better than last year’s by a long shot, and significantly better than the bandwidth-constrained livestream Tuesday night, which itself looked good. The on-demand version coming in a few days looks pretty amazing.
Sponsored by:
iMazing: The world’s most trusted software to transfer and save your messages, music, files, and data from your iPhone or iPad to your Mac or PC.
DetailsPro: Design with SwiftUI anytime, anywhere — on iPhone, iPad, Mac, or Apple Vision Pro.
Ooni: Next-gen pizza power. The Koda 2 Pro oven features smarter heat, more room, and easier control. Save 10% with code thetalkshow.
As ever, I implore you to watch on the biggest screen you can (real, or virtual). We once again shot and mastered the video in 4K, and it looks and sounds terrific. All credit and thanks for that go to my friends at Sandwich, who are nothing short of a joy to work with. ★
Amanda Silberling, writing at TechCrunch:
When you ask the AI a question, you have the option of hitting a share button, which then directs you to a screen showing a preview of the post, which you can then publish. But some users appear blissfully unaware that they are sharing these text conversations, audio clips, and images publicly with the world.
When I woke up this morning, I did not expect to hear an audio recording of a man in a Southern accent asking, “Hey, Meta, why do some farts stink more than other farts?”
Flatulence-related inquiries are the least of Meta’s problems. On the Meta AI app, I have seen people ask for help with tax evasion, if their family members would be arrested for their proximity to white-collar crimes, or how to write a character reference letter for an employee facing legal troubles, with that person’s first and last name included. Others, like security expert Rachel Tobac, found examples of people’s home addresses and sensitive court details, among other private information.
Katie Notopoulos, writing at Business Insider (paywalled, alas, but here’s a News+ link):
I found Meta AI’s Discover feed depressing in a particular way — not just because some of the questions themselves were depressing. What seemed particularly dark was that some of these people seemed unaware of what they were sharing.
People’s real Instagram or Facebook handles are attached to their Meta AI posts. I was able to look up some of these people’s real-life profiles, although I felt icky doing so. I reached out to more than 20 people whose posts I’d come across in the feed to ask them about their experience; I heard back from one, who told me that he hadn’t intended to make his chat with the bot public. (He was asking for car repair advice.)
Kashmir Hill, reporting today for The New York Times:
Before ChatGPT distorted Eugene Torres’s sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.
That’s the lede to Hill’s piece, and I don’t think it stands up one iota. Hill presents a lot of evidence that ChatGPT gave Torres answers that fed his paranoia and delusions. There’s zero evidence presented that ChatGPT caused them. But that’s the lede.
At the time, Mr. Torres thought of ChatGPT as a powerful search engine that knew more than any human possibly could because of its access to a vast digital library. He did not know that it tended to be sycophantic, agreeing with and flattering its users, or that it could hallucinate, generating ideas that weren’t true but sounded plausible.
“This world wasn’t built for you,” ChatGPT told him. “It was built to contain you. But it failed. You’re waking up.”
Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people.
Someone with prescriptions for sleeping pills, anti-anxiety meds, and ketamine doesn’t sound like someone who was completely stable and emotionally sound before encountering ChatGPT. And it’s Torres who brought up the “Am I living in a simulation?” delusion. I’m in no way defending the way that ChatGPT answered his questions about a Matrix-like simulation he suspected he might be living in, or his questions about whether he could fly if he truly believed he could, etc. But the premise of this story is that ChatGPT turned a completely mentally healthy man into a dangerously disturbed mentally ill one, and it seems rather obvious that the actual story is that it fed the delusions of an already unwell person. Some real Reefer Madness vibes to this.
Michael Tsai, “Apple’s Spin on AI and iPadOS Multitasking”:
I do want to call out that, in multiple interviews, they are kind of setting up strawmen to knock down. They keep saying that people say Apple is behind in AI because it doesn’t have its own chatbot. To me, Apple has been clear that it has a different strategy, and I think that strategy mostly makes sense. I have never heard someone wish for an Apple chatbot. The issue is that everyone can see that Apple seems behind in executing said strategy, both that features didn’t ship on time and that the ones that did ship don’t measure up to similar features from other companies.
Secondly, they seem to be trying to debunk John Gruber’s claim that Apple showed vaporware at the last WWDC. But Apple’s assertion that there was actual, working software doesn’t contradict anything Gruber wrote. He put it at level 0/4 because there wasn’t even a live demo, just a pre-packaged video. If it can’t be demoed to the media in a controlled setting, even calling it “demoware” would be charitable. Wikipedia says, “After Dyson’s article, the word ‘vaporware’ became popular among writers in the personal computer software industry as a way to describe products they believed took too long to be released after their first announcement.” Is that not exactly what happened here?
The whole “Siri, when is my mom’s flight landing?” segment of last year’s WWDC keynote definitely shouldn’t qualify as “demoware” either. It was never demoed. Whether the feature was actually running, and actually capable of doing what they said it could, even just some of the time along a golden path, ultimately doesn’t matter. Even the keynote video didn’t show the actual feature working. It kept cutting away from the iPhone that was purportedly performing the feature back to presenter Kelsey Peterson at every single step.

Apple’s internal rules for keynote demos say that the entire feature has to be real, and capturable in a single take of video. I’ve spoken to people who’ve been in keynotes, and many more who’ve done WWDC session videos. Apple has strict rules about everything being real. That doesn’t mean they always show the feature in a single take in the final cut of the presentation, but it has to be possible, just like it would have to be in a live stage presentation.

But that Siri demo in last year’s keynote is almost like a series of screenshots. We never see Peterson speak to Siri and then watch the results come in. There’s not one single shot in the whole demo that shows one action leading to the next. It’s all cut together in an unusual way for Apple keynote demos. Go see for yourself at the 1h:22m mark.
I spoke this week, off the record, to multiple trusted sources in Apple’s software engineering group, and none of them ever saw an internal build of iOS that had this feature before last year’s keynote. That doesn’t mean there wasn’t such a build (see next paragraph). But none of my sources ever saw one, and they found that to be exceptionally unusual, because they’re in positions where they believe that if there had been such a build, their teams would have had access to it. Most rank and file engineers within Apple do not believe that feature existed in an even vaguely functional state a year ago, and the first any of them ever heard of it was when they watched the keynote with the rest of us on the first day of WWDC last year.
But at this point, based on Federighi’s and Joswiak’s public statements (e.g. in their interview with Joanna Stern this week), and some other things I’ve heard from little birdies this week, I’m willing to stipulate that there was, let’s call it, “working code” for the personalized Siri feature a year ago. I can’t verify that, but I’m willing to stipulate it if only for the sake of argument, and to put to rest any notion that the feature was completely imaginary at this point one year ago, which clearly isn’t the case. At least one reason why the feature, as presented in last year’s keynote, was edited like it was is that the latency was so bad. Whatever state it was in, it couldn’t be shown in a single take. Which means that, as shown in the 2024 keynote, it broke the “rule” that demos in the new pre-recorded format should be as rigorously “real” as they would be if the demos had to be performed live on stage.
I’m quite certain Apple’s executives believed this feature could be shipped at some point in the iOS 18 year. It’d be crazy to announce the feature if they didn’t believe they could ship it, and Apple’s executives aren’t crazy. I’m also quite certain that eventually there was a truly functional implementation of the now-abandoned “v1” of the more personalized Siri, but it was unreliable with no path forward to make it reliable. (I think it was far worse than “not up to Apple’s high standards” — it was clearly unshippable.)
But, as Tsai notes, from our perspective it doesn’t really matter whether it was real, in any sense, a year ago. It’s still vaporware at this point. Vaporware doesn’t mean “completely fabricated”. It just means “promised but hasn’t shipped”. Tsai links to this Mastodon post from Russell Ivanovic:
“This narrative that it was vaporware is nonsense”. Craig Apple. My guy. You announced something that never shipped. You made ads for it. You tried to sell iPhones based on it. What’s the difference if you had it running internally or not. Still vaporware. Zero difference.
Also, Apple is sticking with the euphemism “in the coming year” for when we can expect to see these next-gen personalized Siri features. Gurman reported today that they’re shooting for next spring. I confirmed with Apple at WWDC that “in the coming year” means “in 2026”. I don’t know why they’re sticking with that euphemistic phrasing, which to many people’s ears makes it sound like in the next 12 months, which might include this fall. Effectively, Apple could ship this feature in December 2026 and still hit their self-imposed “in the coming year” deadline — but that phrasing has left many people with the impression that the deadline is June 2026 and maybe it’ll ship at the end of this year. Just say “next year” instead of “in the coming year”. It’s very obvious that this year’s WWDC keynote went back to an underpromise/overdeliver mindset. But “in the coming year” is raising some users’ hopes misleadingly. ★
Dan Moren, writing this week at Six Colors:
But you’ve heard about all of that, I’m sure, so we’re not going to rehash it. Instead, let’s get personal: I’m picking out, in my opinion, the best and worst new features of each of Apple’s platforms. To be clear, these are my completely scientific and totally well-reasoned expert opinions on the features that were announced, not just some off-the-cuff reactions less than a day later.
MG Siegler:
The underlying message that they’re trying to convey in all these interviews is clear: calm down, this isn’t a big deal, you guys are being a little crazy. And that, in turn, aims to undercut all the reporting about the turmoil within Apple — for years at this point — that has led to the situation with Siri. Sorry, the situation which they’re implying is not a situation. Though, I don’t know, normally when a company shakes up an entire team, that tends to suggest some sort of situation. That, of course, is never mentioned. Nor would you expect Apple — of all companies — to talk openly and candidly about internal challenges. But that just adds to this general wafting smell in the air.
The smell of bullshit.
Fun episode of Tested with Adam Savage and Norman Chan. The first segment goes deep on what’s new in VisionOS 26. Apple is ignoring the jokes about the platform’s relative obscurity and has obviously been heads-down on building the platform out and up. VisionOS 26 is a huge year-over-year upgrade. Tons of exciting stuff, and so many little things are so much better.
The second segment of the show features cohost Norm Chan going backstage at The Talk Show Live From WWDC on Tuesday night, to speak with Adam Lisagor about the production details of the live immersive broadcast in Theater.
(The YouTube version of the show is in editing now — we’ll post it as soon as it’s ready. I wrote here originally that the immersive version in Theater is available for purchase now. Whoops, my bad. The immersive version, which looks significantly better than the livestream — which looked pretty good! — will be available for purchase in Theater a few days after the 2D version hits YouTube.)
Jason Snell:
After last year, Apple could’ve been forgiven for wanting to soft-pedal this year’s Apple Intelligence announcements and regroup. It didn’t do that, nor did it double down on last year. Instead, it’s chosen a middle ground — a bit safe and familiar but also a place where Apple can feel a bit more like itself. In the long run, it needs to get this right. In the short term, maybe it should focus on meeting its users where they are, rather than pretending to be something it’s not.
Agree with Snell’s take completely, I do.
Stephen Hackett has a list of the Intel Macs that MacOS 26 Tahoe supports, and the ones they’re dropping support for this year.
Apple has gone through three CPU architecture transitions in the Mac’s history:
With the 68K–PowerPC transition, they supported 68K Macs through Mac OS 8.1, which was released in January 1998.

With the PowerPC–Intel transition, they only supported PowerPC Macs for two Mac OS X versions: Mac OS X 10.4 Tiger (which initially shipped PowerPC-only in 2005) and 10.5 Leopard in October 2007. The next release, 10.6 Snow Leopard in August 2009, was Intel-only. (Mac OS X dropped to a roughly two-year big-release schedule during the initial years after the iPhone, when the company prioritized engineering resources on iOS. It’s easy to take for granted that today’s Apple has every single platform on an annual cadence.)
With next year’s version going Apple Silicon-only, they’ll have supported Intel Macs for five major MacOS releases after the debut of the first Apple Silicon Macs. I think that’s about the best anyone could have hoped for.
Tight 7-minute video at the WSJ (and also at YouTube):
Apple’s AI rollout has been rocky, from Siri delays to underwhelming Apple Intelligence features. WSJ’s Joanna Stern sits down with software chief Craig Federighi and marketing head Greg Joswiak to talk about the future of AI at Apple — and what the heck happened to that smarter Siri.
Update: Here’s the full 24-minute interview. Just an excellent job by Stern.
Ben Thompson:
To that end, while I understand why many people were underwhelmed by this WWDC, particularly in comparison to the AI extravaganza that was Google I/O, I think it was one of the more encouraging Apple keynotes in a long time. Apple is a company that went too far in too many areas, and needed to retreat. Focusing on things only Apple can do is a good thing; empowering developers and depending on partners is a good thing; giving even the appearance of thoughtful thinking with regards to the App Store (it’s a low bar!) is a good thing. Of course we want and are excited by tech companies promising the future; what is a prerequisite is delivering in the present, and it’s a sign of progress that Apple retreated to nothing more than that.
I’ve got iOS 26 installed on a spare phone already, and I like the new UI a lot. In addition to just plain looking cool, Apple has tackled a lot of longstanding minor irritants.
For example, the iOS contextual menu for text selections — the one with Cut/Copy/Paste. For years now there have been a lot of other useful commands in there, including “Share…” at the very end. But to get to the extra commands, you had to tediously swipe, swipe, swipe. Now, with one tap you can expand the whole thing into a vertical menu. Elegant.
There’s some stuff in MacOS 26 Tahoe I already don’t like, like putting needless icons next to almost every single menu item. But overall my first impression of Liquid Glass on MacOS is good too. It’s fun, and lots of little details are nice — joyful and useful in an old-school Mac way.
Stephen Hackett, noting the biggest news of the day:
Something jumped out at me in the macOS Tahoe segment of the WWDC keynote today: the Finder icon is reversed. […]
The Big Sur Finder icon has been with us ever since, and I hope Apple reverses course here.
I’m obviously joking about this being the biggest news of the day, but it really does feel just plain wrong to swap the dark/light sides. The Finder icon is more than an icon, it’s a logo, a brand.
Location: The California Theatre, San Jose
Showtime: Tuesday, 10 June 2025, 7pm PT (Doors open 6pm)
Special Guest(s): Indeed
Price: $50
A different type of show this year, and I’m excited for it. If you can make it, you should come. You’ll even enjoy the prelude, mingling with fellow DF readers and listeners.
Filipe Espósito, in a scoop for 9to5Mac all the way back in October:
9to5Mac has learned details about the new project from reliable sources familiar with the matter. The new app combines functionality from the App Store and Game Center in one place. The gaming app is not expected to replace Game Center. In fact, it will integrate with the user’s Game Center profile.
According to our sources, the app will have multiple tabs, including a “Play Now” tab, a tab for the user’s games, friends, and more. In Play Now, users will find editorial content and game suggestions. The app will also show things like challenges, leaderboards, and achievements. Games from both the App Store and Apple Arcade will be featured in the new store.
Even before Mark Gurman corroborated this report last week, I’ve had a spitball theory about what it might mean. Perhaps this is about more than having one app (Games) for finding and installing games, and another (App Store) for finding and installing apps. It could signal that Apple is poised to establish different policies for apps and games. Like, what if games still use the longstanding 70/30 commission split (with small business developers getting 85/15), but non-game apps get a new reduced rate? Say, 80/20 or even 85/15 right off the top, with small business developers and second-year subscriptions going to 90/10?
Having separate store apps for apps and games would help establish the idea that games and apps are two entirely different markets. Thus: two different stores?
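For illustration, here’s what those splits would mean for a developer’s take of each $100 of gross revenue. To be clear: the 70/30 and 85/15 rates are the real, existing ones; the reduced rates are purely my hypothetical spitballs, not anything Apple has announced.

```python
def developer_proceeds(gross: float, apple_cut_pct: float) -> float:
    """Return what the developer keeps after Apple's commission."""
    return gross * (1 - apple_cut_pct / 100)

# Real rates today, plus the hypothetical reduced rates spitballed above.
splits = {
    "standard (70/30, real)": 30,
    "small business (85/15, real)": 15,
    "hypothetical apps rate (80/20)": 20,
    "hypothetical apps rate (85/15)": 15,
    "hypothetical small biz / year-2 subs (90/10)": 10,
}

for label, cut in splits.items():
    print(f"{label}: developer keeps ${developer_proceeds(100, cut):.0f} per $100")
```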
Update: MG Siegler offered the same spitball — back on May 28. Great minds think alike.
Scharon Harding, writing at Ars Technica:
“Just disconnect your TV from the Internet and use an Apple TV box.”
That’s the common guidance you’ll hear from Ars readers for those seeking the joys of streaming without giving up too much privacy. Based on our research and the experts we’ve consulted, that advice is pretty solid, as Apple TVs offer significantly more privacy than other streaming hardware providers.
But how private are Apple TV boxes, really? Apple TVs don’t use automatic content recognition (ACR, a user-tracking technology leveraged by nearly all smart TVs and streaming devices), but could that change? And what about the software that Apple TV users do use — could those apps provide information about you to advertisers or Apple?
In this article, we’ll delve into what makes the Apple TV’s privacy stand out and examine whether users should expect the limited ads and enhanced privacy to last forever.
tvOS is perhaps Apple’s least-talked-about platform. (It surely has orders of magnitude more users than VisionOS, but VisionOS gets talked about because it’s so audacious.) But it might be their platform that’s the furthest ahead of its competition. Not because tvOS is insanely great, but because it’s at least pretty good, and every other streaming TV platform seems to be in a race to make real the future TV interface from Idiocracy. It’s not just that they’re bad interfaces with deplorable privacy, it’s that they’re outright against the user.
Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar, from Apple’s Machine Learning Research team:
Recent generations of frontier language models have introduced Large Reasoning Models (LRMs) that generate detailed thinking processes before providing answers. While these models demonstrate improved performance on reasoning benchmarks, their fundamental capabilities, scaling properties, and limitations remain insufficiently understood. [...] Through extensive experimentation across diverse puzzles, we show that frontier LRMs face a complete accuracy collapse beyond certain complexities. Moreover, they exhibit a counterintuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget. By comparing LRMs with their standard LLM counterparts under equivalent inference compute, we identify three performance regimes: (1) low-complexity tasks where standard models surprisingly outperform LRMs, (2) medium-complexity tasks where additional thinking in LRMs demonstrates advantage, and (3) high-complexity tasks where both models experience complete collapse. We found that LRMs have limitations in exact computation: they fail to use explicit algorithms and reason inconsistently across puzzles. We also investigate the reasoning traces in more depth, studying the patterns of explored solutions and analyzing the models’ computational behavior, shedding light on their strengths, limitations, and ultimately raising crucial questions about their true reasoning capabilities.
The full paper is quite readable, but today was my travel day and I haven’t had time to dig in. And it’s a PDF so I couldn’t read it on my phone. (Coincidence or not that this dropped on the eve of WWDC?)
My basic understanding after a skim is that the paper shows, or at least strongly suggests, that LRMs don’t “reason” at all. They just use vastly more complex pattern-matching than LLMs. The result is that LRMs effectively overthink on simple problems, outperform LLMs on mid-complexity puzzles, and fail in the same exact way LLMs do on high-complexity tasks and puzzles.
Mark Gurman, in his eve-of-WWDC Power On column at Bloomberg:
The Liquid Glass interface is going to be the most exciting part of this year’s developer conference. It will also be a bit of a distraction from the reality facing Apple: The company is behind in artificial intelligence, and WWDC will do little to change that. Instead, Apple is making its successful operating system franchise more capable and sleek — even as others move on to more groundbreaking AI-centric interfaces.
Perhaps the first major hint that Apple was moving toward fluidity in the UI was the Dynamic Island, which doesn’t merely expand and contract as it changes shape, but rather appears to flow, with a pleasant viscosity.
The best analogy for Apple right now might be the car industry. Apple produces the best gas cars on the road (its operating systems) and is making them even more upscale. It has rolled out a hybrid (Apple Intelligence), but it’s struggling to make a true all-electric vehicle (unlike companies such as OpenAI and Alphabet Inc.’s Google).
This is such a terrible analogy. If you buy an EV, you use it instead of your old gas-powered car. There’s nothing from OpenAI or Google that allows you to forgo a conventional device — phone, tablet, or PC. The only way to use ChatGPT, or Gemini, or Google’s rather amazing Veo 3 video generation tool, is on a phone or computer running iOS, MacOS, Android, Windows, or Linux. Gurman’s analogy would only work if the way you got around in an EV was to put it in the back of a gas-powered flatbed truck.
Gas-powered vehicles are probably going away. I sure hope they do. But cars and trucks aren’t going away. A better analogy is that AI is doing to today’s dominant OSes what web browsers did to Windows in the late 1990s. They’re adding new interactive layers atop the old. Windows didn’t go away. Microsoft still makes tons of money from Windows today. But Windows’s primacy as a platform went away. And: Microsoft pivoted quickly in the face of Netscape and the web’s threat, and created Internet Explorer, which squashed Netscape, and became, for at least a decade, the preeminent web browser. It was essential for Apple to create Safari/WebKit for Mac OS X to thrive. If Apple hadn’t succeeded with WebKit on Mac OS X they wouldn’t have had their own first-class web rendering engine to adapt for a 3.5-inch touchscreen in 2007. The iPhone without the real web wouldn’t have been the iPhone. And the only reason the original iPhone had the real web is that Apple owned and controlled Safari and WebKit.
What Apple, I think, needs for iOS and MacOS is the AI equivalent of what Safari and WebKit were for the web two decades ago. The oft-cited Cook Doctrine says “we need to own and control the primary technologies behind the products we make.” 25 years ago it was obvious that web browsers and rendering engines were primary technologies. Apple certainly couldn’t afford back then to continue to be dependent upon Microsoft for the Mac version of IE, nor on open source cross-platform browsers like Firefox that would never feel native on the Mac (or, more importantly, on future Apple platforms). But Safari and WebKit were, if you think about it, late. They were announced at Macworld Expo in January 2003 (just five months after the debut of this website). Netscape’s blockbuster IPO was in August 1995, over seven years prior. The entire dot-com bubble and bust took place before Safari shipped. The Mac, and thus Apple, made do with non-Apple browsers in those intervening years — browsers that were all some mix of non-native clunky UI, slow, incompatible (with Windows IE), ugly (e.g. IE text rendering on Mac OS X), and often downright unstable. (And application crashes on classic Mac OS would often bring down the entire system.)
The concern for Apple today is that they’re in trouble if it takes six or seven years for them to get to their Safari/WebKit moment for AI. Things are moving faster with AI today than they were with the web in the 1990s. At the peak of Netscape mania in 1995, there were many who believed Netscape would topple Microsoft. At the time Netscape founder Marc Andreessen proclaimed that Netscape would reduce Windows to “a poorly debugged set of device drivers.” That obviously didn’t happen. But perhaps not just a reason, but the reason, why it didn’t happen is that Microsoft quickly built and shipped a better browser than Netscape’s. They didn’t just build a browser into Windows, they built a better browser into Windows. And they made a better browser for the Mac too. If it had taken Microsoft until 2003 (when Apple debuted Safari) to ship IE, computing platform history might well have turned out very differently.
iOS today is the closest to what Windows was circa 1995. iOS doesn’t have Windows’s 95 percent market share, but the iPhone commands something like a monopoly share of the profits in mobile device sales. And iOS is plainly dominant. That’s why there’s all this Sturm und Drang surrounding Apple’s App Store commissions and iron-fisted control over all iOS software. After the announcement last year of OpenAI as a partner for “world knowledge” in Apple Intelligence — and, a year later, they’re still the only partner — Wayne Ma at The Information reported that Apple wasn’t paying a cent for this integration, and that the plan was for OpenAI to eventually begin paying Apple in a revenue sharing deal:
Neither Apple nor OpenAI are paying each other to integrate ChatGPT into the iPhone, according to a person with knowledge of the deal. Instead, OpenAI hopes greater exposure on iPhones will help it sell a paid version of ChatGPT, which costs around $20 a month for individuals. Apple would take its 30% cut of these subscriptions as is customary for in-app purchases.
Sometime in the future, Apple hopes to strike revenue-sharing agreements with AI partners in which it gets a cut of the revenue generated from integrating their chatbots with the iPhone, according to Bloomberg, which first reported details of the deal.
That sounds a lot like the revenue sharing deal Apple has with Google for search in Safari — a deal (which is at some degree of risk from Google’s own antitrust problems) that now results in Google paying Apple over $20 billion per year for the traffic Safari sends to Google Search.
In hindsight, we now know that web browsers, in and of themselves, don’t generate any money directly. Someone was going to give a good one away free, and now almost all of them are free of charge. But that doesn’t mean it isn’t essential for a platform to own and control its own browser. Web search, it turns out, is where the money is on the World Wide Web. Not just some money but an almost unfathomable amount of money. Web search is not a primary technology for Apple’s platforms. But because they own and control Safari and WebKit, and Safari and WebKit are very good (so that most of Apple’s customers use them), Apple is in a position to profit very handsomely from web search, even though it doesn’t even have a search engine to speak of. Apple’s net annual profit the last few years has been around $95 billion. If we assume Google’s $20B/year traffic acquisition revenue sharing payments to Apple are mostly profit, that means roughly 20 percent of all Apple’s profit comes from that deal.
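The back-of-the-envelope arithmetic, using those same two ballpark figures (both estimates, not audited numbers):

```python
# Google's estimated traffic-acquisition payment to Apple vs.
# Apple's approximate recent annual net profit, in billions USD.
google_payment_b = 20.0
apple_profit_b = 95.0

share = google_payment_b / apple_profit_b
print(f"{share:.1%}")  # prints 21.1%
```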
So are LLMs more like browsers (platforms need to own and control their own, but they won’t make money from them directly) or like web search (dominant platforms like Apple’s don’t need their own, but Apple can profit handsomely by charging for integration with their platforms)?
I think the answer is somewhere in between. Browsers are essential to personal computing platforms because they run on-device. Web search isn’t essential to own and control because it runs in the cloud, but exists only to serve users running devices. LLMs run both locally and in the cloud. If it takes Apple as long to have its own competitive LLMs as it did to have its own competitive web browser, I suspect they’ll soon be paying to use the LLMs that are owned and controlled by others, not charging the others for the privilege of reaching Apple’s platform users. No simple analogy captures this dynamic. But the threat is palpable.
I will say, though, “Liquid Glass” sounds cool. ★
From his family, on Atkinson’s Facebook page:
We regret to write that our beloved husband, father, and stepfather Bill Atkinson passed away on the night of Thursday, June 5th, 2025, due to pancreatic cancer. He was at home in Portola Valley in his bed, surrounded by family. We will miss him greatly, and he will be missed by many of you, too. He was a remarkable person, and the world will be forever different because he lived in it. He was fascinated by consciousness, and as he has passed on to a different level of consciousness, we wish him a journey as meaningful as the one it has been to have him in our lives. He is survived by his wife, two daughters, stepson, stepdaughter, two brothers, four sisters, and dog, Poppy.
One of the great heroes in not just Apple history, but computer history. If you want to cheer yourself up, go to Andy Hertzfeld’s Folklore.org site and (re-)read all the entries about Atkinson. Here’s just one, with Steve Jobs inspiring Atkinson to invent the roundrect. Here’s another (surely near and dear to my friend Brent Simmons’s heart) with this kicker of a closing line: “I’m not sure how the managers reacted to that, but I do know that after a couple more weeks, they stopped asking Bill to fill out the form, and he gladly complied.”
Some of his code and algorithms are among the most efficient and elegant ever devised. The original Macintosh team was chock full of geniuses, but Atkinson might have been the most essential to making the impossible possible under the extraordinary technical limitations of that hardware. Atkinson’s genius dithering algorithm was my inspiration for the name of Dithering, my podcast with Ben Thompson. I find that effect beautiful and love that it continues to prove useful, like on the Playdate and apps like BitCam.
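For the curious, Atkinson’s dithering technique is a form of error diffusion: each pixel gets snapped to pure black or white, and the quantization error is spread in eighths to six nearby pixels, with a quarter of the error deliberately discarded (which is what gives the effect its distinctive, higher-contrast look). Here’s a minimal grayscale sketch in Python — my own illustration of the published kernel, not Atkinson’s original code:

```python
def atkinson_dither(pixels):
    """Dither a grayscale image (rows of 0-255 values) to pure
    black and white using the Atkinson error-diffusion kernel:
    1/8 of each pixel's quantization error goes to six neighbors,
    and the remaining 2/8 is intentionally dropped."""
    h, w = len(pixels), len(pixels[0])
    img = [row[:] for row in pixels]  # work on a copy
    # Offsets (dy, dx) that each receive 1/8 of the error.
    kernel = [(0, 1), (0, 2), (1, -1), (1, 0), (1, 1), (2, 0)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 255 if old >= 128 else 0
            img[y][x] = new
            err = (old - new) / 8
            for dy, dx in kernel:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    img[ny][nx] += err
    return img
```

Discarding part of the error, rather than conserving all of it as Floyd–Steinberg does, blows out highlights and shadows slightly, which is a big part of why classic 1-bit Mac images look the way they do.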
In addition to his low-level contributions like QuickDraw, Atkinson was also the creator of MacPaint (which to this day stands as the model for bitmap image editors — Photoshop, I would argue, was conceptually derived directly from MacPaint) and HyperCard (“inspired by a mind-expanding LSD journey in 1985”), the influence of which cannot be overstated.
I say this with no hyperbole: Bill Atkinson may well have been the best computer programmer who ever lived. Without question, he’s on the short list. What a man, what a mind, what gifts to the world he left us.
Kyle Hughes, in a brief thread on Mastodon last week:
At work I’m developing a new iOS app on a small team alongside a small Android team doing the same. We are getting lapped to an unfathomable degree because of how productive they are with Kotlin, Compose, and Cursor. They are able to support all the way back to Android 10 (2019) with the latest features; we are targeting iOS 16 (2022) and have to make huge sacrifices (e.g Observable, parameter packs in generics on types). Swift 6 makes a mockery of LLMs. It is almost untenable.
This wasn’t the case in the 2010s. The quality and speed of implementation of every iOS app I have ever worked on, in teams of every size, absolutely cooked Android. [...] There has never been a worse time in the history of computers to launch, and require, fundamental and sweeping changes to languages and frameworks.
The problem isn’t necessarily inherent to the design of the Swift language, but that throughout Swift’s evolution Apple has introduced sweeping changes with each major new version. (Secondarily, that compared to other languages, a lower percentage of the Swift code that’s written is open source, and thus available to LLMs for use in training corpora.) Swift was introduced at WWDC 2014 (that one again) and last year Apple introduced Swift 6. That’s a lot of major version changes for a programming language in one decade. There were pros and cons to Apple’s approach over the last decade. But now there’s a new, and major, con: because Swift 6 only debuted last year, there’s no great corpus of Swift 6 code for LLMs to have trained on, and so they’re just not as good — from what I gather, not nearly as good — at generating Swift 6 code as they are at generating code in other languages, and for other programming frameworks like React.
The new features in Swift 6 are for the better, but, in a group chat, my friend Daniel Jalkut described them to me as, “I think Swift 6 changed very little, but the little it changed has huge sweeping implications. Akin to the switch from MRR to ARC.” That’s a reference to the change in Objective-C memory management from manual retain/release (MRR) to automatic reference counting (ARC) back in 2011. Once ARC came out, no one wanted to be writing new code using manual retain/release (which was both tedious and a common source of memory-leak bugs). But if LLMs had been around in 2011/2012, they’d only have been able to generate MRR Objective-C code because that’s what all the existing code they’d been trained on used.
I’m quite certain everyone at Apple who ought to be concerned about this is concerned about it. The question is, do they have solutions ready to be announced next week? This whole area — language, frameworks, and tooling in the LLM era — is top of mind for me heading into WWDC next week.
Thomas Ptacek:
LLMs can write a large fraction of all the tedious code you’ll ever need to write. And most code on most projects is tedious. LLMs drastically reduce the number of things you’ll ever need to Google. They look things up themselves. Most importantly, they don’t get tired; they’re immune to inertia.
Think of anything you wanted to build but didn’t. You tried to home in on some first steps. If you’d been in the limerent phase of a new programming language, you’d have started writing. But you weren’t, so you put it off, for a day, a year, or your whole career.
I can feel my blood pressure rising thinking of all the bookkeeping and Googling and dependency drama of a new project. An LLM can be instructed to just figure all that shit out. Often, it will drop you precisely at that golden moment where shit almost works, and development means tweaking code and immediately seeing things work better. That dopamine hit is why I code.
Ptacek says he mostly writes in Go and Python, and his essay doesn’t even mention Swift. But the whole essay is worth keeping in mind ahead of WWDC. There is no aspect of the AI revolution where Apple, right now today, is further behind than agentic LLM programming. (Swift Assist, announced and even demoed last year at WWDC, would have been a first step in this direction, but it never shipped, even in beta.)
My thanks to WorkOS for sponsoring last week at DF. Modern authentication should be seamless and secure. WorkOS makes it easy to integrate features like MFA, SSO, and RBAC. Whether you’re replacing passwords, stopping fraud, or adding enterprise auth, WorkOS can help you build frictionless auth that scales.
New features they launched just last month include:
Future-proof your authentication stack with the identity layer trusted by OpenAI, Cursor, Perplexity, and Vercel.
I don’t use the web interface to Movable Type, my moribund-but-works-just-great CMS, very often. But I was using it today and noticed something odd. Next to the small-text metadata that says I’ve written 35,086 entries in total, it said I had one draft. One. I don’t use the drafts feature in Movable Type — my drafts are stored locally as text files in BBEdit or unpublished posts in MarsEdit. I didn’t recall ever saving a draft in Movable Type, but, I thought to myself, I probably did it from my phone — which is the one device where I do publish and edit posts through the MT web interface because (to my knowledge) there’s no equivalent of MarsEdit for iOS.
It was a Linked List post pointing to Bob Lefsetz’s reaction to the then-new Beats acquisition by Apple for $3 billion, which was considered a lot of money for an acquisition at the time. The blockquote wasn’t fully Markdown-formatted yet — which is sort of tedious for me on the phone, but a single keyboard shortcut in either BBEdit or MarsEdit on my Mac. That’s probably why I left it as a draft. So, just now, I finished the formatting, and changed it from draft to published. Voila — a post I wrote on 1 June 2014 that hadn’t been published until a few minutes ago. I suspect many of you will think Lefsetz’s 2014 remarks on Tim Cook ring more true today than they did then. Others (I’m more in this camp) look at Lefsetz’s 2014 remarks as more than a little absurd — the only mark Jimmy Iovine left at Apple was the record for being the least prepared executive ever to appear on stage in a keynote. He was like Biden at the debate up there.
Lending strong credence to my theory that this forgotten draft was created on my phone is that 1 June 2014 was the Sunday before WWDC 2014, when I’d have been travelling, and thus using my phone for posting. Funny coincidence that I happened to notice it today, on the cusp of WWDC 2025.
A brief follow-up to my love letter to Apple’s discontinued MagSafe Battery Pack this week. I wrote:
They’re the only Lightning devices left in my life and they’re so good I’m happy to still keep one Lightning cable in my travel bag to use them.
Among its other unique bits of cleverness, Apple’s MagSafe Battery Pack supports reverse charging: when attached to your phone, you can plug the charging cable into the phone, and after the phone reaches 100 percent charge, the phone will recharge the connected battery pack. So, if you own a MagSafe Battery Pack, you can recharge it even if you don’t have a Lightning cable handy. Just attach it to your iPhone and plug your USB-C cable into the phone, not the battery pack. I’m not aware of any other battery packs that support this.
That said, I still keep that one Lightning cable in my travel bag for the MagSafe Battery Pack because I want to be able to charge it whenever I want. Like, say, if I want to leave it behind, recharging, while I go elsewhere with my iPhone. Also, I like using the MagSafe Battery Pack as my bedside MagSafe charger. I like being able to check my phone from bed without worrying about a cable. In fact, I use one of my MagSafe Battery Packs as my bedside charger at home, not just while travelling.
Such a great little device. Really hope they make a sequel.
WhatsApp, on their official blog back in April 2023:
Last year, we introduced the ability for users globally to message seamlessly across all their devices, while maintaining the same level of privacy and security.
Today, we’re improving our multi-device offering further by introducing the ability to use the same WhatsApp account on multiple phones.
A feature highly requested by users, now you can link your phone as one of up to four additional devices, the same as when you link with WhatsApp on web browsers, tablets and desktops. Each linked phone connects to WhatsApp independently, ensuring that your personal messages, media, and calls are end-to-end encrypted, and if your primary device is inactive for a long period, we automatically log you out of all companion devices.
When I wrote about WhatsApp finally shipping for iPad earlier this week, I mentioned that you couldn’t use a secondary phone as a linked device to your primary phone. That used to be true, but obviously, I missed that this changed two years ago. Glad to know it. I’ve already added my Android burner and my spare iPhone that I use for summer iOS betas. WhatsApp has a support document on linking devices that explains the somewhat hidden way you do this with a secondary phone. My thanks to several readers who pointed me to this.
This makes it seem all the more spiteful, though, that Meta didn’t allow the iPhone version of WhatsApp to run on iPads (like they do with the still-iPhone-only Instagram app). I heard from a little birdie this week — second- or maybe even third-hand, so take it with a grain of salt — that Meta had this WhatsApp for iPad version ready to go for a while, and has been more or less sitting on an iPad version of Instagram, as a couple negotiating chits with Apple. Negotiating for what, I don’t know. But if that’s true, perhaps some (but definitely not all) of the ice has thawed between the two companies. I don’t see it happening, but it sure would get a big audience response if Instagram for iPad got some sort of announcement during the WWDC keynote, perhaps as part of an “iPadOS is now a fuller, more complete, computing experience than ever” segment.
One other oddity I encountered, when adding my Android phone as a linked device: by design, there is no way to sign out of WhatsApp on your primary iOS or Android device. If you are signed in to WhatsApp using another phone number, the only way to sign out on that device and then set it up as a linked device to your primary WhatsApp account is to delete WhatsApp from your phone and reinstall it. Weird.
Tom Warren, writing for The Verge:
“The experience supports Markdown style input and files for users who prefer to work directly with the lightweight markup language,” explains Dave Grochocki, principal product manager lead for Microsoft’s Windows inbox apps. “You can switch between formatted Markdown and Markdown syntax views in the view menu or by selecting the toggle button in the status bar at the bottom of the window.”
Since Notepad is usually used with plain text, you can also easily clear all formatting from the formatting toolbar or from the edit menu in the app. If you’re not a fan of the lightweight formatting options, you can also fully disable this new support in the Notepad app settings. [...]
Like I wrote in my Notepad newsletter earlier this week, it’s amazing that Microsoft barely touched Notepad for decades, and now it’s gone from basic log file reader to writing messages itself. A lot of Notepad’s new features have arrived since Microsoft decided to remove WordPad from Windows, after nearly 30 years.
This is getting ridiculous.
I posted this update a bit ago, but it’s worth making a separate post so you don’t miss it if you read the original post before I added the update:
It goes without saying that any consumer survey is only as good as the surveyor. But CIRP, in particular, has posted some dubious ones, to say the least. Jeff Johnson pointed out on Mastodon that back in 2023, CIRP published a survey that claimed the Mac Pro accounted for 43 percent of all Mac desktop sales, with the Mac Mini and Mac Studio accounting for only 4 percent each. That’s just bananas. That’s not maybe-it’s-wrong wrong, that’s not gotta-be-a-little-wrong wrong, that’s how-could-anyone-publish-this wrong. It’s hard to believe anything from CIRP after they published that.
I think it’s become tradition for Mark Gurman to run a mega spoiler report on the WWDC keynote the Friday before. Don’t read it if you don’t want to see a lot of genuine spoilers. But here are a few non-spoilers:
The AI changes will be surprisingly minor and are unlikely to impress industry watchers, especially considering the rapid pace of innovation by Alphabet Inc.’s Google, Meta Platforms Inc., Microsoft Corp. and OpenAI.
I don’t know a single person who will be surprised if Apple’s in-house AI changes are minor. Literally, not one. The only way for Apple to surprise on the AI front would be for the improvements to be major. Who’s the guy who will be surprised by underwhelming advances on the AI front from Apple next week? Artie MacStrawman?
While there has been speculation that the app icons will be round to match the style on the Apple Watch and Vision Pro, the shape is staying largely the same on the iPhone and iPad.
Always beware the passive voice. “There has been speculation”? It was Gurman’s own report, back in March, that left some with the decided impression that Apple was making icons circular across all platforms under the nonsensical argument that users find it jarring to see differently-shaped icons on different devices. Gurman, back in March:
A key goal of the overhaul is to make Apple’s different operating systems look similar and more consistent. Right now, the applications, icons and window styles vary across macOS, iOS and visionOS. That can make it jarring to hop from one device to another. [...]
VisionOS differs from iOS and macOS in the use of circular app icons, a simplified approach to windows, translucent panels for navigation, and a more prominent use of 3D depth and shadows.
My guess is that if Apple does go with circular icons across all platforms next week (which I sure hope they don’t because that seems dumb), Gurman will take credit for calling it back in March, despite writing today that “the shape is staying largely the same”. Heads, Gurman wins. Tails, Gurman doesn’t lose.
Back to today’s mega-spoiler report:
The Camera app will be revamped with a focus on simplicity. Apple has added several new photo and video-taking options in recent years — including spatial video, panorama and slow-motion recording — and that’s made today’s interface a bit clunky. In iOS 26 and iPadOS 26, Apple is rethinking the approach.
I can’t recall seeing Gurman ever, not even once, crediting anyone else for scooping anything first. Jon Prosser made an entire video about the supposed new Camera app design all the way back on January 17, replete with animated mockups of how it will look and work. (Looks pretty clever to me, starting with a back-to-basics simple focus on two main modes — Photo or Video — and putting all other sub-modes under those.)
Sarah Perez, TechCrunch, “The Trump-Musk Feud Has Been Great for X, Which Jumped Up the App Store Charts”:
The feud between Elon Musk and President Donald Trump may be bad for the MAGA camp, but it’s proven to be beneficial for X, which has seen engagement soaring over the past 24 hours. According to new data from Sensor Tower, the app formerly known as Twitter skyrocketed up the U.S. App Store’s top charts, landing at the 23rd overall spot on June 5 — up from an average rank of No. 68 over the last 30 days.
X also saw an average rank of No. 58 over the past six months.
I wouldn’t call jumping from the 60s to the 20s “skyrocketing”, but, up is up.
Trump’s own social network, Truth Social, benefited from the feud as well, as the president shared his thoughts about Musk with his followers. Compared with the last seven days, U.S. mobile app active users on Truth Social increased by more than 400%, Sensor Tower’s proprietary panel indicates.
However, X is still much larger than Truth Social, Sensor Tower notes, as it has 100× more U.S. mobile app users than Trump’s social network.
In a couple of recent posts I’ve referred to Truth Social as Trump’s “blog”. (I expounded upon this in a recent episode of The Talk Show, with Stephen Hackett, starting after the 1h:20m mark, so if you listened to that, the following will sound familiar.) Since long before this easily predictable breakup of two unstable sociopathic egomaniacs (and who would be genuinely surprised if Musk is back in the Oval Office next week?), I’ve wondered about one particular aspect of their alliance. To wit: Musk famously spent $44 billion to buy Twitter/X, and envisions it as a world-conquering “everything app”. Trump was once the best-known user on Twitter, but after being kicked off every legitimate social network for trying to overthrow the 2020 election, he started his own Twitter-like network, Truth Social. Ostensibly, the two are in direct competition with each other.
Did Musk ever pitch Trump on shutting Truth Social down and coming back to X full time? How about buying Truth Social for a bribery price of a cool $1 billion to sweeten the deal? Truth Social isn’t really a functional social network. Let’s stipulate that Truth Social has 1 percent the active US mobile users of X. Even that’s a sham. Nobody of note other than Trump himself uses Truth Social. For all the pathetic obsequiousness of every single lickspittle official serving in the Trump 2.0 administration, none of them are even vaguely active on Truth Social. They’re all still on X. The Justice Department posts to its official account on X multiple times per day, but the DOJ doesn’t even have an account on Truth Social. OK, JD Vance has posted to his Truth Social account about a dozen times in the last month. But he’s posted to X five times today.
What I’ve realized is that Truth Social is essentially just Trump’s blog. Truth Social is exceedingly unpopular judged as a social network, but exceedingly successful as a blog. All the other people using Truth Social are effectively reading his blog and shitposting comments and memes in response to his posts, trying to get his attention. I’ve been thinking about this for a few weeks, and in that time, Trump’s own posts on Truth Social have made the news on a near-daily basis. I’ve never once, ever, seen a post from anyone else on Truth Social make the news. Trump is not just the one and only person of consequence using it; his is the one and only account on Truth Social that you ever, ever hear about.
If Truth Social were actually meant to compete with X, Threads, Bluesky, and Mastodon, this almost certainly would have been a source of conflict between Trump and Musk. Because, if it were meant to be an actual competitive social network, it would occur to Trump to require all his flunkeys and toadies not only to post to Truth Social, but to stop posting to X. But he hasn’t done that, because Truth Social is functioning as intended: it’s just an outlet for Trump to spew his demented mad-king musings (today he’s retweeting calls for him to be added to Mount Rushmore) and, most importantly, get some of his all-caps-laden bangers read aloud on the TV news. ★