By John Gruber
CoverSutra Is Back from the Dead — Your Music Sidekick, Right in the Menu Bar
New app, spearheaded by Ev Williams:
Mozi is a private social network for seeing your people more, IRL. Add your plans, check who’s in town, and know when you overlap.
iOS only at the moment, with “Sign in with Apple” as the only supported authentication method. One clever idea is that you can share travel plans and your location, and Mozi will coordinate when you might be in the same area as a friend. From their FAQ:
Why do you need access to my contacts? Will you ever contact people in my phone book?
Never. We ask for access to your contacts so that you can connect with the people you already know on Mozi. In order to see someone on Mozi, you have to both be on Mozi and both have one another saved as iOS contacts. We never send, sell or share any of your information, and we will never contact your people on your behalf. And instead of storing any actual phone numbers, we hash (encrypt) them. This ensures both your number and all your contacts remain anonymized and protected.
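For what it’s worth, the contact-matching scheme that FAQ describes (hashing phone numbers on-device and comparing only the hashes) is a standard privacy technique, though strictly speaking hashing is not encryption. Here’s a minimal sketch of the general idea, assuming SHA-256 over normalized numbers. Mozi hasn’t published its actual implementation, so every detail below is an assumption:

```swift
import CryptoKit
import Foundation

// Reduce a phone number to its digits (a stand-in for real E.164
// normalization), then hash it. Only the hash would ever leave the
// device; the number itself never would.
func contactToken(for phoneNumber: String) -> String {
    let digits = phoneNumber.filter(\.isNumber)
    let digest = SHA256.hash(data: Data(digits.utf8))
    return digest.map { String(format: "%02x", $0) }.joined()
}

// Two users "match" if each has the other saved as a contact and
// their tokens for that number agree.
let mine = contactToken(for: "+1 (215) 555-0100")
let theirs = contactToken(for: "1 215 555 0100")
print(mine == theirs) // true: same number, same token
```

(A real system would presumably layer more on top of this, such as salting or private set intersection, since the phone-number keyspace is small enough to brute-force; but that is the basic shape of what the FAQ is describing.)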
I’m in, and so far only have three mutuals. But — all three of them are people whose in-person company I truly enjoy. We’ve all, correctly, got our guards up regarding new “social” platforms that want our personal information, but we’ve collectively become so cynical that I worry people don’t even want to try fun new things like Mozi. Ev Williams is uniquely placed to make something like this happen in a trustworthy way.
I’m mostly rooting for Mozi to succeed because I think something like this could work in a way that has nothing but upsides, and there’s nothing like it today. But I’m also rooting for Mozi to take off just to burst the absurdity of Kevin Roose’s October piece in the New York Times trying to make the case that Apple “killed social apps” by increasing the privacy controls for our iOS contacts. I gladly shared my whole contacts list with Mozi, based on the track record of the team and the FAQ quoted above.
Speaking of the NYT, Erin Griffith wrote a profile of Williams for the launch of Mozi:
“The internet did make us more connected,” he said in an interview in Menlo Park, Calif. “It just also made us more divided. It made us more everything.”
Mozi is meant to be a utility. If a user wants to message a friend in the app to make plans, the app directs them to the phone’s texting app.
One last Letterman link: a new half-hour interview about interviewing with Zach Baron for GQ. I watched the first minute and I’m saving the rest for tonight:
Baron: If you read pieces about you — pieces of press, profile stuff like that — from the ’80s and ’90s, even a little bit in the 2000s, you were often portrayed as miserable.
Letterman: (laughs uproariously) Yeah, that’s great. I love that.
The New York Times:
George J. Kresge, who as the entertainer the Amazing Kreskin used mentalist tricks to dazzle audiences as he rose to fame on late-night television in the 1970s, died on Tuesday in Wayne, N.J. He was 89. A close friend, Meir Yedid, said the death, at an assisted living facility, was from complications of dementia.
Kreskin’s feats included divining details of strangers’ personal lives and guessing at playing cards chosen randomly from a deck. And he had a classic trick at live shows: entrusting audience members to hide his paycheck in the auditorium, and then relying on his instincts to find it — or else going without payment for a night.
Somehow his first appearance with Letterman wasn’t until 1990, but after that he was a regular. Just a canonical “late night talk show guest” of that era. He was good at the mentalist tricks, but what made Kreskin great — amazing even — was that he was just such a weird, fun, and funny guy.
In addition to two choices for t-shirts, the new DF Paraphernalia store also has the above hoodies, which are pretty nice, I have to say. I particularly like the drawstrings, which are much more substantial, almost rope-like, than the shoelace-like strings on most hoodies. I wear mine a lot, especially in the winter, as an extra layer. You’d look good in one.
Here’s the thing. The store will not be open year-round. We’re taking orders now, printing to meet demand, and then we’re going to close it down. Order tonight or tomorrow, and if you’re in the U.S., yours should arrive before Christmas. International orders — even those ordered by our good neighbors in Canada — most likely will not.
Wayne Ma and Qianer Liu, in a piece today for The Information (paywalled up the wazoo, sadly), “Apple Is Working on AI Chip With Broadcom”:
Apple is developing its first server chip specially designed for artificial intelligence, according to three people with direct knowledge of the project, as the iPhone maker prepares to deal with the intense computing demands of its new AI features. Apple is working with Broadcom on the chip’s networking technology, which is crucial for AI processing, according to one of the people. If Apple succeeds with the AI chip — internally code-named Baltra and expected to be ready for mass production by 2026 — it would mark a significant milestone for the company’s silicon team. [...]
Broadcom typically doesn’t license its intellectual property, choosing instead to sell chips directly to customers. In its arrangement with Google, for instance, Broadcom translates Google’s AI chip blueprints into designs that can be manufactured, oversees its production with TSMC and sells the finished chips to Google at a markup.
But Broadcom appears to be taking a different tack with Apple. Broadcom is providing a more limited scope of design services to Apple while still providing the iPhone maker with its networking technology, one of the people said. Apple is still managing the chip’s production, which TSMC will handle, another person said. Additional details of the business arrangement couldn’t be learned [sic]1
I’ll go out on a limb and say that it’s Apple choosing to take a different tack with Broadcom than Google did, rather than a choice in any way driven by Broadcom. The Information’s own “arrangement with Google” link above points to this year-ago report that opens: “Google executives have extensively discussed dropping Broadcom as a supplier of artificial intelligence chips as early as 2027, according to a person with direct knowledge of the effort. In that scenario, Google would fully design the chips, known as tensor processing units, in-house, the person said. The move could help Google save billions of dollars in costs annually as it invests heavily in AI development, which is especially pricey compared to other types of computing.” Why would Apple ever agree to an arrangement like that?
The hint of obsequiousness to Broadcom suggests to me, pretty clearly, that it’s sources from Broadcom who provided the leaks for this story.
Anyway, what really caught my eye in this report wasn’t the AI server chips, but rather the following (emphasis to key paragraph added), included seemingly only as an aside even though I thought it was the most interesting nugget in the report (vague shades of Fermat’s Last Theorem):
Apple’s silicon design team in Israel is leading development of the AI chip, according to two of the people. That team was instrumental in designing the processors Apple introduced in 2020 to replace Intel chips in Macs.
Apple this past summer canceled the development of a high-performance chip for Macs — consisting of four smaller chips stitched together — to free up some of its engineers in Israel to work on the AI chip, one of the people said, highlighting the company’s shifting priorities.
To make the chip, Apple is planning to use one of TSMC’s most advanced manufacturing processes, known as N3P, said three people with direct knowledge. That would be an improvement over the manufacturing process used for Apple’s latest computer processor, the M4.
What they’re talking about regarding a cancelled high-end Mac chip would be a hypothetical M-series chip with (effectively) double the specs of an Ultra, which I presume would only be available in a future Mac Pro, and, just pulling adjectives from Apple’s marketing dictionary, I’d bet would be called the “M# Extreme” (where “#” is the M-series generation number). The M1 and M2 Ultra chips are, effectively, two M1/M2 Max chips fused together with something called a silicon interposer that offers extremely high-speed I/O between the fused chips. Performance doesn’t exactly double, but it comes close. A hypothetical quad-Max “Extreme” would effectively double the performance of the same-generation Ultra chips. Such a chip, available exclusively in the Mac Pro, would give the Mac Pro a much more obvious reason to exist alongside the Mac Studio (which, to date, has offered Max and Ultra chip configurations).
But if Apple’s work on that quad-interposed M-series chip was cancelled only “this past summer”, and was for a generation of chips using TSMC’s next-generation N3P process, that would mean it was slated for the M5 or M6 generation, not the M4.2 The M4 generation is fabbed using TSMC’s N3E process, and any additional variants beyond the M4 Max, slated for updated Mac Studios and Mac Pros next year, were designed long before this past summer.
I feel like it’s a lock that there will be an M4 Ultra chip next year, offering the performance of two M4 Max chips fused together. Or perhaps the M4 Ultra will be a standalone design, not two Max chips fused — the M-series Max chips have always been their own designs, not two Pro chips fused together. The same could prove true for Ultra chips, starting next year or some generation further into the future.
But I’ve had my fingers crossed that we’ll also see an “M4 Extreme” — or whatever Apple would decide to call a tier above “Ultra” — sooner rather than later. If The Information’s reporting is correct, however, either we’ll see a quad-Max M4 chip next year and the tier will then skip a generation because the engineering team was redirected to these AI server chips, or those engineers were working on what would have been the first quad-Max M-series chip, and any such chip has now been punted even further into the future, if ever. Today’s report has me thinking, sadly, that it could be a few years off, at the soonest. ★
That sic is for the missing sentence-ending period. I expect better copy editing from a $400/year subscription (soon going to $500) that keeps badgering me, every time I visit the site, to upgrade to a $1,000/year “Pro” subscription tier. But while I’m slagging on The Information for this sentence, the missing period is the least of its problems. “Additional details of the business arrangement couldn’t be learned” is some passive voice bullshit. What they mean is that Wayne Ma and Qianer Liu were unable to learn any additional details, not that additional details of the business arrangement between Apple and Broadcom are some sort of unknowable information — you know, like the answer to why I continue paying so much money to subscribe to a publication that annoys me. ↩︎
Or even the M7 generation. The lead times on chip designs are measured in years, plural. Back in July 2023, just after the release of the M2-generation Mac Studio models (offering the M2 Max and M2 Ultra) and the first — and so far only — Apple silicon Mac Pro (M2 Ultra), Jason Snell and Myke Hurley got the following tidbit from an anonymous listener of their podcast Upgrade (episode 468; transcript). Hurley read it on air, right up front around the 4:00 mark:
I am an Apple engineer working on the GPU team.
It pains me to say that Jason’s speculation is correct. The quad chip has been canned with no plans to return. For context, we are actively developing what will presumably be the M5 chip. And the quad chip was only ever specced for the M1 and removed late in the project. There are no plans to create a quad chip through at least the M7 generation. My understanding is that the quad required too much effort for too small of a market. Something interesting that may come in the M8 in future generations is called multi-die packaging. This allows the CPU and GPU parts of the chip to be fabricated on different dies and packaged together much like how two max chips make an ultra. With this design, it is conceivable that we could have three, four, or five or more GPU dies with one or two CPU for a graphics powerhouse or vice versa for a CPU workstation that doesn’t need as much GPU grunt. However, as far as I know, no such plans exist yet.
Take that with however many grains of salt you think necessary to season a comment from an anonymous person, but it doesn’t hit a single false note to my ears. And if this little Upgrade birdie was legit, that would suggest that the Israeli chip engineers reassigned from an advanced 4× Mac chip this past summer to work on a new AI server chip would have been working on the M6 generation of Apple silicon, for products launching in 2026–2027. ↩︎︎
Erik Hayden, reporting for The Hollywood Reporter:
For his next move, David Letterman is jumping in to the increasingly crowded free, ad-supported TV channel (FAST) space.
The late-night great’s production company Worldwide Pants has inked a deal with Samsung TV Plus to bring around 4,000 hours of original video to the company’s streaming service, the firms said Wednesday. “I’m very excited about this,” stated Letterman, who glibly added, “Now I can watch myself age without looking in the mirror!”
The output for the 24/7 on-demand channel titled Letterman TV appears to rely heavily on archival clips from his nearly 33-year late-night run, including his CBS Late Show Top Ten lists, “Stupid Tricks” segments, interviews with stars, holiday specials and behind-the-scenes clips along with fresh commentary from Letterman, presumably on all the above.
I don’t know how different this will be from Letterman’s excellent YouTube channel, but honest to god I’d never even heard of “Samsung TV Plus” until reading this.
Juli Clover at MacRumors:
Apple today made a mistake with its macOS Sequoia 15.2 update, releasing the software for two Macs that have yet to be launched. There is a software file for “Mac16,12” and “Mac16,13,” which are upcoming MacBook Air models.
The leaked software references the “MacBook Air (13-inch, M4, 2025)” and the “MacBook Air (15-inch, M4, 2025),” confirming that new M4 MacBook Air models are in development and are likely not too far off from launching.
It’s been widely rumored that Apple is working to bring the M4 chips to its entire Mac lineup, and the MacBook Air is expected to get an M4 refresh in the spring of 2025, so sometime between March and June.
Were these references not in the 15.2 betas? If not, what a weird mistake to happen only in the release builds. But regardless, even inside Apple, I’d file this under “no big whoop”. Of course there are going to be M4-based MacBook Airs next year. The only question is when. My guess is March, just like last year.
David Ingram, reporting for NBC News:
U.S. Bankruptcy Judge Christopher Lopez said after a two-day hearing that The Onion’s parent company, Global Tetrahedron, had not submitted the best bid and was wrongly named the winner of an auction last month by a court-appointed trustee.
“I don’t think it’s enough money,” Lopez said in a late-night ruling from the bench in a Houston court. “I’m going to not approve the sale.”
It’s not over ’til it’s over.
Brandon Silverman:
It was September of 2011 and I saw a link on kottke.org to a small collection of incredible typography from something called the Sanborn Fire Insurance Maps. I had never seen them before and they blew my mind. I immediately became a massive fan and in fact, when I got married, my wife and I designed our wedding invitation based off of them.
However, there has never been a place to see all of the art from the maps in one place. Until now.
This website is a free archive dedicated exclusively to creating a one-stop shop for all the incredible typography and art of the Sanborn maps. It includes almost 3,500 unique decorative titles, all drawn before 1923. While large portions of the original maps have been digitized and archived in various places both online and offline, there has never been a comprehensive collection of all of the decorative titles from the Sanborn maps. I hope you enjoy!
I just love this style of turn-of-the-century typography and graphic design. (The last turn of the century, that is.) In our era, this style has been used to wonderful effect by the great Chris Ware.
Via, no surprise, Kottke. What comes around goes around.
Finally, Daring Fireball t-shirts and hoodies are back. Order now, and we’ll start printing shirts at the end of this week. U.S. domestic orders placed by the end of the day Wednesday should arrive before Christmas. International orders — even those ordered by our good neighbors in Canada — most likely will not.
Mark Gurman, in his Power On column for Bloomberg:
Apple is now working on a major effort to support third-party hand controllers in the device’s visionOS software and has teamed up with Sony Group Corp. to make it happen. Apple approached Sony earlier this year, and the duo agreed to work together on launching support for the PlayStation VR2’s hand controllers on the Vision Pro. Inside Sony, the work has been a monthslong undertaking, I’m told. And Apple has discussed the plan with third-party developers, asking them if they’d integrate support into their games. [...]
One hiccup is that Sony doesn’t currently sell its VR hand controllers as a standalone accessory. The company would need to decouple the equipment from its own headset and kick off operations to produce and ship the accessory on its own. As part of the arrangement, Sony would sell the controllers at Apple’s online and retail stores, which already offer PS5 versions.
My thanks to 1Password — which, earlier this year, acquired frequent DF sponsor Kolide — for sponsoring last week at DF. Imagine if you went to the movies and they charged $8,000 for popcorn. Or, imagine you got on a plane and they told you that seatbelts were only available in first class. Your sense of outraged injustice would probably be something like what IT and security professionals feel when a software vendor hits them with the dreaded SSO tax — the practice of charging an outrageous premium for Single Sign-On, often by making it part of a product’s “enterprise tier”. The jump in price can be astonishing — one CRM charges over 5000% more for the tier with SSO. At those prices, only very large companies can afford to pay for SSO. But the problem is that companies of all sizes need it.
Until outraged customers can shame vendors into getting rid of the tax, many businesses have to figure out how to live without SSO. For them, the best route is likely to be a password manager, which also reduces weak and re-used credentials, and enables secure sharing across teams. And a password manager is likely a good investment anyway, for apps that aren’t integrated with SSO. To learn more about the past, present, and future of the SSO tax, read 1Password’s full blog post.
While there is no subscription offering for Daring Fireball (never say never again), I am reminded this week to remind you that, if you enjoy podcasts, you should subscribe to Dithering, the twice-weekly 15-minutes-on-the-button podcast I do with Ben Thompson. Dithering as a standalone subscription costs just $7/month or $70/year. People who try Dithering seem to love it, too — we have remarkably little churn.
Recording the show often helps me coagulate loose ideas into fully-formed thoughts. Both my Tuesday column on Intel’s decline and today’s on using generative AI for research were inspired by our discussion on the show the night before. I toss a lot of takes out on Dithering that never make it here, though. If you’re on the fence, subscribe for a month and you’re only out $7 — but I bet you’ll stick around. Trust me. And thanks to everyone who’s already subscribed.
Late-breaking candidate for best new font of 2024.
Elizabeth Lopatto, writing for The Verge, “Stop Using Generative AI as a Search Engine”:
Now, a defender of AI might — rightly — say that a real journalist should check the answers provided by ChatGPT; that fact-checking is a critical part of our job. I agree, which is why I’ve walked you through my own checking in this article. But these are only the public and embarrassing examples of something I think is happening much more often in private: a normal person is using ChatGPT and trusting the information it gives them.
A mistake, obviously.
One advantage old-school Google Search has over the so-called answer engines is that it links directly to primary sources. Answer engines just give you an answer, and it’s often unclear what the source is. For me, using ChatGPT or Google’s AI function creates extra work — I have to go check the answer against a primary source; old Google Search just gave me that source directly.
Lopatto’s piece was prompted by a spate of historical bullshit people have been inadvertently propagating after asking generative AI systems for historical examples of presidents granting pardons to family members. The most notable example: a column by Charles P. Pierce at Esquire this week — now fully retracted — whose entire premise was a supposed pardon granted by George H.W. Bush to his black-sheep son Neil Bush. No such pardon was granted.1
Lopatto’s piece is excellent, particularly the way she shows her own work. And the entire premise of her piece is that people are, in fact, embarrassing themselves (in Pierce’s case, spectacularly) and inadvertently spreading misinformation by blindly trusting the answers they’re getting from generative AI models. But I think it’s wrong to argue flatly against the use of generative AI for research, as she does right in her headline. I’ve been late to using generative AI as anything other than a toy curiosity, but in recent months I’ve started using it for work-related research. And now that I’ve started, I’m using it more and more. My basic rule of thumb is that if I’m looking for an article or web page, I use web search (Kagi); if I’m looking for an answer to a question, though, I use ChatGPT (4o). I direct (and trust) ChatGPT as I would a college intern working as a research assistant. I expect accuracy, but assume that I need to double-check everything.
Here’s how I prompted ChatGPT, pretending I intended to write about this week’s political controversy du jour:
Give me a list of U.S. presidential pardons granted to family members, friends, administration officials, and cronies. Basically I’m looking for a list of controversial pardons. I’m interested in the totality of U.S. history, but particularly in recent history, let’s say the last 100 years.
ChatGPT 4o’s response was good: here’s a link to my chat, and an HTML transcript and a screenshot. (Only the screenshot shows where ChatGPT included sources.) I’m quite certain ChatGPT’s response is completely true, and it strikes me as a fair summary of the most controversial pardons in my lifetime. My biggest quibble is that it omits Trump’s pardon of Steve Bannon, a truly outrageous pardon of a genuine scumbag who was an official White House advisor. (Bannon was indicted for a multi-million dollar scheme in which he scammed thousands of political donors into believing they were contributing funds to help build Trump’s fantasy “border wall”.) However, my asking “Any more from Trump?” as a follow-up resulted in a longer list of 13 pardons, all factual, that included Bannon.2
I want to make clear that I don’t think Lopatto is in any way a head-in-the-sand Luddite. But all of the arguments being made today against using generative AI to answer questions sound exactly like the arguments against citing web pages as sources in the 1990s. The argument then was basically “Anyone can publish anything on the web, and even if a web page is accurate today, it can be changed at any time” — which was true then and remains true today.3 Generative AI is simply a new technology — one that isn’t going anywhere, because it’s incredibly useful in ways nothing else is — and its inherent downsides will force us to adapt and learn new ways of sourcing, citing, and verifying information. The rise of the web didn’t make libraries go away. Generative AI won’t make web search go away.
If I had wanted to write a column about presidential pardons, I’d find ChatGPT’s assistance a far better starting point than I’d have gotten through any general web search. But to quote an adage Reagan was fond of: “Trust, but verify.” ★
Worth noting this from Lopatto: “I emailed Hearst to ask if Esquire writer Charles P. Pierce had used ChatGPT as a source for his article. Spokesperson Allison Keane said he hadn’t and declined to say anything further about how the error might have occurred.” I find it unlikely that generative AI wasn’t involved somewhere in the chain of this falsehood that Bush pardoned his son, but whatever Pierce referenced to come upon it, he fucked up good. ↩︎
One small curiosity is that ChatGPT’s list, while mostly chronological, swapped Carter and Ford. One small amusement is that the only supposedly controversial pardon ChatGPT came up with for Ronald Reagan was New York Yankees owner George Steinbrenner. A complicated man, The Boss was. ↩︎︎
Who’s to say a dog doesn’t have useful information to provide? ↩︎︎
Purely fun, pay-whatever-you-think-fair app for the Mac from Simon Støvring (developer of numerous fine apps such as Runestone and Scriptable):
Festivitas automatically adds festive lights to your menu bar and dock upon launch and you can tweak their appearance to match your preferences.
There is something very core to the Mac’s origins about not just making a software toy like this, but putting effort into making everything about it really nice. Harks back to Steven Halls’s The Talking Moose and, of course, the undisputed king of the genre, Eric Shapiro’s The Grouch. Oh, and of course (thanks to Stephen Hackett for the reminder), Holiday Lights.
Update, Friday 6 December: Today’s 1.1 update brings several improvements, including making the lights look way cooler if your Dock is on the left or right (as god intended).
David Frum, writing at The Atlantic, regarding his jarring appearance as a guest on MSNBC’s Morning Joe:
Before getting to the article, I was asked about the nomination of Pete Hegseth as secretary of defense — specifically about an NBC News report that his heavy drinking worried colleagues at Fox News and at the veterans organizations he’d headed. [...] I answered by reminding viewers of some history:
In 1989, President George H. W. Bush nominated John Tower, senator from Texas, for secretary of defense. Tower was a very considerable person, a real defense intellectual, someone who deeply understood defense, unlike the current nominee. It emerged that Tower had a drinking problem, and when he was drinking too much he would make himself a nuisance or worse to women around him. And for that reason, his nomination collapsed in 1989. You don’t want to think that our moral standards have declined so much that you can say: Let’s take all the drinking, all the sex-pesting, subtract any knowledge of defense, subtract any leadership, and there is your next secretary of defense for the 21st century.
I told this story in pungent terms. It’s cable TV, after all. And I introduced the discussion with a joke: “If you’re too drunk for Fox News, you’re very, very drunk indeed.”
At the next ad break, a producer spoke into my ear. He objected to my comments about Fox and warned me not to repeat them. I said something noncommittal and got another round of warning. After the break, I was asked a follow-up question on a different topic, about President Joe Biden’s pardon of his son. I did not revert to the earlier discussion, not because I had been warned, but because I had said my piece. I was then told that I was excused from the studio chair. Shortly afterward, co-host Mika Brzezinski read an apology for my remarks.
Jesus. The abject obsequiousness is staggering. Yes, it’s a joke at Fox News’s expense. But Fox News — on-air — has indeed been backing Hegseth’s nomination, even though it’s quite obvious that everyone who works there knows he has an alcohol problem. From that NBC News report (note that despite their names, the MSNBC and NBC News newsrooms are no longer associated):
Pete Hegseth, President-elect Donald Trump’s pick for defense secretary, drank in ways that concerned his colleagues at Fox News, according to 10 current and former Fox employees who spoke with NBC News. Two of those people said that on more than a dozen occasions during Hegseth’s time as a co-host of Fox & Friends Weekend, which began in 2017, they smelled alcohol on him before he went on air. Those same two people, plus another, said that during his time there he appeared on television after they’d heard him talk about being hungover as he was getting ready or on set.
One of the sources said they smelled alcohol on him as recently as last month and heard him complain about being hungover this fall. None of the sources with whom NBC News has spoken could recall an instance when Hegseth missed a scheduled appearance because he’d been drinking. “Everyone would be talking about it behind the scenes before he went on the air,” one of the former Fox employees said.
Note too that Fox & Friends Weekend airs at 6:00 in the morning.
Oliver Darcy, in a well-sourced report at Status (paywalled, alas, but with a preview of the article if you sign up for the free version of his newsletter, which I agree is sort of a “Yeah, no thanks” offer):
Patrick Soon-Shiong is tightening his grip over the Los Angeles Times. The MAGA-curious owner, who drew controversy when he blocked the newspaper’s planned endorsement of Kamala Harris, has waded further into its operations since the November election, according to new information I have learned and public remarks the billionaire made Wednesday during a media appearance with right-wing personality Scott Jennings. [...] Several veteran staffers told me that morale has never been lower, with some people even wondering whether the newspaper will be disfigured beyond recognition under this new era of Soon-Shiong’s reign. [...]
One disturbing example came after the newspaper published an opinion piece in November about Elon Musk that Soon-Shiong did not care for, people familiar with the matter told me. The piece, written by Times opinion contributor Virginia Heffernan, carried the headline, “Elon Musk bought himself a starring role in Trump’s second term. What could go wrong?”
While the headline seemed innocuous, Soon-Shiong expressed dismay over it, according to the people familiar with the matter. The headline was allowed to remain unchanged. But, as a result, the people said, a new rule was put into place: Prior to publishing opinion stories, the headlines must be emailed over to Soon-Shiong, where he can then choose to weigh in. While it is normal for newspaper owners to influence the opinion wing of a newspaper, it is highly unusual for an owner to have article headlines sent to them ahead of publication for review.
That also seems like a lot of work for a busy billionaire. Wonder how he might handle that?
Speaking to Jennings as the latter hosted a radio show Wednesday, the billionaire revealed that, behind the scenes, he is working on developing a “bias meter” powered by artificial intelligence that will be placed on both opinion and news stories. Soon-Shiong said that the hope is to roll out the new feature, which will use the technology to seemingly warn readers that his own reporters are biased, as early as next month. [...]
Suffice to say, but when the journalists at the Times heard the “breaking news” that Soon-Shiong delivered to Jennings, they spiraled even further. “People are now deeply fucking concerned,” one staffer bluntly told me Wednesday night.
What could go wrong?
In response, the LAT Guild issued a statement, concluding:
The statements of Dr. Soon-Shiong in the press and on social media reflect his own opinions and do not shape reporting by our member-journalists.
Our members — and all Times staffers — abide by a strict set of ethics guidelines, which call for fairness, precision, transparency, vigilance against bias, and an earnest search to understand all sides of an issue. Those longstanding principles will continue guiding our work.
The Guild has secured strong ethics protections for our members, including the right to withhold one’s byline, and we will firmly guard against any effort to improperly or unfairly alter our reporting.
Stephanie Palazzolo, writing for The Information (paywalled, alas):
Researchers at OpenAI believe that some rival AI developers are training their reasoning models by using OpenAI’s o1 reasoning models to generate training data, according to a person who has spoken to the company’s researchers about it. In short, the rivals can ask the o1 models to solve various problems and then use the models’ chain of thought — the “thought process” the models use to solve those problems — as training data, the person said.
You might be wondering how rival developers can do that. OpenAI has explicitly said it hides its reasoning models’ raw chains of thought due in part to competitive concerns.
But in answering questions, o1 models include a summarized version of the chain of thought to help the customer understand how the models arrived at the answer. Rivals can simply ask another LLM to take that summarized chain of thought and predict what the raw chain of thought might have been, the person who spoke with the researchers said.
And I’m sure these OpenAI researchers are happy to provide this training data to competitors, without having granted permission, in the same way they trained (and continue to train) their own models on publicly available web pages, without having been granted permission. Right?
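For what it’s worth, the technique the report describes is simple enough to sketch. Everything below is hypothetical; the function names are stand-ins rather than any real API. The point is just the shape of the pipeline: ask o1 for answers, then ask a second model to reconstruct the hidden reasoning from the summary o1 does expose.

```swift
// A sketch of the distillation pipeline as described in the report.
// These functions are placeholders for network calls to o1 and to a
// second "expander" model; nothing here is a real OpenAI API.

struct ReasoningExample {
    let problem: String
    let reconstructedChainOfThought: String
    let answer: String
}

// Stand-in for querying o1: returns its final answer plus the
// *summarized* chain of thought that customers are shown (the raw
// chain of thought is hidden).
func askO1(_ problem: String) -> (answer: String, summary: String) {
    return ("final answer", "summarized reasoning steps")
}

// Stand-in for asking another LLM to guess what the hidden raw chain
// of thought might have looked like, given that summary.
func expandSummary(problem: String, summary: String) -> String {
    return "plausible step-by-step reasoning inferred from the summary"
}

// Build training data for a rival reasoning model.
func buildTrainingSet(from problems: [String]) -> [ReasoningExample] {
    problems.map { problem in
        let (answer, summary) = askO1(problem)
        let reconstructed = expandSummary(problem: problem, summary: summary)
        return ReasoningExample(problem: problem,
                                reconstructedChainOfThought: reconstructed,
                                answer: answer)
    }
}
```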
From The Stanford Review editor-in-chief Julia Steinberg’s interview with university president Jonathan Levin:
Stanford Review: What is the most important problem in the world right now?
President Levin: There’s no answer to that question. There are too many important problems to give you a single answer.
Stanford Review: That is an application question that we have to answer to apply here.
Alex Heath, writing at The Verge:
“I’m actually very optimistic this time around,” Bezos said of Trump during a rare public appearance at The New York Times DealBook Summit on Wednesday. “He seems to have a lot of energy around reducing regulation. If I can help him do that, I’m going to help him.”
Trump railed against Bezos and his companies — Amazon, Blue Origin, and The Washington Post — during his 2016 term. Bezos defended himself but it did little to help his reputation with Trump. Now, his companies have a lot at stake in the coming administration, from the FTC’s antitrust lawsuit against Amazon to Blue Origin’s efforts to compete with SpaceX for government contracts.
Onstage at the DealBook Summit on Wednesday, Bezos called Trump “calmer this time” and “more settled.” He said he will try to “talk him out of” the idea that the press, which includes The Washington Post, is an enemy of the people.
“You’ve probably grown in the last eight years,” he said to DealBook’s Andrew Ross Sorkin. “He has, too.”
Next up after Bezos at DealBook Summit was Charlie Brown, who professed optimism regarding his next attempt at kicking a football held by Lucy Van Pelt. What the fuck did they put in the water at this conference?
Or, perhaps, these very smart guys are also craven, and these nonsensical remarks, which are quite obviously contrary to reality, are simply additional exhibits of shameful cowardly compliance.
While writing the previous item regarding the FBI encouraging the use of E2EE text and call protocols, I wound up at the Play Store page for Google Messages. It’s shamefully misleading regarding Google Messages’s support for end-to-end encryption. As I wrote in the previous post, Google Messages does support E2EE, but only over RCS and only if all participants in the chat are using a recent version of Google Messages. But the second screenshot in the Play Store listing flatly declares “Conversations are end-to-end encrypted”, full stop. That is some serious bullshit.
I realize that “Some conversations are end-to-end encrypted” will naturally spur curiosity regarding which conversations are encrypted and which aren’t, but that’s the truth. And users of the app should be aware of that. “RCS conversations with other Google Messages users are encrypted” would work.
Then, in the “report card” section of the listing, it states the following:
Data is encrypted in transit
Your data is transferred over a secure connection
Which, again, is only true sometimes. It’s downright fraudulent to describe Google Messages’s transit security this way. Imagine a typical Android user without technical expertise who takes the advice (now coming from the FBI) to use end-to-end encryption for their messaging. A reasonable person who trusts Google would look at Google’s own description of Google Messages and conclude that if you use Google Messages, all your messages will be secure. That’s false. And depending on who you communicate with — iPhone users, Android users with old devices, Android users who use other text messaging apps — it’s quite likely most of your messages won’t be secure.
Just be honest! The E2EE between Google Messages users using Android phones that support RCS is completely seamless and automatic (I just tried it myself using my Android burner), but E2EE is never available for SMS, and never available if a participant in the chat is using any RCS client (on Android or Apple Messages) other than Google Messages. That’s an essential distinction that should be made clear, not obfuscated.
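To spell out that rule in something like code (purely illustrative, and based on my reading of Google’s own documentation rather than anything Google publishes):

```swift
// Illustrative only: when a Google Messages chat is end-to-end
// encrypted, per the rule described above.
enum Transport { case sms, rcs }

struct Participant {
    let usesRecentGoogleMessages: Bool // on a recent build of the app
    let deviceSupportsRCS: Bool
}

// E2EE applies only to RCS chats in which every participant is using
// a recent version of Google Messages; SMS is never end-to-end encrypted.
func isEndToEndEncrypted(transport: Transport,
                         participants: [Participant]) -> Bool {
    guard transport == .rcs else { return false }
    return participants.allSatisfy {
        $0.usesRecentGoogleMessages && $0.deviceSupportsRCS
    }
}
```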
While I’m at it, it’s also embarrassing that Google Voice has no support for RCS at all. It’s Google’s own app and service, and Google has been the world’s most vocal proponent of RCS messaging.
Lastly, I also think it’s a bad idea that Google Messages colors all RCS message bubbles with the exact same colors (dark blue bubbles with white text, natch). SMS messages, at least on my Pixel 4, are pale blue with black text. Google Messages does put a tiny lock in the timeline to indicate when an RCS chat is secure, and also puts a lock badge on the Send button’s paper airplane icon, so there are visual indications of whether an RCS chat is encrypted. But because the message bubble colors are the same for all RCS chats, it’s subtle — not instantly obvious like it is with Apple Messages, where green means “SMS or RCS, never encrypted” and blue means “iMessage, always encrypted”.
Kevin Collier, reporting for NBC News:
Amid an unprecedented cyberattack on telecommunications companies such as AT&T and Verizon, U.S. officials have recommended that Americans use encrypted messaging apps to ensure their communications stay hidden from foreign hackers.
The hacking campaign, nicknamed Salt Typhoon by Microsoft, is one of the largest intelligence compromises in U.S. history, and it has not yet been fully remediated. Officials on a news call Tuesday refused to set a timetable for declaring the country’s telecommunications systems free of interlopers. Officials had told NBC News that China hacked AT&T, Verizon and Lumen Technologies to spy on customers.
A spokesperson for the Chinese Embassy in Washington did not immediately respond to a request for comment.
In the call Tuesday, two officials — a senior FBI official who asked not to be named and Jeff Greene, executive assistant director for cybersecurity at the Cybersecurity and Infrastructure Security Agency — both recommended using encrypted messaging apps to Americans who want to minimize the chances of China’s intercepting their communications.
“Our suggestion, what we have told folks internally, is not new here: Encryption is your friend, whether it’s on text messaging or if you have the capacity to use encrypted voice communication. Even if the adversary is able to intercept the data, if it is encrypted, it will make it impossible,” Greene said.
It seems kind of new for the FBI to call encryption “our friend”, but now that I think about it, their beef over the years has primarily been about gaining access to locked devices, not eavesdropping on communication protocols. Their advocacy stance on device encryption has not changed — they still want a “back door for good guys” there. Their thinking, I think, is that E2EE communications are a good thing because they protect against remote eavesdropping from foreign adversaries — exactly like this campaign waged by China. The FBI doesn’t need to intercept communications over the wire. When the FBI wants to see someone’s communications, they get a warrant to seize their devices. That’s why the FBI wants device back doors, but is now encouraging the use of protocols that are truly E2EE. But that’s not to say that law enforcement agencies worldwide don’t still fantasize about mandatory “back doors for good guys”.
Here’s a clunker of a paragraph from this NBC News story, though:
Privacy advocates have long advocated using end-to-end encrypted apps. Signal and WhatsApp automatically implement end-to-end encryption in both calls and messages. Google Messages and iMessage also can encrypt calls and texts end to end.
It’s true that both voice and text communications over Signal and WhatsApp are always secured with end-to-end encryption. But Google Messages is an Android app that only handles text messaging via SMS and RCS, not voice. There’s a “Call” button in Google Messages but that just dials the contact using the Phone app — just a plain old-fashioned unencrypted phone call. (There’s a Video Call button in Google Messages, but that button tries to launch Google Meet.) Some text chats in Google Messages are encrypted, but only those using RCS in which all participants are using a recent version of Google Messages. Google Messages does provide visual indicators of the encryption status of a chat. The RCS standard has no encryption; E2EE RCS chats in Google Messages use Google’s proprietary extension and are exclusive to the Google Messages app, so RCS chats between Google Messages and other apps, most conspicuously Apple Messages, are not encrypted.
iMessage is not an app. It is Apple’s proprietary protocol, available within its Messages app. The entire iMessage protocol was built upon end-to-end encryption — all iMessage messages have been E2EE from the start. Apple also offers FaceTime for voice and video calls, and FaceTime calls are always secured by E2EE.
A few nuggets of wisdom from Andy Grove, in an interview with Esquire after he retired as Intel’s CEO, but still served as chairman:
Profits are the lifeblood of enterprise. Don’t let anyone tell you different.
You must understand your mistakes. Study the hell out of them. You’re not going to have the chance of making the same mistake again — you can’t step into the river again at the same place and the same time — but you will have the chance of making a similar mistake.
Status is a very dangerous thing. I’ve met too many people who make it a point of pride that they never take money out of a cash machine, people who are too good to have their own e-mail address, because that’s for everybody else but not them. It’s hard to fight the temptation to set yourself apart from the rest of the world.
Grove, still serving as CEO during Intel’s zenith in 1997, didn’t even have an office. He worked out of an 8x9-foot cubicle.
What you’re seeing today is a very, very rapid evolution of an industry where the milieu is better understood by people who grew up in the same time frame as the industry. A lot of the years that many of us have spent in business before this time are of only limited relevance.
This industry is not like any other. Computers don’t get incrementally more powerful; they get exponentially more powerful.
The Verge’s Sean Hollister penned an excellent high-level summary of Pat Gelsinger’s ignominious ouster from Intel, under the headline “What Happened to Intel?” A wee bit of pussyfooting here, though, caught my eye:
Just how bad was it before Gelsinger took the top job?
Not great! There were bad bets, multiple generations of delayed chips, quality assurance issues, and then Apple decided to abandon Intel in favor of its homegrown Arm-based chips — which turned out to be good, seriously showing up Intel in the laptop performance and battery life realms. We wrote all about it in “The summer Intel fell behind.”
Intel had earlier misses, too: the company long regretted its decision not to put Intel inside the iPhone, and it failed to execute on phone chips for Android handsets as well. It arguably missed the boat on the entire mobile revolution.
There’s no argument about it. Intel completely missed mobile. iPhones have never used Intel processors, and Apple Silicon chips are all fabbed by TSMC. Apple’s chips are the best in the industry, also without argument, and the only mobile chips that can be seen as reasonable competition are from Qualcomm (and maybe Samsung). Intel has never been a player in that game, and it’s a game Intel needed not only to be a player in, but to dominate.
It’s not just that smartphones are now a bigger industry than the PC industry ever was, and that Intel has missed out on becoming a dominant supplier to phone makers. That’s bad, but it’s not the worst of it. It’s that those ARM-based mobile chips — Apple Silicon and Qualcomm’s Snapdragon lineup — got so good that they’re now taking over large swaths of the high end of the PC market. Partly from an obsessive focus on performance-per-watt efficiency, partly from the inherent advantages of ARM’s architecture, partly from engineering talent and strategy, and partly from the profound benefits of economies of scale as the mobile market exploded. Apple, as we all know, moved the entire Mac platform from Intel chips to Apple Silicon starting in 2020. The Mac “only” has 15 percent of the worldwide PC market, but the entirety of the Mac’s market share is at the premium end of the market. Losing the Mac was a huge loss for Intel. And now Qualcomm and Microsoft are pushing Windows laptops to ARM chips too, for the same reasons: not just performance-per-watt, but sheer performance. x86 CPUs are still dominant on gaming PCs, but even there, AMD is considered the cream of the crop.
Of all companies, Intel should have seen the potential for this to happen. Intel did not take “phone chips” seriously, but within a decade, those ostensibly toy “phone chips” were the best CPUs in the world for premium PC laptops, and their efficiency advantages carry over to data centers too. And Apple has shown that they’re even superior for workstation-class desktops. That’s exactly how Intel became Intel back at the outset of the personal computing revolution. PCs were seen as mere toys by the “real” computer makers of the 1970s and early 1980s. IBM was caught so flatfooted that when they saw the need to enter the PC market, they went to Intel for the chips and Microsoft for DOS — decisions that both Intel and Microsoft capitalized upon, resulting in a tag-team hardware/software dominance of the entire computing industry that lasted a full quarter century, while IBM was left sidelined as just another maker of PCs. From Intel’s perspective, the x86 platform went from being a “toy” to being the dominant architecture for everything from cheap laptops all the way up to data-center-class servers.
ARM-based “phone chips” did the same thing to x86 that Intel’s x86 “PC chips” had done, decades earlier, to mainframes. Likewise, Nvidia turned “graphics cards for video game enthusiasts” — also once considered mere toys — into what is now, depending on stock market fluctuations, the most valuable company in the world. They’re neck and neck with the other company that pantsed Intel for silicon design leadership: Apple. Creating “the world’s best chips” remains an incredible, almost unfathomably profitable place to be as a business. Apple and Nvidia can both say that about the very different segments of the market in which their chips dominate. Intel can’t say that today about any of the segments for which it produces chips. TSMC, the company that fabs all chips for Apple Silicon and most of Nvidia’s leading chips, is 9th on the list of companies ranked by market cap, with a spot in the top 10 that Intel used to occupy. Today, Intel is 180th — and on a trajectory to fall out of the top 200.
Intel never should have been blithe about the threat. The company’s longtime CEO and chairman (and employee #3) Andy Grove titled his book Only the Paranoid Survive. The full passage from which he drew the title:
Business success contains the seeds of its own destruction. Success breeds complacency. Complacency breeds failure. Only the paranoid survive.
Grove retired as CEO in 1998 and as chairman in 2005. It’s as though no one at Intel after him listened to a word he said. Grove’s words don’t read merely as advice — they read today as a postmortem synopsis for Intel’s own precipitous decline over the last 20 years. ★
Nilay Patel:
So many of you like The Verge that we’ve actually gotten a shocking number of notes from people asking how they can pay to support our work. It’s no secret that lots of great websites and publications have gone under over the past few years as the open web falls apart, and it’s clear that directly supporting the creators you love is a big part of how everyone gets to stay working on the modern internet.
At the same time, we didn’t want to simply paywall the entire site — it’s a tragedy that traditional journalism is retreating behind paywalls while nonsense spreads across platforms for free. We also think our big, popular homepage is a resource worth investing in. So we’re rethinking The Verge in a freemium model: our homepage, core news posts, Decoder interview transcripts, Quick Posts, Storystreams, and live blogs will remain free. We know so many of you depend on us to curate the news every day, and we’re going to stay focused on making a great homepage that’s worth checking out regularly, whether you pay us or not.
Our original reporting, reviews, and features will be behind a dynamic metered paywall — many of you will never hit the paywall, but if you read us a lot, we’ll ask you to pay.
This sounds like an extremely well-considered balance between keeping much of the site open to all, allowing metered access to a limited number of premium articles free of charge, and creating a new sustainable revenue stream from subscribers. Bravo.
Count me in as a day one subscriber.
Christopher Mims, writing for The Wall Street Journal (News+):
The company’s core business is under siege. People are increasingly getting answers from artificial intelligence. Younger generations are using other platforms to gather information. And the quality of the results delivered by its search engine is deteriorating as the web is flooded with AI-generated content. Taken together, these forces could lead to long-term decline in Google search traffic, and the outsize profits generated from it, which prop up its parent company Alphabet’s money-losing bets on things like its Waymo self-driving unit.
The first danger facing Google is clear and present: When people want to search for information or go shopping on the internet, they are shifting to Google’s competitors, and advertising dollars are following them. In 2025, eMarketer projects, Google’s share of the U.S. search-advertising market will fall below 50% for the first time since the company began tracking it.
The accompanying chart (“Estimated share of U.S. search advertising revenue”) suggests Google’s decline has been Amazon’s gain. Basically, Google may still dominate the market for general web search, but people more and more are searching using apps and services that aren’t (or aren’t only) general web search engines. And the reason why is that Google web search has gotten worse.
Special guest Allen Pike joins the show to talk about the state of generative AI and how Apple Intelligence measures up (so far). Also: some speculation on Apple’s pending acquisition of the ever-difficult-to-pronounce Pixelmator.
Amazon is running a holiday discount on M3 MacBook Airs, but it’s tricky — you need to click around through various color choices and watch the prices and ship dates. My main link on this post goes to the config that looks like their best deal for price-conscious gift buyers: the 13-inch M3 MacBook Air in space gray, with 24 GB RAM and 512 GB of storage for $1,299, a $200 discount from the list price, with delivery in a few days. They’ve also got the same configuration, at the same price, with the same delivery window in silver. Starlight only has “5 remaining in stock” (and that was at 8 just a few minutes ago, so they’ll likely be gone by the time you read this), and midnight is already out of stock.
The 13-inch configuration with 16 GB RAM and 512 GB storage is just $1,099, but delivery dates are in early January. They’ve got the configuration with 16 GB RAM and 256 GB storage for just $899, but only in midnight and starlight, and with delivery windows of “1 to 2 months”.
The best option for 15-inch M3 MacBook Airs is the configuration with 24 GB RAM and 512 GB storage for $1,424 — a $275 discount from the regular price of $1,699. That’s available at that price, with next-week delivery, in all four colors. They’ve also got $200 discounts on various configurations with 16 GB RAM, but delivery on those models is out in January.
Needless to say, all of these links are using my make-me-rich affiliate code. And Amazon still has USB-C AirPods Pro 2 for just $154, almost $100 off the regular price.
Ian King, Liana Baker, and Ryan Gould, reporting for Bloomberg:*
Intel Corp. Chief Executive Officer Pat Gelsinger was forced out after the board lost confidence in his plans to turn around the iconic chipmaker, adding to turmoil at one of the pioneers of the technology industry.
The clash came to a head last week when Gelsinger met with the board about the company’s progress on winning back market share and narrowing the gap with Nvidia Corp., according to people familiar with the matter. He was given the option to retire or be removed, and chose to announce the end of his career at Intel, said the people, who declined to be identified discussing proceedings that were not made public.
Intel Chief Financial Officer David Zinsner and Michelle Johnston Holthaus are serving as interim co-CEOs while the board searches for Gelsinger’s replacement, the company said in a statement. Frank Yeary, independent chair of the board of Intel, will serve as interim executive chair.
See also: Techmeme’s roundup.
* Bloomberg, of course, is the publication that published “The Big Hack” in October 2018 — a sensational story alleging that data centers of Apple, Amazon, and dozens of other companies were compromised by China’s intelligence services. The story presented no confirmable evidence at all, was vehemently denied by all companies involved, has not been confirmed by a single other publication (despite much effort to do so), and has been largely discredited by one of Bloomberg’s own sources. By all appearances “The Big Hack” was complete bullshit. Yet Bloomberg has issued no correction or retraction, and their only ostensibly substantial follow-up contained not one shred of evidence to back up their allegations. Bloomberg seemingly hopes we’ll all just forget about it. I say we do not just forget about it. Everything they publish should be treated with skepticism until they retract “The Big Hack” or provide evidence that any of it was true.
My thanks to Crunchy Bagel — the company of developer Quentin Zervaas — for sponsoring this week’s DF RSS feed to promote Streaks, their excellent app for iPhone and Apple Watch. Streaks is a to-do list that helps you form good habits. The point is to motivate you to tackle the things you want to do: anything from daily exercise goals to learning a new language, taking your vitamins, or quitting a bad habit. Anything. I’ve brushed my teeth daily since I was a child but I’ve never been good about flossing — until, generally, a few days before a scheduled dental cleaning. I’ve been using Streaks lately to groove a daily flossing habit. (I expect a pat on the back the next time I’m at the dentist.)
Streaks first sponsored DF back in 2016 and everything I wrote about it then remains true today. It’s a brilliant design, both visually and conceptually. I’ve tried a few apps like this over the years — including a few new ones in recent years — and what kills most of them is friction. If it takes too many fiddly steps to mark off the things you do, you stop using the app. Streaks makes it incredibly simple and fast to mark things done. For anything activity-related, you don’t have to do anything at all — it just tracks information from HealthKit (with your permission, of course) automatically. And in terms of the visual design, Streaks is both highly distinctive and very iOS-y — it doesn’t look like a stock iOS app, but it very much looks and feels like a good native iOS app. That’s a combination that takes a great eye to pull off. (Unsurprisingly, Streaks won an Apple Design Award a few years ago, and has often been featured by Apple in the App Store.)
iOS has not been standing still over the last 8 years and neither has Zervaas. Streaks supports all the latest stuff you’d hope for in an iOS app, including interactive widgets. Streaks’s interactive widgets reduce even further the friction of marking things done — interactive widgets were practically made for apps like Streaks. Streaks also has a great Apple Watch companion app.
I only accept sponsorships for products or services that I’m proud to support. But Streaks is so good that I want to go out of my way to draw attention to it (again). I’m not praising it with superlatives because it’s my sponsor; I’m doing so because it’s superlatively good. It’s a one-time purchase, and the latest update has added seasonal themes, just in time for Christmas (and your New Year’s resolutions).
If you have any sort of interest in an app to help reinforce daily habits (or an interest in great UI design), go check Streaks out.
If you have young children, be sure to also try Little Streaks. It’s a great way to help kids focus on routines: meal time, bedtime, learning to ride a bike, brushing their teeth (and flossing!) — anything. Little Streaks is free for one routine, or use code “DARING” for 50% off the first year of a subscription for unlimited routines.
Cal Paterson:
Large language models (LLMs) like Chat-GPT and Claude.ai are whizzy and cool. A lot of people think that they are going to be The Future. Maybe they are — but that doesn’t mean that building them is going to be a profitable business.
In the 1960s, airlines were The Future. That is why old films have so many swish shots of airports in them. Airlines though, turned out to be an unavoidably rubbish business. I’ve flown on loads of airlines that have gone bust: Monarch, WOW Air, Thomas Cook, Flybmi, Zoom. And those are all busts from before coronavirus - times change but being an airline is always a bad idea.
That’s odd, because other businesses, even ones which seem really stupid, are much more profitable. Selling fizzy drinks is, surprisingly, an amazing business. Perhaps the best. Coca-Cola’s return on equity has rarely fallen below 30% in any given year. That seems very unfair because being an airline is hard work but making Coke is pretty easy. It’s even more galling because Coca-Cola don’t actually make the Coke themselves - that is outsourced to “bottling companies”. They literally just sell it.
This is such a crackerjack essay. Clear, concise, and uncomplicated. I find it hard to argue with. I’ve repeatedly mentioned an internal paper that leaked out of Google last year, titled “We Have No Moat, and Neither Does OpenAI”. The fact that OpenAI has lobbied for stringent AI regulation around the globe suggests that they fear this too: if there’s no technical or business-model moat to be had, a regulatory moat may be the only kind available.
Paterson, expounding on his comparison to the airline industry, observes that commercial airlines have only two suppliers: Boeing and Airbus. He continues:
LLM makers sometimes imply that their suppliers are cloud companies like Amazon Web Services, Google Cloud, etc. That wouldn’t be so bad because you could shop around and make them compete to cut the huge cost of model training.
Really though, LLM makers have only one true supplier: NVIDIA. NVIDIA make the chips that all models are trained on — regardless of cloud vendor. And that gives NVIDIA colossal, near total pricing power. NVIDIA are more powerful relative to Anthropic or OpenAI than Airbus or Boeing could ever dream of being.
At this moment, there are three companies in the world with market caps in excess of $3 trillion: Apple, Nvidia, and Microsoft. There are only two more with market caps in excess of $2 trillion: Amazon and Google. Engineering, training, and providing LLMs isn’t the business with a moat. The business with a moat is making the cutting-edge computer hardware that trains LLMs, and that belongs to Nvidia.
I have more to say about Paterson’s essay, but I really just want you to read it for now.
Kind of wild that this entire sub-site is still standing on Apple.com, including working video. (Fingers crossed that my linking to it doesn’t bring it to the attention of someone who decides to 404 it.)
From Nathan Edwards’s 6/10 review of the M4 iMac for The Verge:
I also do not love that the stand has no height adjustment, and you can’t swap it for a more ergonomic option without buying an entirely different computer. Apple sells a version of the iMac with a VESA mount, but it doesn’t come with a stand at all, and most height-adjustable VESA mounts are not as pretty as the iMac. The Studio Display has a height-adjustable stand option, so we know Apple can make one it’s willing to put out into the world. It just hasn’t done so here. But whatever. I have hardcover books. It’s fine.
It was Nilay Patel, not Edwards, who reviewed the Studio Display for The Verge, but in that review the $1,600 cost — with a specific callout of the $400 surcharge for the optional adjustable stand — was one of the three bullet items under “The Bad”. So it’s not hard to guess that if the M4 iMac had an optional adjustable stand, it would still be listed as a con, because surely that option, from Apple, would cost at least $300.
(I’ve used a Studio Display with the pricey options for nano-texture and adjustable height ever since it came out, and consider both options well worth the cost.)
But the weird thing about Edwards’s review is that the whole thing is predicated on his not seeing the appeal of an all-in-one computer. I feel the same way, personally. My primary computer is a MacBook Pro that I connect, lid-closed, to the aforementioned-in-parenthetical-aside Studio Display most of the time. If I were to buy a dedicated desktop Mac I’d get either a Mac Mini or Mac Studio and connect that to a Studio Display. But the iMac is obviously intended for people who want an all-in-one.
It makes for a very strange, dare I say pointless, review. It’s like a bicycle review from someone who admits they only ever walk or drive, and who doesn’t see why anyone else doesn’t just walk or drive everywhere too. In theory, someone who doesn’t care for genre X can write a review of something from genre X, and their dislike of the genre might provide a unique perspective. (David Foster Wallace wrote a masterpiece of the genre with the title essay in A Supposedly Fun Thing I’ll Never Do Again, regarding a weeklong Caribbean cruise.) But the review still needs to gauge the product for what it is. Does anyone make a better all-in-one PC than the iMac? If so, who? If not, why is this a 6/10?
Holiday shopping bundle of 13 excellent Mac apps, with two ways to buy: get the whole bundle for $74 (a 76 percent discount from the combined regular prices), or pick and choose individual apps a la carte at 50 percent off.
Included in the promotion is Stairways Software’s astonishingly powerful and useful Keyboard Maestro, which almost never goes on sale. There are many longstanding Mac apps and utilities that I enjoy, appreciate, and recommend. There are very few that I can say I’d feel lost without. Keyboard Maestro is one of those.
Other apps in the Space/Time bundle that I use: TextSniper (instantly OCR any text you see on screen), DaisyDisk (disk space visualizer/cleanup), CleanShot X (advanced screenshot utility), and Bartender (menu bar item manager).
Fun interaction design treatise from George Cave.
Happy Thanksgiving, everyone.
Borderline incredible discount on AirPods Pro 2 at Amazon. This is just short of $100 off the retail list price of $249. (Buy through this link and I’ll get rich on the affiliate commission.)
John Siracusa, in his inimitable style, reviewed Delicious Library 1.0 upon its release, 20 years ago this month:
Part of what makes the Mac community so special is that so many Mac developers have itches — and, more importantly, corresponding talents — that have little or nothing to do with computers. I invite you to look again at some of the screenshots and artwork in this application. Someone loved those graphics. Someone sweated over every pixel of that application window. Someone knows what it means to be a lover of art, music, books, video games. This is in addition to (not instead of) the ability to write great code.
All of these human facilities and experiences have been harnessed to create not just a mere “program”, “application” or (God forbid) “executable”, but a digital love letter to collectors. Delicious Monster, from its products to its web site, exudes a spirit of passion and fun. “I’ve never been happier at work”, Wil Shipley told me in an email. “I think it shows in the finished product.”
I think so too. It may only be version 1.0, but it’s delicious.
Re-reading this review — which I first linked to, with little comment, upon publication — reminded me of several things. First, Siracusa is one of the few writers I’ve ever felt competitive with in this racket. This whole thing is so fucking good, and touches upon so many subtle points that are so hard to convey in words. (In some ways it’s better to read in its original multi-page layout, via Internet Archive, but those archived versions are inexplicably missing some, but not all, of the screenshots, and for a review of an app as visually ambitious as Delicious Library, the screenshots are essential. But the current Ars Technica version of the review, although it has all the inline images, is missing this “larger version” of Delicious Library’s main window. Open the version I’m hosting in a tab for reference. Note too that “larger version” meant something different 20 years ago — it’s only 183 KB, but is the largest image in the review.)
Second, I had forgotten just how ambitious Delicious Library 1.0 was, right out of the gate. I remembered that Delicious Library eventually supported barcode scanning via webcams, but that feature was in fact present in version 1.0. It worked incredibly well. And the feature was so far ahead of its time. In 2004, no Mac had yet shipped with a built-in camera. Instead, we all bought Apple’s standalone $150 iSight camera, which connected via FireWire. (What a gorgeous device.) By the end of his effusive review, Siracusa (unsurprisingly) has a wishlist of additional features, but what was in Delicious Library 1.0 comprised far more than a “minimum viable product”. It exemplified Apple’s — and Steve Jobs’s — own ethos of debuting with a bang. It made you say “Wow!” And then you’d think, “Oh, but it’d be cool if it...” and, it turns out, it did that too.
Delicious indeed.
Wil Shipley, on Mastodon:
Amazon has shut off the feed that allowed Delicious Library to look up items, unfortunately limiting the app to what users already have (or enter manually).
I wasn’t contacted about this.
I’ve pulled it from the Mac App Store and shut down the website so nobody accidentally buys a non-functional app.
The end of an era, but it’s kind of surprising it was still functional until now. (Shipley has been a full-time engineer at Apple for three years now.)
It’s hard to describe just what a sensation Delicious Library was when it debuted, and how influential it was. Delicious Library was simultaneously very useful, in very practical ways, and obsessed with its exuberant UI in ways that served no purpose other than looking cool as shit. It was an app that demanded to be praised just for the way it looked, but also served a purpose that resonated with many users. For about a decade it seemed as though most popular new apps would be designed like Delicious Library. Then Apple dropped iOS 7 in 2013, and now, no apps look like this. Whatever it is that we, as an industry, have lost in the now decade-long trend of iOS 7-style flat design, Delicious Library epitomized it.
They were even clever and innovative in the ways they promoted the app. The first time Delicious Monster sponsored Daring Fireball for a week, their sponsorship message read, in its entirety:
Organize the shit you like.
Get rid of the shit you don’t.
Delicious Library 2.
When they created an iPhone version of Delicious Library, they announced it via this delightfully intricate but decidedly lo-fi stop-motion-animated video.
20 years go by and there’s some inevitable nostalgia looking back at any art form. But man, Delicious Library exemplified an era of indie app development that, sadly, is largely over. And make no bones about it: Delicious Library was a creative work of art.
The Onion Editorial Board:
All great journalists, and even those lesser journalists who don’t work for The Onion, eventually ponder why we do what we do. Is the point of reporting to illuminate the world around us, so that we may make meaning of it? Or is it to cause people in minority groups to question their humanity and persuade others to demonize them? We know where we stand, proudly dreaming of genitals.
Research shows that trans people are over four times more likely than cisgender people to be the victim of a violent crime. We salute our colleagues across the media who are working tirelessly to make that number even higher.
This was published in 2023, but seems particularly apt post-election.
Erin Woo, Sahil Patel, and Amir Efrati, reporting for The Information (paywalled, alas):
OpenAI is preparing to launch a frontal assault on Google. The ChatGPT owner recently considered developing a web browser that it would combine with its chatbot, and it has separately discussed or struck deals to power search features for travel, food, real estate and retail websites, according to people who have seen prototypes or designs of the products. [...]
Making a web browser could help OpenAI have more control over a primary gateway through which people use the web, as well as further boost ChatGPT, which has more than 300 million weekly users just two years after its launch. It isn’t clear how a ChatGPT browser’s features would differ from those of other browsers.
In a signal of its interest in a browser, several months ago OpenAI hired Ben Goodger, a founding member of the Chrome team at Google. Another recent hire is Darin Fisher, who worked with Goodger to develop Chrome.
But OpenAI isn’t remotely close to launching a browser, multiple people said.
Goodger and Fisher’s hirings weren’t secret — both keep up-to-date profiles on LinkedIn — and just because two people have previously created new web browsers (even multiple times) doesn’t mean that their new gig is creating a new web browser. But it sure feels like a good guess.
Fisher most recently was at The Browser Company for two years, working on Arc, an innovative browser that I admire for its originality but which simply did not click for me at all. The Browser Company is in flux, too, working both on Arc 2.0 and an as-yet-unnamed second project that might be a more traditional web browser.
Combine this with regulatory pressure on Apple’s Safari and especially Google’s Chrome, and it’s an exciting time for web browsers. It’s kind of wild how often the browser market gets shaken up. The pattern, repeated several times now, is that just when the market seems settled — like the markets for, say, spreadsheets and word processors — a period of flux sets in and new entrants shake things up. There was a point when it seemed like Internet Explorer would be dominant forever; today it doesn’t even exist. There was a point when Firefox seemed entrenched on Windows; today it’s an afterthought. Today Chrome seems entrenched, as dominant as IE once was. Maybe not?
Mark Gurman, in his weekly Power On column:
The best scenario for Apple in TV hardware would be a cheap stick (perhaps with no physical remote — use your iPhone instead). It’s an idea that Apple marketing executives detest, but it would help the company quickly expand its presence. If consumers want more power and storage, they can opt for the current box.
At the top of the line, Apple could offer something like the new Mac mini, providing the best streaming quality and gaming options. For this exercise, let’s call these three tiers the Apple TV SE, Apple TV and Apple TV Max. It would use the same “good, better, best” strategy employed by the iPhone, Mac, iPad, AirPods, Apple Watch and even the Apple Pencil.
Neither of these suggestions makes any sense. The only interesting thing about either idea is trying to decide which one is worse.
Streaming sticks are crap, and Apple doesn’t make crap. I also think streaming sticks are fast going the way of the dodo — they were a stopgap low-cost solution for when TV sets didn’t have “smart” experiences with built-in integration for major streaming platforms. Those built-in integrations obviate the need for streaming sticks, and Apple TV is now built into TVs from all major brands, including Samsung, Sony, LG, and Vizio. That’s the Apple TV app, not the full Apple TV tvOS platform, but that serves Apple’s needs. I don’t think it’s possible to provide a full-fidelity tvOS experience via a stick-sized computer that draws power from an HDMI port, and it’s certainly not possible to do so by omitting the goddamn remote control. Arguing that Apple needs to or even ought to build a cheap TV stick today is like those dumb columns from 2009 arguing that Apple needed to make a netbook to compete against shitty $300 laptops. Apple TV is to set top boxes as the Mac is to PCs — it’s never going to get a large share of the overall market, but it dominates the high-end of the market catering to people who actually care.
As for Gurman’s high-end hardware idea, a Mac Mini starts at $600. What would be the point of connecting such hardware to your TV? A Mac Mini wouldn’t offer better streaming quality than the existing Apple TV 4K offers. 4K is 4K, and even older Apple TV hardware streams it perfectly. And while in theory an M4-powered Mac-Mini-caliber Apple TV could offer better gaming than the iPhone-13-era A15 Bionic chip in the current Apple TV 4K hardware, there are zero tvOS games today that target hardware like that, and there’d be little reason for game developers to target such an “Apple TV Pro” device because almost no one would buy one. Whatever the reasons are for gaming not being a big deal on tvOS today, the lack of a “pro” $500 or $600 hardware tier is not one of them.
I think Apple should get the entry price down to $99 (currently $129), and sooner or later they need to update the hardware, if only to support Apple Intelligence. (Perhaps to the A18 or A18 Pro next fall — the current A15 Bionic Apple TV 4K models came out one year after the chip debuted in the iPhones 13.) But the hardware story for Apple TV is fine.
Zac Hall, writing at 9to5Mac back in May 2023:
Now that Final Cut Pro and Logic Pro for iPad are official, let’s talk about pricing. These apps coming out on a random day in May is surprising. Subscription pricing? Not so much. Nevertheless, pricing for these long overdue apps is interesting when you consider their Mac counterparts and the Apple One bundle.
First, let’s address the Mac apps.
How would Apple price Final Cut Pro and Logic Pro for Mac if they were released today? In the era of service revenue, Apple would almost certainly charge a subscription fee for access rather than a one-time fee.
Mac users have had years of free updates to Logic and Final Cut Pro after paying once for each app. In fact, Logic Pro X will be a decade old in July, and Final Cut Pro X turns 12 next month. The price of Logic Pro for Mac today ($199.99) is the same as four years of subscribing to Logic Pro for iPad, and Final Cut Pro for Mac ($299.99) will equal six years of paying for the iPad version.
The iPad versions of Final Cut Pro and Logic Pro are both priced the same: $5/month or $50/year. There is no bundle to get both at a discount.
I was a little surprised when Apple announced Final Cut Pro 11 for Mac two weeks ago and didn’t announce a switch to subscription pricing. Instead, it remains a $300 one-time purchase, and for existing users version 11 is a free upgrade. Whether you like it or not, subscription pricing is no longer the future, it’s the present, and it’s the dominant model for professional creative tools today.
Adobe made this switch years ago, with a particular emphasis on the Creative Cloud bundle that includes their entire suite of apps — Photoshop, Lightroom, Illustrator, InDesign, Premiere Pro, Audition, Acrobat Pro, and more. You get access to Adobe’s entire suite for $90/month, or $60/month if you pay annually ($720/year). They currently offer a first-year 50 percent discount if you pay annually. A la carte, subscriptions to each app cost $20–$23/month, so the Creative Cloud bundle is a good deal if you use three of them, and a great deal if you use more than three.
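To spell out the arithmetic behind those comparisons, here’s a quick back-of-the-envelope sketch using the prices quoted above (which will, of course, drift as Apple and Adobe adjust pricing):

    // Back-of-the-envelope math using the prices quoted above; purely
    // illustrative, and stale the moment either company changes pricing.

    // One-time Mac prices vs. the $50/year iPad subscriptions:
    let logicProMac = 199.99, finalCutProMac = 299.99, iPadYearly = 50.0
    print(logicProMac / iPadYearly)    // ~4 years of Logic Pro for iPad
    print(finalCutProMac / iPadYearly) // ~6 years of Final Cut Pro for iPad

    // Adobe Creative Cloud: $60/month on the annual plan vs. $20-$23/month
    // per single app, so the bundle breaks even at three apps.
    let bundleMonthly = 60.0, cheapestSingleApp = 20.0
    print(bundleMonthly / cheapestSingleApp) // 3.0: matches three a la carte apps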
Apple clearly understands the appeal of subscription bundles too, with Apple One. Despite the fact that Apple didn’t switch to subscription pricing for Final Cut Pro 11 for Mac, I still expect them to sooner rather than later, and if they do, I further expect a bundle. Apple is never going to offer a swath of creative tools as broad as Adobe’s, but the biggest missing pieces right now would be alternatives to Photoshop and Lightroom. My gut feeling is that’s why they acquired Pixelmator and Photomator. They could sell a bundle for, just spitballing here, $20/month or $200/year that would include the Mac and iPad versions of Final Cut Pro, Logic Pro, Pixelmator, and possibly Photomator. Maybe throw in some extra iCloud storage.