By John Gruber
Jason Snell:
Castro has been a popular iOS podcast app for many years, but right now things look grim.
The cloud database that backs the service is broken and needs to be replaced. As a result, the app has broken. (You can’t even export subscriptions out of it, because even that function apparently relies on the cloud database.) “The team is in the progress of setting up a database replacement, which might take some time. We aim to have this completed ASAP,” said a post on X from @CastroPodcasts.
What’s worse, according to former Castro team member Mohit Mamoria, “Castro is being shut down over the next two months.”
I always appreciated Castro — it’s a well-designed, well-made app that embraced iOS design idioms. But as a user it just never quite fit my mental model for how a podcast client should work, in the way that Overcast does. I wanted to like Castro more than I actually liked it.
As a publisher, Castro was the 4th or 5th most popular client for The Talk Show for a while, but in recent years has slipped. Right now it’s 10th — and because podcast-client popularity follows a logarithmic curve, 10th place represents a far smaller share than 4th or 5th did. Overcast remains 1st; Apple Podcasts 2nd. The truth is, if not for Overcast, Castro would likely be in that top position, not shutting down. But Overcast does exist, and it’s the app where most people with exquisite taste in UI are listening to podcasts. There aren’t many markets where listeners of The Talk Show are in the core demographic, but iOS podcast apps are one. I can’t say why or precisely when, but somewhere along the line Castro lost its mojo.
I salute everyone who’s worked on it, though, because it really is a splendid app.
Jason Snell, writing at Six Colors:
Last month I wrote about how Apple’s cascade of macOS alerts and warnings ruin the Mac upgrade experience. [...]
This issue was brought home to me last week when I was reviewing the M3 iMac and the M3 MacBook Pro. As a part of reviewing those computers, I used Migration Assistant to move a backup of my Mac Studio to the new systems via a USB drive. Sometimes I try to review a computer with nothing migrated over, but it can be a real slowdown and I didn’t really have any time to spare last week.
Anyway, by migrating, I got to (twice) experience Apple’s ideal process of moving every user from one Mac to the next. You start up your new computer, migrate from a backup of the old computer, and then start using the new one. There’s a lot that’s great about this process, and it’s so much better than what we used to have to do to move files over from one Mac to another.
And yet all of Apple’s security alerts got in the way again and spoiled the whole thing. Here’s a screenshot I took right after my new Mac booted for the first time after migration.
I went through the exact same thing. Except if I had taken a screenshot of all the security-permission alerts I had to go through, there would have been more of them — and Snell’s screenshot looks like a parody. Back in the heyday of the “Get a Mac” TV ad campaign, Apple justifiably lambasted Windows Vista for its security prompts, but that’s exactly the experience you get after running Migration Assistant on a Mac today. It’s terrible.
Don’t get me wrong: Migration Assistant is borderline miraculous. It’s a wonderful tool that seemingly just keeps getting better. But MacOS itself stores too many security/privacy settings in ways that are tied to the device, not your user account. There ought to be some way to OK all these things in one fell swoop.
As Snell says, setting up a new Mac should be a joy, not a chore. Migration Assistant takes care of so much, but these cursed security prompts spoil the experience.
Kyle Melnick, reporting last week for The Washington Post under the headline “A Toddler Was Taken in a Carjacking; VW Wanted $150 for GPS Coordinates, Lawsuit Says”:
Shepherd, who was four months pregnant, tried to fight off the man. But she was thrown to the pavement and run over by her own car as the man drove away with Isaiah in the back seat, authorities said. Shepherd thought she might never see her son again.
After Shepherd frantically called 911, investigators contacted Volkswagen’s Car-Net service, which can track the location of the manufacturer’s vehicles. They hoped to locate Isaiah.
But a customer service representative said that wouldn’t be possible because Shepherd’s subscription to the satellite service had expired, according to a new lawsuit. The employee said he couldn’t help until a $150 payment was made, the complaint said.
This perfectly illustrates the perils of Apple eventually charging for Emergency SOS satellite service. If Apple someday cuts off free service for compatible iPhones, eventually there’s going to be someone who dies because they chose not to pay to continue service. No one wants that.
Apple Newsroom, two weeks ago:
One year ago today, Apple’s groundbreaking safety service Emergency SOS via satellite became available on all iPhone 14 models in the U.S. and Canada. Now also available on the iPhone 15 lineup in 16 countries and regions, this innovative technology — which enables users to text with emergency services while outside of cellular and Wi-Fi coverage — has already made a significant impact, contributing to many lives being saved. Apple today announced it is extending free access to Emergency SOS via satellite for an additional year for existing iPhone 14 users.
My hunch on this is that Apple would like to make this available free of charge in perpetuity, but wasn’t sure how much it would actually get used, and thus how much it would actually cost. If they come right out and say it’s free forever, then it needs to be free forever. It’s safer to just do what they’ve done here: extend the free service one year at a time, and see how it goes as more and more iPhones that support the feature remain in active use.
It’s a wonderful feature — quite literally life-saving in numerous cases — but it’d be hard to sell. It’s like buying insurance. People like paying for stuff they want to use, not for stuff they hope they never need. Obviously, people do buy insurance — Apple itself, of course, sells AppleCare — but how many people would pay extra for Emergency SOS? If Apple can just quietly eat the cost of this service, they should, and I think will.
Andrew Ross Sorkin and Robert D. Hershey Jr., reporting for The New York Times:
Charles T. Munger, who quit a well-established law career to be Warren E. Buffett’s partner and maxim-spouting alter-ego as they transformed a foundering New England textile company into the spectacularly successful investment firm Berkshire Hathaway, died on Tuesday in Santa Barbara, Calif. He was 99.
His death, at a hospital, was announced by Berkshire Hathaway. He had a home in Los Angeles.
Although overshadowed by Mr. Buffett, who relished the spotlight, Mr. Munger, a billionaire in his own right — Forbes listed his fortune as $2.6 billion this year — had far more influence at Berkshire than his title of vice chairman suggested.
Mr. Buffett has described him as the originator of Berkshire Hathaway’s investing approach. “The blueprint he gave me was simple: Forget what you know about buying fair businesses at wonderful prices; instead, buy wonderful businesses at fair prices,” Mr. Buffett once wrote in an annual report. [...]
A $1,000 investment in Berkshire made in 1964 is worth more than $10 million today.
Mr. Munger was often viewed as the moral compass of Berkshire Hathaway, advising Mr. Buffett on personnel issues as well as investments. His hiring policy: “Trust first, ability second.”
A new edition of Munger’s book of aphorisms, Poor Charlie’s Almanack — its title an allusion to Munger’s idol, Benjamin Franklin — is due next week.
AnnaMaria Andriotis, reporting for The Wall Street Journal (News+):
Apple is pulling the plug on its credit-card partnership with Goldman Sachs, the final nail in the coffin of the Wall Street bank’s bid to expand into consumer lending.
The tech giant recently sent a proposal to Goldman to exit from the contract in the next roughly 12-to-15 months, according to people briefed on the matter. The exit would cover their entire consumer partnership, including the credit card the companies launched in 2019 and the savings account rolled out this year.
It couldn’t be learned whether Apple has already lined up a new issuer for the card.
Apple Card is a strange product — everyone I know who has one likes it (including me), but Goldman itself has reported that it has lost $3 billion on the partnership since 2020. The savings accounts are a hit with customers too.
American Express is rumored to be one possible partner, but it would be pretty strange for Apple Cards to transmogrify from MasterCard to Amex cards overnight. There are still a lot of businesses — particularly throughout Europe — that accept MasterCard but not Amex. It’s not just that Apple Card would no longer be accepted at businesses where it previously was; such a switch would also highlight the fact that Apple Card is really just an Apple-branded card issued by a company that isn’t Apple. Apple wants you to think of Apple Card as, well, an Apple credit card.
Ian Hickson, who recently left Google after an 18-year stint:
The lack of trust in management is reflected by management no longer showing trust in the employees either, in the form of inane corporate policies. In 2004, Google’s founders famously told Wall Street “Google is not a conventional company. We do not intend to become one.” but that Google is no more.
Much of these problems with Google today stem from a lack of visionary leadership from Sundar Pichai, and his clear lack of interest in maintaining the cultural norms of early Google. A symptom of this is the spreading contingent of inept middle management. [...]
It’s definitely not too late to heal Google. It would require some shake-up at the top of the company, moving the centre of power from the CFO’s office back to someone with a clear long-term vision for how to use Google’s extensive resources to deliver value to users. I still believe there’s lots of mileage to be had from Google’s mission statement (“to organize the world’s information and make it universally accessible and useful”). Someone who wanted to lead Google into the next twenty years, maximising the good to humanity and disregarding the short-term fluctuations in stock price, could channel the skills and passion of Google into truly great achievements.
I do think the clock is ticking, though. The deterioration of Google’s culture will eventually become irreversible, because the kinds of people whom you need to act as moral compass are the same kinds of people who don’t join an organisation without a moral compass.
This jibes with my perception of Google from the outside. Early Google did two things great: it delivered real value to users, and it maintained a culture with a moral compass that attracted exactly the sort of people Hickson describes. Neither of those things has been true in recent years, and the responsibility clearly falls on Pichai.
Maggie Harrison, writing for Futurism:
The only problem? Outside of Sports Illustrated, Drew Ortiz doesn’t seem to exist. He has no social media presence and no publishing history. And even more strangely, his profile photo on Sports Illustrated is for sale on a website that sells AI-generated headshots, where he’s described as “neutral white young-adult male with short brown hair and blue eyes.”
Ortiz isn’t the only AI-generated author published by Sports Illustrated, according to a person involved with the creation of the content who asked to be kept anonymous to protect them from professional repercussions. “There’s a lot,” they told us of the fake authors. “I was like, what are they? This is ridiculous. This person does not exist.”
“At the bottom [of the page] there would be a photo of a person and some fake description of them like, ‘oh, John lives in Houston, Texas. He loves yard games and hanging out with his dog, Sam.’ Stuff like that,” they continued. “It’s just crazy.”
The AI authors’ writing often sounds like it was written by an alien; one Ortiz article, for instance, warns that volleyball “can be a little tricky to get into, especially without an actual ball to practice with.”
What an incredible fall from grace for what was, for decades, a truly great magazine. I can see how they thought they’d get away with it, though — Sports Illustrated’s human-written articles are now mostly clickbait junk anyway.
Tangentially related to the last item, here’s Eva Rothenberg reporting for CNN:
Since at least 2019, Meta has knowingly refused to shut down the majority of accounts belonging to children under the age of 13 while collecting their personal information without their parents’ consent, a newly unsealed court document from an ongoing federal lawsuit against the social media giant alleges. [...]
According to the 54-count lawsuit, Meta violated a range of state-based consumer protection statutes as well as the Children’s Online Privacy Protection Rule (COPPA), which prohibits companies from collecting the personal information of children under 13 without a parent’s consent. Meta allegedly did not comply with COPPA with respect to both Facebook and Instagram, even though “Meta’s own records reveal that Instagram’s audience composition includes millions of children under the age of 13,” and that “hundreds of thousands of teen users spend more than five hours a day on Instagram,” the court document states.
One Meta product designer wrote in an internal email that the “young ones are the best ones,” adding that “you want to bring people to your service young and early,” according to the lawsuit.
Not a good look.
The unsealed complaint also alleges that Meta knew that its algorithm could steer children toward harmful content, thereby harming their well-being. According to internal company communications cited in the document, employees wrote that they were concerned about “content on IG triggering negative emotions among tweens and impacting their mental well-being (and) our ranking algorithms taking [them] into negative spirals & feedback loops that are hard to exit from.”
On that last point, Jason Kint posted a long thread on Twitter/X highlighting previously redacted details from the lawsuit.
Jeff Horwitz and Katherine Blunt, reporting for The Wall Street Journal:
The Journal sought to determine what Instagram’s Reels algorithm would recommend to test accounts set up to follow only young gymnasts, cheerleaders and other teen and preteen influencers active on the platform. Instagram’s system served jarring doses of salacious content to those test accounts, including risqué footage of children as well as overtly sexual adult videos — and ads for some of the biggest U.S. brands.
The Journal set up the test accounts after observing that the thousands of followers of such young people’s accounts often include large numbers of adult men, and that many of the accounts who followed those children also had demonstrated interest in sex content related to both children and adults. The Journal also tested what the algorithm would recommend after its accounts followed some of those users as well, which produced more-disturbing content interspersed with ads.
In a stream of videos recommended by Instagram, an ad for the dating app Bumble appeared between a video of someone stroking the face of a life-size latex doll and a video of a young girl with a digitally obscured face lifting up her shirt to expose her midriff. In another, a Pizza Hut commercial followed a video of a man lying on a bed with his arm around what the caption said was a 10-year-old girl.
Worse, Meta has known of the Journal’s findings since August and the problem continues:
The Journal informed Meta in August about the results of its testing. In the months since then, tests by both the Journal and the Canadian Centre for Child Protection show that the platform continued to serve up a series of videos featuring young children, adult content and apparent promotions for child sex material hosted elsewhere.
As of mid-November, the center said Instagram is continuing to steadily recommend what the nonprofit described as “adults and children doing sexual posing.”
There’s no plausible scenario where Instagram wants to cater to pedophiles, but it’s seemingly beyond their current moderation capabilities to determine the content of videos at scale. Solving this ought to be their highest priority.
My thanks to Kolide for sponsoring last week at DF. Getting OS updates installed on end user devices should be easy. After all, it’s one of the simplest yet most impactful ways that every employee can practice good security. On top of that, every MDM solution promises that it will automate the process and install updates with no user interaction needed. Yet in the real world, it doesn’t play out like that. Users don’t install updates and IT admins won’t force installs via forced restart.
With Kolide, when a user’s device — be it Mac, Windows, Linux, or mobile — is out of compliance, Kolide reaches out to them with instructions on how to fix it. The user chooses when to restart, but if they don’t fix the problem by a predetermined deadline, they’re unable to authenticate with Okta.
Watch Kolide’s on-demand demo to learn more about how it enforces device compliance for companies with Okta.
Rogue Amoeba:
Transcribe can convert speech from an astonishing 57 languages into text, providing you with a written transcript of any spoken audio. It’s powered by OpenAI’s automatic speech recognition system Whisper, and features two powerful models for fast and accurate transcriptions.
Best of all, unlike traditional transcription services, Transcribe works for free inside of Audio Hijack. There’s absolutely no on-going cost, so you can generate unlimited transcriptions and never again pay a per-minute charge. It’s pretty incredible.
It’s also completely private. When you use Transcribe, everything happens right on your Mac. That means your data is never sent to the cloud, nor shared with anyone else.
This makes for a perfect one-two shot with Retrobatch 2: Audio Hijack is also a node-based media tool (one that predates Retrobatch), and the new Transcribe block likewise puts powerful machine learning tools into an easily accessible form.
This Transcribe feature in Audio Hijack is also an exemplar of the power of Apple silicon — it works on Intel-based Macs too, but it’s just incredibly fast on Apple silicon (I suspect because of the Neural Engine on every M-series chip).
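For a sense of what Whisper does under the hood, here’s a minimal sketch of the same sort of local, on-device transcription, using OpenAI’s open-source whisper Python package. (The package choice and model names are my assumption for illustration; Rogue Amoeba hasn’t said which Whisper implementation or model sizes Transcribe embeds.)

```python
# Minimal local transcription with OpenAI's open-source Whisper.
# Everything runs on-device; no audio leaves your machine.
#   pip install openai-whisper
import whisper

# "base" is one of Whisper's smaller models; larger ones such as
# "medium" trade speed for accuracy, much like Transcribe's two models.
model = whisper.load_model("base")

# Whisper auto-detects the spoken language among the dozens it supports.
result = model.transcribe("interview.m4a")
print(result["language"])  # detected language code, e.g. "en"
print(result["text"])      # the full transcript
```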
Gus Mueller, writing at the Flying Meat blog:
In case you’re not aware, Retrobatch is a node-based batch image processor, which means you can mix, match, and combine different operations together to make the perfect workflow. It’s kind of neat. And version 2 is even neater. [...]
Retrobatch is obviously not Flying Meat’s most important app (Acorn would fill that role), but I really do like working on it and there’s a bunch more ideas that I want to implement. I feel like Retrobatch is an app that the Mac needs, and it makes me incredibly happy to read all the nice letters I get from folks when they figure out how to use it in their daily work.
Five years after Retrobatch 1 shipped, I’m happy to see version 2 out in the world. And I can’t wait to see what folks are going to do with it.
“Node-based batch image processor” means that you design and tweak your own image processing workflows not with code, but through a visual drag-and-drop interface. (But you can use code, via nodes for JavaScript, AppleScript, and shell scripts.) You can program your own highly customized image processing workflows without knowing anything about writing code. It’s useful for creating workflows that work on just one image at a time, but Retrobatch really shines for batch processing.
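To make “node-based batch image processor” concrete, here’s roughly what a simple workflow (read a folder of images, scale each one down, save as JPEG) looks like when expressed as code rather than nodes. This is a hypothetical sketch in Python with the Pillow library, not Retrobatch’s actual internals; each step corresponds to what would be a node in the visual editor:

```python
# A Retrobatch-style workflow expressed as code; each step below
# would be a draggable node in the visual editor.
#   pip install Pillow
from pathlib import Path
from PIL import Image

src = Path("originals")   # the "Read Folder" node
dst = Path("processed")   # the "Write Images" node
dst.mkdir(exist_ok=True)

for path in src.glob("*.png"):
    img = Image.open(path)
    # "Scale" node: halve both dimensions.
    img = img.resize((img.width // 2, img.height // 2))
    # "Convert to JPEG" node (JPEG has no alpha channel, hence RGB).
    img.convert("RGB").save(dst / f"{path.stem}.jpg", quality=85)
```

The appeal of the node-based version is that you can build exactly this sort of pipeline by dragging blocks around, no code required.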
There are a zillion new features in version 2, but the star of the show has to be the new “ML Super Resolution” 4× upscaler: a powerful machine learning model made easily accessible.
I can’t read or play music, and struggle even to clap to a beat, so I would have zero use for this device. But I still want to buy one. Just look at it. Absolutely gorgeous.
Special guest Gabe Rivera, founder of the indispensable news aggregator Techmeme, joins the show to talk about the state of news and social media. Thanksgiving fun for the entire family — turn the volume down on the Packers-Lions game tomorrow and listen to this instead. (Turn the volume back up, of course, for the Commanders-Cowboys game.)
Elizabeth Laraki, in an article-length post on Twitter/X:
15 years ago, I helped design Google Maps. I still use it everyday. Last week, the team dramatically changed the map’s visual design. I don’t love it. It feels colder, less accurate and less human. But more importantly, they missed a key opportunity to simplify and scale. [...]
So much stuff has accumulated on top of the map. Currently there are ~11 different elements obscuring it:
- Search box
- 8 pills overlayed in 4 rows
- A peeking card for “latest in the area”
- A bottom nav bar
This is a very long way of saying that Google Maps’s app design should be like Apple Maps. In fact, Apple Maps has fewer UI elements obscuring the actual map content than she’s proposing for Google Maps.
Nilay Patel and Alex Heath, reporting for The Verge:
Sam Altman will return as CEO of OpenAI, overcoming an attempted boardroom coup that sent the company into chaos over the past several days. Former president Greg Brockman, who quit in protest of Altman’s firing, will return as well.
The company said in a statement late Tuesday that it has an “agreement in principle” for Altman to return alongside a new board composed of Bret Taylor, Larry Summers, and Adam D’Angelo. D’Angelo is a holdover from the previous board that initially fired Altman on Friday. He remains on this new board to give the previous board some representation, we’re told.
People familiar with the negotiations say that the main job of this small initial board is to vet and appoint an expanded board of up to nine people that will reset the governance of OpenAI. Microsoft, which has committed to investing billions in the company, wants to have a seat on that expanded board, as does Altman himself.
The question I’ve focused on from the start of this soap opera is who really controls OpenAI? The board thought it was them. It wasn’t. Matt Levine had the funniest-because-it’s-true take in his Money Stuff column — I don’t want to spoil it, just go there and look at his “slightly annotated” version of OpenAI’s diagram of their corporate structure.
See also: The Wall Street Journal’s compelling story of the drama behind the scenes (News+ link).
More information on the aforelinked secret program that provides U.S. law enforcement with trillions of phone call records, including location data, from the EFF:
“Hemisphere” came to light amidst the public uproar over revelations that the NSA had been collecting phone records on millions of innocent people. However, Hemisphere wasn’t a program revealed by Edward Snowden’s leaks, but rather its exposure was pure serendipity: a citizen activist in Seattle discovered the program when shocking presentations outlining the program were provided to him in response to regular old public records requests.
This slide deck hosted by the EFF is one of those presentations, and worth your attention. The system’s capabilities are terrifying. From page 9 of that deck, highlighting Hemisphere’s “Special Features”:
Dropped Phones — Hemisphere uses special software that analyzes the calling pattern of a previous target phone to find the new number. Hemisphere has been averaging above a 90% success rate when searching for dropped phones.
Additional Phones — Hemisphere utilizes a similar process to determine additional cell phones the target is using that are unknown to law enforcement.
So if a target throws away their phone, switches to a new burner phone, but continues calling the same people, Hemisphere claims a 90 percent success rate identifying that new phone.
Advanced Results — Hemisphere is able to provide two levels of call detail records for one target number by examining the direct contacts for the original target, and identifying possibly significant numbers that might return useful CDRs.
So the system analyzes not just the phone records of the target, but the records of every single number the target calls.
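“Chain analysis” of this sort is, at bottom, a graph traversal over call detail records. Here’s a purely illustrative sketch of a two-hop expansion; the toy data, structure, and function names are my invention, not anything from the Hemisphere documents:

```python
# Purely illustrative: two-hop "chain analysis" over call detail records.
from collections import defaultdict

# Each record is (caller, callee); contact is treated as symmetric.
calls = [
    ("555-0001", "555-0002"),
    ("555-0002", "555-0003"),
    ("555-0002", "555-0004"),
]

contacts = defaultdict(set)
for caller, callee in calls:
    contacts[caller].add(callee)
    contacts[callee].add(caller)

def chain_analysis(target, hops=2):
    """Return every number within `hops` calls of the target."""
    frontier, seen = {target}, {target}
    for _ in range(hops):
        frontier = {n for num in frontier for n in contacts[num]} - seen
        seen |= frontier
    return seen - {target}

# A single query on one target sweeps in the target's contacts and
# their contacts' contacts, none of whom need be criminal suspects.
print(sorted(chain_analysis("555-0001")))
# ['555-0002', '555-0003', '555-0004']
```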
Page 20 of the deck is highly redacted:
Hemisphere can capture data regarding local calls, long distance calls, international calls, cellular calls [???]
Hemisphere does NOT capture █████████████████████████ subscriber information [???]
Highlights of any basic request include: █████████████████████████ █████████████████████████████████ temporary roaming and location data, and traffic associated with international numbers
I’m using “[???]” to denote spots where I suspect information has been redacted, and “█” to indicate obvious redactions. I sure would love to know what’s redacted there. Again, my mind runs to text messages.
Dell Cameron and Dhruv Mehrotra, reporting for Wired:
A little-known surveillance program tracks more than a trillion domestic phone records within the United States each year, according to a letter Wired obtained that was sent by US senator Ron Wyden to the Department of Justice (DOJ) on Sunday, challenging the program’s legality.
According to the letter, a surveillance program now known as Data Analytical Services (DAS) has for more than a decade allowed federal, state, and local law enforcement agencies to mine the details of Americans’ calls, analyzing the phone records of countless people who are not suspected of any crime, including victims. Using a technique known as chain analysis, the program targets not only those in direct phone contact with a criminal suspect but anyone with whom those individuals have been in contact as well.
The DAS program, formerly known as Hemisphere, is run in coordination with the telecom giant AT&T, which captures and conducts analysis of US call records for law enforcement agencies, from local police and sheriffs’ departments to US customs offices and postal inspectors across the country, according to a White House memo reviewed by Wired. Records show that the White House has, for the past decade, provided more than $6 million to the program, which allows the targeting of the records of any calls that use AT&T’s infrastructure — a maze of routers and switches that crisscross the United States.
In a letter to US attorney general Merrick Garland on Sunday, Wyden wrote that he had “serious concerns about the legality” of the DAS program, adding that “troubling information” he’d received “would justifiably outrage many Americans and other members of Congress.” That information, which Wyden says the DOJ confidentially provided to him, is considered “sensitive but unclassified” by the US government, meaning that while it poses no risk to national security, federal officials, like Wyden, are forbidden from disclosing it to the public, according to the senator’s letter.
Ron Wyden and his office are indispensable on matters related to government surveillance. A few non-obvious aspects worth considering regarding the DAS/Hemisphere program:
The information collected by DAS includes location data.
This is not just about AT&T wireless customers and their phone calls. This is related to the entire U.S. phone system infrastructure — the old Ma Bell. Landline calls and calls from Verizon and T-Mobile cellular customers get routed through this AT&T system, and are thus surveilled by this same system. You can use over-the-top services like iMessage, FaceTime, WhatsApp, or Signal to avoid DAS, but if you place calls using the traditional phone system, you could be impacted even if you’re not an AT&T customer — and you won’t ever know, because you have no idea how your phone calls are routed.
It is completely unclear to me whether DAS/Hemisphere collects text messages — SMS, MMS, RCS — in addition to voice calls. I’ve spent my afternoon trying to find out, and the only answer I’ve gotten is that it’s unclear. I hope text messages are not included, but until we get a definitive answer, it’s only safe to assume that they are. (If anyone reading this knows whether DAS includes text message records, please let me know.)
Since last I wrote about the ongoing leadership battle at OpenAI:
OpenAI named a new interim CEO, Twitch co-founder Emmett Shear. (Shear is an AI worrier who has advocated drastically “slowing down”, writing “If we’re at a speed of 10 right now, a pause is reducing to 0. I think we should aim for a 1-2 instead.”) OpenAI CTO Mira Murati was CEO for about two days.
Satya Nadella announced very late Sunday night, “And we’re extremely excited to share the news that Sam Altman and Greg Brockman, together with colleagues, will be joining Microsoft to lead a new advanced AI research team. We look forward to moving quickly to provide them with the resources needed for their success.”
About 700 OpenAI employees, out of a total of 770, signed an open letter demanding the OpenAI board resign, and threatening to quit to join Altman at Microsoft if they don’t. Among the signees: Mira Murati (which might explain why she’s no longer interim CEO) and chief scientist and board member Ilya Sutskever.
Sutskever posted on Twitter/X: “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”
Alex Heath and Nilay Patel report for The Verge that Altman and Brockman might still return to OpenAI.
Nadella appeared on CNBC and admitted that Altman and Brockman were not officially signed as Microsoft employees yet, and when asked who would be OpenAI’s CEO tomorrow, laughed, because he didn’t know.
See also: Ben Thompson’s crackerjack take on the saga at Stratechery. Long story short: OpenAI’s company structure was a ticking time bomb.
My thanks to Vanta for sponsoring last week at DF. Vanta lets you shortcut compliance — without shortchanging security.
Vanta brings GRC and security efforts together. Integrate information from multiple systems and reduce risks to your business, all without the need for additional staffing. And because Vanta automates up to 90 percent of the work for SOC 2, ISO 27001, and more, you’ll be able to focus on strategy and security, not maintaining compliance.
From the most in-demand frameworks to third-party risk management and security questionnaires, Vanta gives SaaS businesses of all sizes one place to manage risk and prove security in real time.
Try Vanta free for 7 days. No costs or obligations.
Kevin Roose, reporting for The New York Times:
An all-hands meeting for OpenAI employees on Friday afternoon didn’t reveal much more. Ilya Sutskever, the company’s chief scientist and a member of its board, defended the ouster, according to a person briefed on his remarks. He dismissed employees’ suggestions that pushing Mr. Altman out amounted to a “hostile takeover” and claimed it was necessary to protect OpenAI’s mission of making artificial intelligence beneficial to humanity, the person said.
Mr. Altman appears to have been blindsided, too. He recorded an interview for the podcast I co-host, “Hard Fork,” on Wednesday, two days before his firing. During our chat, he betrayed no hint that anything was amiss, and he talked at length about the success of ChatGPT, his plans for OpenAI and his views on A.I.’s future.
Mr. Altman stayed mum about the precise circumstances of his departure on Friday. But Greg Brockman — OpenAI’s co-founder and president, who quit on Friday in solidarity with Mr. Altman — released a statement saying that both of them were “shocked and saddened by what the board did today.” Mr. Altman was asked to join a video meeting with the board at noon on Friday and was immediately fired, Mr. Brockman said.
Kara Swisher was all over the story last night, writing on Twitter/X:
Sources tell me that the profit direction of the company under Altman and the speed of development, which could be seen as too risky, and the nonprofit side dedicated to more safety and caution were at odds. One person on the Sam side called it a “coup,” while another said it was the right move. [...]
More: The board members who voted against Altman felt he was manipulative and headstrong and wanted to do what he wanted to do. That sounds like a typical SV CEO to me, but this might not be a typical SV company. They certainly have a lot of explaining to do.
According to Brockman — who until he quit in protest of Altman’s firing was chairman of the OpenAI board — he didn’t find out until just 5 minutes before Altman was sacked. I’ve never once heard of a corporate board firing the company’s CEO behind the back of the chairman of the board.
It really does look more and more like a deep philosophical fissure inside OpenAI, between those led by Sutskever (and, obviously, a majority of the board) advocating a cautious slow and genuinely non-profit-driven approach, and Altman/Brockman’s “let’s move fast, change the world, and make a lot of money” side. Sutskever and the OpenAI board seemingly see Altman/Brockman as reckless swashbucklers; Altman and Brockman, I suspect, see Sutskever and his side as a bunch of ninnies.
A simple way to look at it is to read OpenAI’s charter, “the principles we use to execute on OpenAI’s mission”. It’s a mere 423 words, and very plainly written. It doesn’t sound anything at all like the company Altman has been running. The board, it appears, believes in the charter. How in the world it took them until now to realize Altman was leading OpenAI in directions completely contrary to their charter is beyond me.
It’s like the police chief in Casablanca being “shocked — shocked!” to find out that gambling was taking place in a casino where he played. ★
Monica Chin, reporting for The Verge last month, “Qualcomm Claims Its Snapdragon X Elite Processor Will Beat Apple, Intel, and AMD”:
Qualcomm has announced its new Snapdragon X Elite platform, which looks to be its most powerful computing processor to date. The chips (including the new Qualcomm Oryon, announced today) are built on a 4nm process and include 136GB/s of memory bandwidth. PCs are expected to ship in mid-2024. [...]
Oh, Qualcomm also claims that its chip will deliver “50% faster peak multi-thread performance” than Apple’s M2 chip. This is just a funny claim; the X Elite has 50 percent more cores than the M2 and sucks down much more power, so of course it is going to do better on Geekbench at “peak multi-thread performance.” That’s like a professional sprinter bragging about winning the 100-meter dash against a bunch of marathon champions.
This news is so old that Chin is no longer on the staff at The Verge (which I think explains why she didn’t write either of their reviews for the new M3 MacBook Pros), but I’m cleaning up old tabs and wanted to comment on this.
It’s nonsense. Chips that aren’t slated to appear in any actual laptops until “mid-2024” are being compared to the M2, which Apple debuted with the MacBook Air in June 2022. So even if Qualcomm’s performance claims are true and PCs based on their chips ship on schedule, they’re comparing against a chip that Apple debuted two entire years earlier.
Plus they’re only comparing multi-core performance against the base M2. And they’re not really comparing multi-core performance overall but “peak” performance, however it is they define that. And the fact that they only mention multi-core performance strongly suggests that they’re slower than the M2 at single-core performance, which for most consumer/prosumer use cases is more important.
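To put rough numbers on that core-count point (12 Oryon cores in the X Elite versus 8 CPU cores in the base M2, per the two companies’ published specs):

$$\frac{12\ \text{cores}}{8\ \text{cores}} = 1.5$$

So mere per-core parity, at a much higher power draw, would be enough to produce a “50 percent faster” peak multi-thread score; no per-core advantage required.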
And: No one in the PC world seems to care about ARM chips, at least for laptops. Microsoft made a go of it with their Surface line and largely gave up. My understanding is that fewer than 1 percent of PC sales today are ARM-based machines. If Microsoft wasn’t willing to optimize Windows to make it ARM-first, or even treat ARM as an equal to x86, when they themselves were trying to make ARM-based Windows laptops a thing, why would they do it now?
If Mac hardware and MacOS were made by separate companies, and the MacOS software company licensed their OS to other OEMs, I really don’t think Apple silicon hardware would have happened. The seemingly too-good-to-be-true performance of Apple silicon Macs is the result of the silicon being designed for the software and the software being optimized and at very low levels designed for the silicon. Qualcomm isn’t going to get that from Microsoft with Windows.
Qualcomm’s X Elite platform may well beat Intel and AMD, but I’m not sure that will matter in the PC world unless Microsoft truly goes all-in on ARM with Windows. Which I don’t see happening. But the idea that they’re even vaguely catching up to Apple silicon is laughable, and it’s frustrating that so much of the tech press took anything Qualcomm claimed about relative performance against Apple silicon seriously.
We know for a fact that their Snapdragon chips for phones have always lagged years behind Apple’s A-series chips in both sheer performance and performance-per-watt, with no sign that they’re catching up. So how in the world would their ARM chips for PCs beat Apple’s M-series chips?
And, yes, I predicted this back in November 2021, when Qualcomm claimed they’d be shipping “M-series competitive” chips for PCs by 2023. Qualcomm claimed to still be on track to ship in 2023 just one year ago, so I wouldn’t hold my breath for “mid-2024” either. ★
Ina Fried, reporting for Axios:
Apple is pausing all advertising on X, the Elon Musk-owned social network, sources tell Axios. The move follows Musk’s endorsement of antisemitic conspiracy theories as well as Apple ads reportedly being placed alongside far-right content. Apple has been a major advertiser on the social media site and its pause follows a similar move by IBM.
Musk faced backlash for endorsing an antisemitic post Wednesday, as 164 Jewish rabbis and activists upped their call to Apple, Google, Amazon and Disney to stop advertising on X, and for Apple and Google to remove it from their platforms.
The left-leaning nonprofit Media Matters for America published a report Thursday that highlighted Apple, IBM, Amazon and Oracle as among those whose ads were shown next to far-right posts.
It’s fair, in some sense, to describe Media Matters for America as “left-leaning”, but they have a decades-long reputation for accuracy. They don’t publish hit pieces that are later proven to have been manipulated. There’s no reason to doubt their report or the screenshots it contained — especially if, like me, you still do check in on Twitter. This is what it’s like over there now.
Update: Disney and Warner Bros Discovery have suspended advertising on Twitter/X too.
OpenAI press release:
The board of directors of OpenAI, Inc., the 501(c)(3) that acts as the overall governing body for all OpenAI activities, today announced that Sam Altman will depart as CEO and leave the board of directors. Mira Murati, the company’s chief technology officer, will serve as interim CEO, effective immediately. [...]
Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
By corporate board standards, that statement is absolutely scathing. There’s widespread speculation that some sort of scandal must be at the heart of this, but no word yet on what. Altman, meanwhile, posted on Twitter/X:
i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people.
will have more to say about what’s next later.
🫡
That all-lowercase style is consistent for Altman’s personal tweets, but man does it look extra fucking silly in a post like this one.
Just last week Altman hosted OpenAI’s first DevDay keynote (where he did great), and, as Cade Metz reports for The New York Times, as late as yesterday Altman was still publicly representing OpenAI:
On Thursday evening, Mr. Altman appeared at an event in Oakland, Calif., where he discussed the future of art and artists now that artificial intelligence can generate images, videos, sounds and other forms of art on its own. Giving no indication that he was leaving OpenAI, he repeatedly said he and the company would continue to work alongside artists and help to ensure their future would be bright.
Earlier in the day, he appeared at the Asia-Pacific Economic Cooperation CEO Summit in San Francisco with Laurene Powell Jobs, who is the founder and president of the Emerson Collective, and executives from Meta and Google.
(Altman also just publicly dunked on Elon Musk. I’m sure Musk will just let it slide.)
Lastly, Altman, despite being credited as a co-founder, has no equity in OpenAI. Meta, for example, can’t fire Zuckerberg — same deal for Google with Larry Page and Sergey Brin — but OpenAI could sack Altman with a board vote because he owns none of it, much less a controlling interest.
Lance Ulanoff, reporting for TechRadar:
RCS or Rich Communication Services, a communications standard developed by the GSM Association and adopted by much of the Android ecosystem, is designed to universally elevate messaging communication across mobile devices. Even though Apple has been working with the group, it has until this moment steadfastly refused to add RCS support to iPhones. Now, however, Apple is changing its tune.
“Later next year, we will be adding support for RCS Universal Profile, the standard as currently published by the GSM Association. We believe the RCS Universal Profile will offer a better interoperability experience when compared to SMS or MMS. This will work alongside iMessage, which will continue to be the best and most secure messaging experience for Apple users,” said an Apple spokesperson. [...]
When RCS does arrive on your best iPhone, though, it means the end of the “green bubble shame” for your best Android phone-owning friends, family, and coworkers. They’ll be able to send and receive high-resolution photos and videos from their phones to your iPhone. Group messaging could become platform agnostic. And they’ll be able to share their location with you through RCS-supported messaging.
There is, naturally, a wrinkle here. The RCS standard still doesn’t support end-to-end encryption. Apple, which has offered encrypted messaging for over a decade, is kind of a stickler about security. Apple says it won’t be supporting any proprietary extensions that seek to add encryption on top of RCS and hopes, instead, to work with the GSM Association to add encryption to the standard.
Color me surprised by Apple’s change of heart here. Also color me utterly unsurprised that Apple has no intention to support Google’s proprietary extensions to RCS that allow for E2EE. It’s a disgrace, in my opinion, that E2EE wasn’t a foundational part of the RCS spec from the start, but if Apple is going to support RCS, they should support RCS by-the-spec, not Google’s proprietary version.
I suspect that in practical terms, this might have almost no discernible effect on the user experience. RCS messages are not going to be blue — they’re either going to stay green, just like SMS/MMS messages, or perhaps Apple will add a new color (purple?) for RCS. I suspect RCS messages will just remain green though, and it’ll be treated as what it is: the next generation of phone-carrier-based messaging. I think any initial coverage today framing this as the end of the green/blue disparity is totally wrong. It’s just about making green messages higher quality and more reliable. Update: I confirmed with Apple today that RCS messages will be green, just like SMS and MMS messages.
The best part of the experience, for users, is that RCS supports higher-resolution photos and videos. The worst part is that it’s not encrypted, and Apple caving and deciding to support it will expand, rather than contract, the amount of messaging that is not E2EE. And what’s the plan for when (if?) E2EE gets added to the official RCS spec? Will Apple then drop support for non-encrypted RCS messages? Or add some sort of indicator — a badge or different message color — to indicate which messages are truly encrypted and which are not?
There were also reports from just last year that RCS was being abused to send large amounts of spam in India — see reports here from The Verge and 9to5Google.
Jason Kottke:
I just found out today that they made a movie version of Ian Frazier’s classic 1990 New Yorker piece Coyote V. Acme, in which Wile E. Coyote files a product liability lawsuit against the Acme Company. [...]
My excitement was tempered almost immediately by hearing that Warner Bros. has shelved the completed film (starring John Cena & Will Forte and produced by James Gunn) in order to take a $30 million tax write-off.
Walt Disney once said (and the Disney company still oft repeats): “We don’t make movies to make money, we make money to make more movies.”
Warner Bros. Discovery CEO David Zaslav is a vulture and a disgrace. His top priority should be putting good movies out into the world.
Mike Masnick, writing at TechDirt:
And that raises a real question. How does antitrust handle a situation where the company that is so dominant is in that position because it legitimately offers a better product by a wide margin?
I would love to see more real competition in the search space. Bing and DuckDuckGo continue to just not cut it. I’m intrigued by startups like Kagi (which I’ve been using, and which actually seems pretty good), but it’s not clear how antitrust helps companies like that get a wider audience.
I’m increasingly coming to the belief that we’ve spent way too many years equating antitrust with increased competition, when it’s one of the least effective mechanisms for enabling greater competition. The situation with Mozilla and Firefox seems to just put an exclamation point on that. If the DOJ wins this lawsuit, what will it actually do to help get more competition on the market? More competition that is actually good and that people want?
I think it’s pretty clear that Apple and Google’s TAC deal is the result of fair competition, not an obstacle to it. The iPhone is by far the best mobile platform in the world, and Google search the best search engine (or at least the best free one — Kagi is so good I’d say it gives Google a run for its money straight up on results quality). Apple is beset on all sides by competitors to the iPhone — including Google itself! Google has always had competition in search. I for one really do think Google has let its search results quality decline, but even so, they’re still undeniably the best among the mainstream search engines.
So of course there’s some sort of lucrative — and mutually beneficial — deal between Apple and Google to keep Google the default for search in Safari.
My thanks to Tailscale for sponsoring last week at DF. Tailscale offers remarkably good ways to manage SSH connections. No more bastions. No more juggling keys. It’s SSH that just works.
Tailscale is completely free of charge to try, and remarkably affordable for use on teams both small and large. Simplify your Kubernetes networking story with their new operator. Try Tailscale for Kubernetes now.
Leah Nylen, reporting for Bloomberg:*
Google pays Apple Inc. 36% of the revenue it earns from search advertising made through the Safari browser, the main economics expert for the Alphabet Inc. unit said Monday.
Kevin Murphy, a University of Chicago professor, disclosed the number during his testimony in Google’s defense at the Justice Department’s antitrust trial in Washington. John Schmidtlein, Google’s main litigator, visibly cringed when Murphy said the number, which was supposed to remain confidential. Both Google and Apple had objected to revealing details publicly about their agreement.
All those developers complaining about Apple’s 70/30 split for App Store revenue should take note: even Google only gets a 64/36 split for search.
* You know.
Where to start with this glorious, decade-in-the-making post from Cabel Sasser? So many old gadgets that still look awesome today. So many others that now look utterly ridiculous. But I think my favorite part is Sasser’s appreciation for Drew Kaplan’s writing style. The catalogs looked utterly professional but the writing was so personal. It was like blogging, but in print.
Yesterday Apple released developer beta 2 of iOS 17.2, the first version of iOS to include support for capturing spatial video with iPhone 15 Pro models. Today came the public beta, enabling the same feature. Apple invited me to New York yesterday, not merely to preview capturing spatial video using an iPhone, but to experience watching those spatial videos using a Vision Pro.
The experience was, like my first Vision Pro demo back at WWDC in June, astonishing.
Shooting spatial video on an iPhone 15 Pro is easy. The feature is — for now at least — disabled by default, and can be enabled in Settings → Camera → Formats. The option is labeled “Spatial Video for Apple Vision Pro”, which is apt, because (again, for now at least) spatial video only looks different from non-spatial video when playing it back on Vision Pro. When viewed on any other device it just looks like a regular flat video.
Once enabled, you can toggle spatial video capture in the Camera app whenever you’re in the regular Video mode. It’s very much akin to the toggle for Live Photos when taking still photos, or the Action mode toggle for video — not a separate mode in the main horizontal mode switcher in Camera, but a toggle button in the main Video mode.
When capturing spatial video, you have no choice regarding resolution, frame rate, or file format. All spatial video is captured at 1080p, 30 fps, in the HEVC file format.1 You also need to hold the phone horizontally, to put the two capturing lenses on the same plane. The iPhone uses the main (1×) and ultra wide (0.5×) lenses for capture when shooting spatial video, and in fact, Apple changed the arrangement of the three lenses on the iPhone 15 Pro in part to support this feature. (On previous iPhone Pro models, when held horizontally, the ultra wide camera was on the bottom, and the main and telephoto lenses were next to each other on the top.)
I believe resolution is limited to 1080p because to get an image from the ultra wide 0.5× camera with a field of view equivalent to the main 1× camera’s, it needs to crop the ultra wide image significantly. The ultra wide camera has a mere 12 MP sensor, so there just aren’t enough pixels to crop a 1×-equivalent field of view from the center of the sensor and get a 4K image.
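The back-of-the-envelope arithmetic supports this, treating the 0.5×-to-1× crop as a full 2× linear factor (an approximation on my part; the exact crop depends on the two cameras’ fields of view). Cropping to half the width and half the height keeps only a quarter of the ultra wide sensor’s pixels:

$$\frac{12\ \text{MP}}{2 \times 2} = 3\ \text{MP}, \qquad 1920 \times 1080 \approx 2.1\ \text{MP}, \qquad 3840 \times 2160 \approx 8.3\ \text{MP}$$

Three megapixels is comfortably more than 1080p requires, but nowhere near the roughly 8.3 MP that 4K demands.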
There are two downsides to shooting spatial video with your iPhone. First, the aforementioned 1080p resolution and 30 fps frame rate. I’ve been shooting 4K video by default for years, because why not? I wish we could capture spatial video at 4K, but alas, not yet. The second downside to shooting spatial video is that it effectively doubles the file size compared to non-spatial 1080p, for the obvious reason that each spatial video contains two 1080p video streams. That file-size doubling is a small price to pay — the videos are still smaller than non-spatial 4K 30 fps video.2
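In raw pixels per frame, the doubling still comes out well under 4K (actual HEVC sizes vary with content and encoder settings, so treat this as a rough guide):

$$2 \times (1920 \times 1080) \approx 4.1\ \text{MP} \quad \text{versus} \quad 3840 \times 2160 \approx 8.3\ \text{MP}$$

Two 1080p streams carry roughly half the pixel data of a single 4K stream, which squares with spatial captures being about double plain 1080p yet smaller than 4K.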
Really, it’s just no big deal to capture spatial video on your iPhone. If the toggle button is off, you capture regular video with all the regular options for resolution (720p/1080p/4K) and frame rates (24/30/60). If the toggle for spatial video is on — and when on it’s yellow, impossible to miss — you lose those choices but it just looks like capturing a regular video. And when you play it back, or share it with others who are viewing it on regular devices like their phones or computers, it just looks like a regular flat video.
If you own an iPhone 15 Pro, there’s no good reason not to start capturing spatial videos this year — like, say, this holiday season — to record any sort of moments that feel like something you might want to experience as “memories” with a Vision headset in the future, even if you don’t plan to buy the first-generation Vision Pro next year.
Before my demo, I provided Apple with my eyeglasses prescription, and the Vision Pro headset I used had appropriate corrective lenses in place. As with my demo back in June, everything I saw through the headset looked incredibly sharp.
Apple has improved and streamlined the onboarding/calibration process significantly since June. There are a few steps where you’re presented with a series of dots in a big circle floating in front of you, like the hour indexes on a clock. As you look at each dot, it lights up a bit, and you do the finger tap gesture. It’s the Vision Pro’s way of calibrating that what it thinks you’re looking at is what you actually are looking at. Once that calibration step was over — and it took just a minute or two — I was in, ready to go on the home screen of VisionOS. (And the precision of this calibration is amazing — UI elements can be placed relatively close to each other and it knows exactly which one you’re looking at when you tap. iOS UI design needs to be much more forgiving of our relatively fat fingertips than VisionOS UI design needs to be about the precision of our gaze.)
My demo yesterday was expressly limited to photography in general, and spatial video in particular, and so, per Apple’s request, it was confined to the Photos app in VisionOS. It was tempting, at times, to see where else I could go and what else I could do. But there was so much to see and do in Photos alone that my demo — about 30 minutes in total wearing Vision Pro — raced by.
Prior to separating us into smaller rooms for our time using Vision Pro, I was paired with Joanna Stern from The Wall Street Journal for a briefing on the details of spatial video capturing using the iPhones 15 Pro. We were each provided with a demo iPhone, and were allowed to capture our own spatial videos. We were in a spacious modern high-ceiling’d kitchen, bathed in natural light from large windows. A chef was busy preparing forms of sushi, and served as a model for us to shoot. Joanna and I also, of course, shot footage of each other, while we shot each other. It was very meta. The footage we captured ourselves was then preloaded onto the respective Vision Pro headsets we used for our demos.
We were not permitted to capture spatial video on Vision Pro.3 However, our demo units had one video in the Photos library that was captured on Vision Pro — a video I had experienced before, back in June, of a group of twenty-somethings sitting around a fire pit at night, having fun in a chill atmosphere. There were also several other shot-by-Apple spatial videos which were captured using an iPhone 15 Pro.
One obvious question: How different do spatial videos captured using iPhone 15 Pro look from those captured using a Vision Pro itself? Given that Apple provided only one example spatial video captured on Vision Pro, I don’t feel like I can fully answer that based on my experience yesterday. It did not seem like the differences were dramatic or significant. The spatial videos shot using iPhone 15 Pro that I experienced, including those I captured myself, seemed every bit as remarkable as the one captured using Vision Pro.
Apple won’t come right out and say it but I do get the feeling that all things considered, spatial video captured using Vision Pro will be “better”. The iPhone might win out on image quality, given the fact that the 1× main camera on the iPhone 15 Pro is the single best camera system Apple makes, but the Vision Pro should win out on spatiality — 3D-ness — because Vision Pro’s two lenses for spatial video capture are roughly as far apart from each other as human eyes. The two lenses used for capture on an iPhone are, of course, much closer to each other than any pair of human eyes. But despite how close the two lenses are to each other, the 3D effect is very compelling on spatial video captured on an iPhone. It’s somehow simultaneously very natural-looking and utterly uncanny.
It’s a stunning effect and remarkable experience to watch them. And so the iPhone, overall, is going to win out as the “best” capture device for spatial video — even if footage captured on Vision Pro is technically superior — because, as the old adage states, the best camera is the one you have with you. I have my iPhone with me almost everywhere. That will never be even close to true for Vision Pro next year.
Here’s what I wrote about spatial video back in June, after my first hands-on time with Vision Pro:
Spatial photos and videos — photos and videos shot with the Vision Pro itself — are viewed as a sort of hybrid between 2D content and fully immersive 3D content. They don’t appear in a crisply defined rectangle. Rather, they appear with a hazy dream-like border around them. Like some sort of teleportation magic spell in a Harry Potter movie or something. The effect reminded me very much of Steven Spielberg’s Minority Report, in the way that Tom Cruise’s character could obsessively watch “memories” of his son, and the way the psychic “precogs” perceive their visions of murders about to occur. It’s like watching a dream, but through a portal opened into another world.
When you watch regular (non-spatial) videos using Vision Pro, or view regular still photography, the image appears in a crisply defined window in front of you. Spatial videos don’t appear like that at all. I can’t describe it any better today than I did in June: it’s like watching — and listening to — a dream, through a hazy-bordered portal opened into another world.
Several factors contribute to that dream-like feel. Spatial videos don’t look real: it doesn’t look or feel like the subjects are truly there in front of you. The live pass-through video you see in Vision Pro, of the actual real world around you, does look real. That pass-through video of actual reality is so compelling, so realistic, that in both my demo experiences to date I forgot that I was always looking at video on screens in front of my eyes, not just looking through a pair of goggles with my eyes’ own view of the world around me.
So Vision Pro is capable of presenting video that looks utterly real — because that’s exactly how pass-through video works and feels. Recorded spatial videos are different. For one thing, reality is not 30 fps, nor is it only 1080p. This doesn’t make spatial videos look low-resolution or crude, per se, but rather more like movies. The upscaled 1080p imagery comes across as film-like grain, and the obviously-lower-than-reality frame rate conveys a movie-like feel as well. Higher resolution would look better, sure, but I’m not sure a higher frame rate would. Part of the magic of movies and TV is that 24 and 30 fps footage has a dream-like aspect to it.
Nothing you’ve ever viewed on a screen, however, can prepare you for the experience of watching these spatial videos, especially the ones you will have shot yourself, of your own family and friends. They truly are more like memories than videos. The spatial videos I experienced yesterday that were shot by Apple looked better — framed by professional photographers, and featuring professional actors. But the ones I shot myself were more compelling, and took my breath away. There’s my friend, Joanna, right in front of me — like I could reach out and touch her — but that was 30 minutes ago, in a different room.
Prepare to be moved, emotionally, when you experience this.
My briefing and demo experience yesterday was primarily about capturing spatial video on iPhone 15 Pro and watching it on Vision Pro, but the demo also took me through the entire Photos app experience in VisionOS.
Plain old still photos look amazing. You can resize the virtual window in which you’re viewing photos as large as you could practically desire. It’s not merely like having a 20-foot display — a size far more akin to a movie theater screen than a television. It’s like having a 20-foot display with retina-quality resolution, and the best brightness and clarity of any display you’ve ever used. I spend so much time looking at my own iPhone-captured still photos on my iPhone display that it’s hard to believe how good they can look blown up to billboard-like dimensions. Just plain still photos, captured using an iPhone.
And then there are panoramic photos. Apple first introduced Pano mode back in 2012, with the iPhone 5. That feature has never struck me as better than “Kind of a cool trick”. In the decade since, I’ve taken only about 200 of them. They just look too unnatural, too optically distorted, when viewed on a flat display. And the more panoramic you make them, the more unnatural they look when viewed flat.
Panoramic photos viewed using Vision Pro are breathtaking.
There is no optical distortion at all, no fish-eye look. It just looks like you’re standing at the place where the panoramic photo was taken — and the wider the panoramic view at capture, the more compelling the playback experience is. It’s incredible, and now I wish I’d spent the last 10 years taking way more of them.
As a basic rule, going forward, I plan to capture spatial videos of people, especially my family and dearest friends, and panoramic photos of places I visit. It’s like teleportation.
The Vision Pro experience is highly dependent upon foveated rendering, which Wikipedia succinctly describes as “a rendering technique which uses an eye tracker integrated with a virtual reality headset to reduce the rendering workload by greatly reducing the image quality in the peripheral vision (outside of the zone gazed by the fovea).” Our retinas work like this too — we really only see crisply what falls on the maculas at the center of our retinas. Vision Pro really only renders at high resolution what we are directly staring at. The rest is lower-resolution, but that’s not a problem, because when you shift your gaze, Vision Pro is extraordinarily fast at updating the display.
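If you’re curious what that tradeoff looks like in the abstract, here’s a toy sketch of the idea. It is nothing like Apple’s actual pipeline, which is surely far more sophisticated; it just illustrates spending resolution where the fovea is pointed, with arbitrary illustrative thresholds.

```swift
import simd

// Toy model of foveated rendering: choose a render scale for each
// screen tile based on its angular distance from the gaze point.
// Full resolution at the fovea, progressively less in the periphery.
func renderScale(tileCenter: SIMD2<Float>, gaze: SIMD2<Float>) -> Float {
    let degreesFromGaze = simd_distance(tileCenter, gaze)
    switch degreesFromGaze {
    case ..<5:  return 1.0   // foveal zone: render at full resolution
    case ..<15: return 0.5   // near periphery: half resolution
    default:    return 0.25  // far periphery: quarter resolution
    }
}
```

The hard part, as the next paragraph gets at, is not the bucketing but the speed: the scheme only works because the gaze tracking and display updates outrun your eyes.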
I noticed yesterday that if I darted my eyes from one side to the other fast enough, I could sort of catch it updating the foveation. Just for the briefest of moments, you can catch something at less than perfect resolution. I think. It is so, so fast at tracking your gaze and updating the displays that I can’t be sure. It’s just incredible, though, how detailed and high resolution the overall effect is. My demo yesterday was limited to the Photos app, but I came away more confident than ever that Vision Pro is going to be a great device for reading and writing — and thus, well, work.
The sound quality of the speakers in the headset strap is impressive. The visual experience of Vision Pro is so striking — I mean, the product has “Vision” in its name for a reason — that the audio experience is easy to overlook, but it’s remarkably good.
Navigating VisionOS with your gaze and finger taps is so natural. I’ve spent a grand total of about an hour, spread across two 30-minute demos, using Vision Pro, but I already feel at home using the OS. It’s an incredibly natural interaction model based simply on what you are looking at. My enthusiasm for this platform, and the future of spatial computing, could not be higher. ★
The HEVC spec allows for a single file to contain multiple video streams. That’s what Apple is doing, with metadata describing which stream is “left” and which is “right”. Apple released preliminary documentation for this format back in June, just after WWDC. ↩︎
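Here’s a sketch of what detecting such a file might look like from AVFoundation, assuming the stereo-multiview media characteristic I believe those preliminary docs describe; treat it as illustrative, not definitive.

```swift
import AVFoundation

// Sketch: check whether a movie file carries stereo multiview
// (left/right) video streams, per Apple's preliminary MV-HEVC docs.
// The media characteristic name is an assumption drawn from those docs.
func isSpatialVideo(at url: URL) async throws -> Bool {
    let asset = AVURLAsset(url: url)
    let videoTracks = try await asset.loadTracks(withMediaType: .video)
    return videoTracks.contains { track in
        track.hasMediaCharacteristic(.containsStereoMultiviewVideo)
    }
}
```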
According to Apple, these are the average file sizes per minute of video (rough bitrate math in the sketch after the list):
• Regular 1080p 30 fps: 65 MB
• Spatial 1080p 30 fps: 130 MB
• Regular 4K 30 fps: 190 MB ↩︎
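Worked out as bitrates, that’s simple unit conversion from Apple’s figures:

```swift
import Foundation

// Convert Apple's quoted megabytes-per-minute figures into megabits per second.
let formats: [(name: String, mbPerMinute: Double)] = [
    ("Regular 1080p 30 fps", 65),
    ("Spatial 1080p 30 fps", 130),
    ("Regular 4K 30 fps", 190),
]

for f in formats {
    let mbps = f.mbPerMinute * 8 / 60  // 8 bits per byte, 60 seconds per minute
    print("\(f.name): ~\(String(format: "%.1f", mbps)) Mbps")
}
```

So spatial capture doubles the regular 1080p bitrate (two streams, after all), while still coming in under the regular 4K30 rate.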
Paraphrased:
“This is the digital crown. You’ll be using this today to adjust the immersiveness by turning it, and you’ll press the crown if you need to go back to the home screen. On the other side is the button you would use to capture photos or videos using Vision Pro. We won’t be using that button today.”
“But does that button work? If I did press that button, would it capture a photo or video?”
“Please don’t press that button.” ↩︎
Filipe Espósito, with a nice scoop for 9to5Mac:
By analyzing the new API, we’ve learned that it has an extension endpoint declared in the system, which means that other apps can create extensions of this type. Digging even further, we found a new, unused entitlement that will give third-party apps permission to install other apps. In other words, this would allow developers to create their own app stores.
The API has basic controls for downloading, installing, and even updating apps from external sources. It can also check whether an app is compatible with a specific device or iOS version, which the App Store already does. Again, this could easily be used to modernize MDM solutions, but here’s another thing.
We also found references to a region lock in this API, which suggests that Apple could restrict it to specific countries. This wouldn’t make sense for MDM solutions, but it does make sense for enabling sideloading in particular countries only when required by authorities — such as in the European Union.
Makes sense. If Apple winds up being forced to allow sideloading in the EU, I think they will only allow it in the EU. Apple still has a pending appeal, but I doubt they’ll win, and the deadline for compliance is March of next year, which probably means iOS 17.3 or 17.4.
New episodes usually drop at the crack of dawn, but today’s Dithering was held until 10am PT/1pm ET due to an embargo. Yesterday I got hands-on experience shooting spatial video using an iPhone 15 Pro and watching those videos that I shot — along with others, recorded by Apple — on a Vision Pro headset. I’m blown away once again. I’ll have a column describing my experience out later this afternoon, but Dithering subscribers can hear my thoughts now.
Dithering as a standalone subscription costs just $5/month or $50/year. You get two episodes per week, each exactly 15 minutes long. You can also get it as part of my co-host Ben Thompson’s Stratechery Plus bundle — a veritable steal at just $12/month or $120/year. I just love having an outlet like Dithering for weeks like this one. People who try Dithering seem to love it, too — we have remarkably little churn.
10-minute intro video from Humane co-founders Imran Chaudhri and Bethany Bongiorno. Really curious to see reviews of this. They really do mean for this to replace, not supplement, your phone.
Juli Clover, MacRumors:
The second beta of iOS 17.2 adds a new feature that allows an iPhone 15 Pro or iPhone 15 Pro Max to record Spatial Video that can be viewed in the Photos app on Apple’s forthcoming Apple Vision Pro headset.
Spatial Video recording can be enabled by going to the Settings app, tapping into the Camera section, selecting Formats, and toggling on “Spatial Video for Apple Vision Pro.”
It’s off by default, but I think it will be on by default in the future, if you have a Vision headset associated with your iCloud account. When enabled, you get an extra button in the viewfinder area of the Camera app when you’re in Video mode, similar to the Macro toggle or the toggle for Action mode. Very simple. Spatial videos captured on an iPhone are always and only 1080p at 30 fps. And when you view them on a normal screen — i.e. any device other than a Vision headset — they just look like regular 1080p videos.
David Pierce at The Verge:
Humane has been teasing its first device, the AI Pin, for most of this year. It’s scheduled to launch the Pin on Thursday, but The Verge has obtained documents detailing practically everything about the device ahead of its official launch. What they show is that Humane, the company noisily promoting a world after smartphones, is about to launch what amounts to a $699 wearable smartphone without a screen that has a $24 a month subscription fee, and which runs on a Humane-branded version of T-Mobile’s network with access to AI models from Microsoft and OpenAI.
I sincerely hope I’m wrong, but this sounds like the Quibi of AI gadgets.
You may recall back in June, I linked to a story by Mark Wilson about the creation of LoveFrom Serif, a new modern revival of John Baskerville’s legendary type designs created by some of the designers behind Apple’s San Francisco.
Designer Antonio Cavedoni presented a lecture on its design, and that video is now available. Splendid.
Apple Newsroom yesterday:
Today Apple announced updates to Final Cut Pro across Mac and iPad, offering powerful new features that help streamline workflows. Final Cut Pro now includes improvements in timeline navigation and organization, as well as new ways to simplify complex edits. The apps leverage the power-efficient performance of Apple silicon along with an all-new machine learning model for Object Tracker, and export speeds are turbocharged on Mac models powered by multiple media engines. Final Cut Pro for iPad brings new features to further enhance the portable Multi-Touch editing experience, including support for voiceover recording, expanded in-app content options, added color-grading presets, and workflow improvements. These updates to Final Cut Pro will be available later this month on the App Store.
I mentioned last week that video editors took notice that Apple’s behind-the-scenes look at their “Scary Fast” keynote showed the film being edited in Premiere Pro, not Final Cut Pro, and that this wasn’t helping allay the fears of Final Cut Pro devotees that Apple was losing interest in it, à la the still-lamented Aperture.
It occurred to me then that the best evidence that Apple remains keenly committed to Final Cut Pro is that they (finally?) ported it, along with Logic Pro, to the iPad earlier this year — and that the iPad versions are good.
Now, as if on cue, Apple has announced significant feature updates to Final Cut Pro (and Logic Pro) for both Mac and iPad.
Earlier today I was making a screen recording on my iPhone to share with a friend, and during the recording, a notification for a text message arrived. It wasn’t particularly personal, but I made another take of the screen recording anyway. This has happened to me numerous times before — sort of a Murphy’s Law thing, like buttered toast always seeming to fall wrong-side down. It occurred to me that Do Not Disturb should turn on automatically when you’re recording your screen, and I posted the idea to Mastodon.
Glad I did. Because Cabel Sasser replied that it already worked like that for him, and then Jason Walke — my favorite person in the world today — explained that this is enabled by turning on “Smart Activation” for Do Not Disturb mode.
Go to Settings → Focus → Do Not Disturb, and under “Set a Schedule”, tap Add Schedule and enable Smart Activation. Now, when you start a screen recording, Do Not Disturb turns on automatically, and it turns itself off a few seconds after you finish recording. I’m not sure what else triggers Smart Activation — the description in Settings is pretty vague — but for screen recordings this is just what I wanted.
(Ryan Jones chimed in with a good idea: an option to automatically beautify the status bar in screen recordings by removing any extraneous icons (like the Do Not Disturb moon), showing perfect cellular and Wi-Fi signal strength, and having the battery show 100 percent. “Presenter Mode”, more or less.)
Aamir Siddiqui, Android Authority:
Summarizing the strings above, it seems Magic Editor will refuse to edit:
- Photos of ID cards, receipts, and other documents that violate Google’s GenAI terms.
- Images with personally identifiable information.
- Human faces and body parts.
- Large selections or selections that need a lot of data to be generated.
Reminiscent of the way that photocopiers and image-editing apps like Photoshop try (and largely succeed) to prevent you from editing or manipulating images of currency.
Isaac Arnsdorf, Josh Dawsey, and Devlin Barrett, reporting Sunday for The Washington Post:
Donald Trump and his allies have begun mapping out specific plans for using the federal government to punish critics and opponents should he win a second term, with the former president naming individuals he wants to investigate or prosecute and his associates drafting plans to potentially invoke the Insurrection Act on his first day in office to allow him to deploy the military against civil demonstrations.
In private, Trump has told advisers and friends in recent months that he wants the Justice Department to investigate onetime officials and allies who have become critical of his time in office, including his former chief of staff, John F. Kelly, and former attorney general William P. Barr, as well as his ex-attorney Ty Cobb and former Joint Chiefs of Staff chairman Gen. Mark A. Milley, according to people who have talked to him, who, like others, spoke on the condition of anonymity to describe private conversations. Trump has also talked of prosecuting officials at the FBI and Justice Department, a person familiar with the matter said.
In public, Trump has vowed to appoint a special prosecutor to “go after” President Biden and his family. The former president has frequently made corruption accusations against them that are not supported by available evidence.
To facilitate Trump’s ability to direct Justice Department actions, his associates have been drafting plans to dispense with 50 years of policy and practice intended to shield criminal prosecutions from political considerations. Critics have called such ideas dangerous and unconstitutional.
Trump and his supporters remain the biggest threat to America, and to the concept of liberal democracy itself, that the world has ever seen. These are plans for a dictatorship, pure and simple.
Tom Nichols, writing at The Atlantic:
Trump is, to put it mildly, an emotionally disordered man. But such men are usually only a hazard to their families and themselves, especially if they lack money or power. Trump has both, but even more important, he has people around him willing to use that money and power against American democracy. As the Post report reveals, these henchmen are now trying to turn Trump’s ravings into an autocratic program; without their aid, Trump would be just another motormouthed New York executive living on inherited money and holding court over a charred steak while the restaurant staff roll their eyes. With their support, however, he is an ongoing menace to the entire democratic order of the United States. [...]
The coalition of prodemocracy voters — I am one of them — is shocked at the relative lack of outrage when Trump says hideous things. (The media’s complacency is a big part of this problem, but that’s a subject for another day.) For many of us, it feels as if Trump put up a billboard in Times Square that says “I will end democracy and I will in fact shoot you in the middle of Fifth Avenue if that’s what it takes to stay in power” and no one noticed.
Daniel Eran Dilger, writing at AppleInsider:
I’m not the first person to be saved by paramedics alerted by an emergency call initiated by Crash Detection. There have also been complaints of emergency workers inconvenienced by false alert calls related to events including roller coasters, where the user didn’t cancel the emergency call in time.
But I literally have some skin in the game with this new feature because Crash Detection called in an emergency response for me as I was unconscious and bleeding on the sidewalk, alone and late at night. According to calls it made, I was picked up and on my way to an emergency room within half an hour.
Because my accident occurred in a potentially dangerous and somewhat secluded area, I would likely have bled to death if the call hadn’t been automatically placed.
Man, what a harrowing story. Best wishes to Dilger for a speedy recovery.