By John Gruber
Jon Porter, The Verge:
Samsung has published an English-language blog post explaining the techniques used by its phones to photograph the Moon. The post’s content isn’t exactly new — it appears to be a lightly edited translation of an article posted in Korean last year — and doesn’t offer much new detail on the process. But, because it’s an official translation, we can more closely scrutinize its explanation of what Samsung’s image processing technology is doing.
There’ve been a couple of follow-ups on this since I wrote about it a few weeks ago. Marques Brownlee posted a short video, leaning into the existential question of the computational photography era: “What is a photo?” And Input’s Ray Wong, in this Twitter thread, took umbrage at my having said he’d been “taken” by Samsung’s moon photography hype.
Here’s a clarifying way of thinking about it. What Samsung is doing with photographs of the moon is fine as a photo editing feature. It is not, however, a camera feature. With computational photography there is no clear delineation between what’s part of the camera imaging pipeline and what’s a post-capture editing feature. There’s a gray zone, to be sure. But this moon shot feature is not in that gray zone. It’s post-capture photo editing, even if it happens automatically — closer to Photoshop than to photography.
Where I draw the line is whether or not the software is clarifying reality as captured by the camera. Is the software improving/upscaling the input, or replacing the input with imagery that doesn’t originate from the camera? Here’s a snippet of a debate on Twitter, from Sebastiaan de With (at the helm of the Halide camera app’s account):
One can argue “Well, it’s the moon, it’s always the same” — and perhaps that’s true, but the issue is with photographic accuracy. In-fill should be informed by underlying input data and shape the output image; you can argue output shouldn’t reshape the input this significantly.
And that’s my point. What if the moon weren’t the same? What if it gets hit by a large meteor, creating a massive new visible-from-earth crater? Or what if our humble friend Phony Stark blows tens of billions of dollars erecting a giant billboard on the surface of the moon, visible from earth, that reads “@elonmusk”? A photo of the moon taken with one of these Samsung phones wouldn’t show either of those things, yet would appear to capture a detailed image of the moon’s surface. A camera should capture the moon as it is now, and computational photography should help improve the detail of that image of the moon as it appears now. Samsung’s phones are rendering the moon as it was, at some point in the past when this ML model was trained.
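To make that distinction concrete, here’s a rough sketch of the two approaches. This is purely illustrative, not Samsung’s actual processing; every function, array, and number in it is hypothetical. The first function works only with the pixels the sensor recorded. The second blends in detail from a stored reference the camera never saw.

```python
import numpy as np

# Hypothetical "memorized moon" texture, standing in for detail a trained
# model might reproduce. It's random noise here; nothing about it comes
# from any real camera or any real model.
MEMORIZED_MOON = np.random.default_rng(0).random((128, 128))

def enhance_grounded(captured: np.ndarray) -> np.ndarray:
    """Upscale and sharpen using only the pixels the sensor recorded."""
    upscaled = np.kron(captured, np.ones((2, 2)))              # naive 2x upscale
    mean = upscaled.mean()
    return np.clip(mean + 1.2 * (upscaled - mean), 0.0, 1.0)   # crude contrast boost

def enhance_substituted(captured: np.ndarray) -> np.ndarray:
    """Blend in detail the camera never saw. If the real moon changed,
    the output would still show the memorized version."""
    upscaled = np.kron(captured, np.ones((2, 2)))
    memorized = MEMORIZED_MOON[: upscaled.shape[0], : upscaled.shape[1]]
    return np.clip(0.4 * upscaled + 0.6 * memorized, 0.0, 1.0)

blurry_capture = np.random.default_rng(1).random((64, 64))     # stand-in for a blurry moon shot
print(enhance_grounded(blurry_capture).shape)     # (128, 128), all derived from the capture
print(enhance_substituted(blurry_capture).shape)  # (128, 128), mostly not from the capture
```

The point of the sketch is provenance: in the first case every output pixel traces back to the capture; in the second, most of the output doesn’t, which is why a new crater or a lunar billboard wouldn’t show up.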
And that’s where Samsung steps over the line into fraud. Samsung, in its advertisements, is clearly billing these moon shots as an amazing feature enabled by its 10× optical / 100× digital zoom telephoto camera lens. The company literally presents them as optically superior to a telescope. That’s bullshit. A telescope shows you the moon as it is. Samsung’s cameras do not.
★ Monday, 20 March 2023