By John Gruber
Ben Sandofsky, writing at the Halide blog:
“Doesn’t the iPhone XR do that? It also only has a single camera!”, you might say. As we covered previously, while the iPhone XR has a single camera, it still obtains depth information through hardware. Its sensor features focus pixels, which you can think of as tiny pairs of “eyes” designed to help with focus. The XR uses the very slight differences between what each eye sees to generate a very rough depth map.
The new iPhone SE doesn’t have focus pixels, or any other starting point for depth. It generates depth entirely through machine learning. It’s easy to test this yourself: take a picture of another picture.
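To make the hardware approach concrete: focus pixels amount to a stereo pair with a tiny baseline, and depth falls out of per-pixel disparity, since nearer objects shift more between the two views. Here is a minimal sketch of that principle using OpenCV's block matcher; the file names, focal length, and baseline are assumptions for illustration, not Apple's actual pipeline:

```python
# Sketch: depth from a tiny stereo baseline, the same principle as
# the XR's focus pixels. Assumes opencv-python; "left.png" and
# "right.png" are hypothetical near-identical views a few mm apart.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, per pixel, how far a patch shifted between
# the two views; nearer objects shift more (larger disparity).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth is inversely proportional to disparity:
#   depth = focal_length_px * baseline_m / disparity
focal_length_px = 2750.0  # assumed focal length in pixels
baseline_m = 0.004        # assumed ~4 mm baseline between the "eyes"
depth_m = np.where(disparity > 0, focal_length_px * baseline_m / disparity, 0)
```

With a baseline this small the disparities are tiny, which is why the XR's hardware depth map is only a rough one.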
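The SE's approach, depth from a single image with no parallax at all, is the same class of problem tackled by open-source monocular depth networks. Apple's model isn't public, so as a stand-in, here is a sketch using the MiDaS model from torch.hub (assumes PyTorch, opencv-python, and a network connection for the first run; "photo.jpg" is a hypothetical input):

```python
# Sketch: monocular depth purely from machine learning, the class of
# technique the SE relies on. Uses the open-source MiDaS model as a
# stand-in, since Apple's model isn't public.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
batch = transforms.small_transform(img)  # resize + normalize for the network

with torch.no_grad():
    prediction = midas(batch)  # relative inverse depth, one value per pixel
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()
```

Run this on a photograph of a photograph and the network still predicts varying depth across the flat print, which is exactly the test Sandofsky describes: hardware disparity would report a flat plane, while a learned model guesses depth from image content alone.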
The fact that the new SE apparently has the exact same sensor as the iPhone 8, but is noticeably more capable, exemplifies the potential of computational photography. Remember, too, that everything Apple does for the SE can also be applied to iPhones (and iPads) that do have multiple cameras and focus pixels on their sensors. A rising tide lifts all boats.
★ Monday, 27 April 2020