By John Gruber
Stephen Shankland, writing for CNet on the ways “computational photography” improves the cameras in the new Google Pixel phones:
Some of Google’s investment in camera technology takes the form of AI, which pervades just about everything Google does these days. The company won’t disclose all the areas the Pixel 2 camera uses machine learning and “neural network” technology that works something like human brains, but it’s at least used in setting photo exposure and portrait-mode focus.
Neural networks do their learning via lots of real-world data. A neural net that sees enough photographs labeled with “cat” or “bicycle” eventually learns to identify those objects, for example, even though the inner workings of the process aren’t the if-this-then-that sorts of algorithms humans can follow.
“It bothered me that I didn’t know what was inside the neural network,” said Levoy, who initially was a machine-learning skeptic. “I knew the algorithms to do things the old way. I’ve been beat down so completely and consistently by the success of machine learning” that now he’s a convert.
★ Tuesday, 17 October 2017