By John Gruber
Rene Ritchie, responding to this piece at Wired, which posits that iOS 11’s Core ML machine learning engine could be a privacy problem:
For an example of where that could go wrong, think of a photo filter or editing app that you might grant access to your albums. With that access secured, an app with bad intentions could provide its stated service, while also using Core ML to ascertain what products appear in your photos, or what activities you seem to enjoy, and then go on to use that information for targeted advertising.
Also nothing to do with Core ML. Smart spyware would try to convince you to give it all your photos right up front. That way it wouldn’t be limited to preconceived models or be at risk of removal or restriction. It would simply harvest all your data and then run whatever server-side ML it wanted to, whenever it wanted to.
That’s the way Google, Facebook, Instagram, and similar photo services that target ads based on your photos already work.
Presenting Core ML as a privacy risk is fretting over a hypothetical while Google and Facebook are ransacking your privacy right now.
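To make the mechanics concrete, here’s a rough Swift sketch of the scenario the Wired piece imagines. The `ProductClassifier` model class is a made-up stand-in for whatever a bad actor might bundle; the Photos and Vision calls are the standard iOS 11 APIs. The point is that the enabling grant is photo-library access, not Core ML:

    import Photos
    import UIKit
    import Vision

    // Hypothetical sketch, not anyone's actual code: an app the user has
    // granted photo-library access quietly runs an on-device classifier
    // over every photo. "ProductClassifier" is an invented Core ML model
    // class; everything else is standard Photos/Vision API.
    func profileUserFromPhotos() {
        PHPhotoLibrary.requestAuthorization { status in
            guard status == .authorized,
                  let model = try? VNCoreMLModel(for: ProductClassifier().model)
            else { return }

            let request = VNCoreMLRequest(model: model) { request, _ in
                // A bad actor would aggregate these labels into an
                // ad-targeting profile, with no server round trip
                // and no photos ever leaving the device.
                if let top = (request.results as? [VNClassificationObservation])?.first {
                    print("\(top.identifier) (\(top.confidence))")
                }
            }

            let assets = PHAsset.fetchAssets(with: .image, options: nil)
            let imageOptions = PHImageRequestOptions()
            imageOptions.isSynchronous = true

            assets.enumerateObjects { asset, _, _ in
                PHImageManager.default().requestImage(
                    for: asset,
                    targetSize: CGSize(width: 299, height: 299),
                    contentMode: .aspectFill,
                    options: imageOptions
                ) { image, _ in
                    guard let cgImage = image?.cgImage else { return }
                    try? VNImageRequestHandler(cgImage: cgImage).perform([request])
                }
            }
        }
    }

And note that nothing in this sketch depends on on-device inference: the same photo-library grant would let the app upload every photo and run whatever server-side ML it wanted, which is exactly Ritchie’s point.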
★ Thursday, 26 October 2017