By John Gruber
Lengthy profile on Apple’s AI efforts by Steven Levy, for Backchannel:
Probably the biggest issue in Apple’s adoption of machine learning is how the company can succeed while sticking to its principles on user privacy. The company encrypts user information so that no one, not even Apple’s lawyers, can read it (nor can the FBI, even with a warrant). And it boasts about not collecting user information for advertising purposes.
While admirable from a user perspective, Apple’s rigor on this issue has not been helpful in luring top AI talent to the company. “Machine learning experts, all they want is data,” says a former Apple employee now working for an AI-centric company. “But by its privacy stance, Apple basically puts one hand behind your back. You can argue whether it’s the right thing to do or not, but it’s given Apple a reputation for not being real hardcore AI folks.”
This view is hotly contested by Apple’s executives, who say that it’s possible to get all the data you need for robust machine learning without keeping profiles of users in the cloud or even storing instances of their behavior to train neural nets. “There has been a false narrative, a false trade-off out there,” says Federighi. “It’s great that we would be known as uniquely respecting users’ privacy. But for the sake of users everywhere, we’d like to show the way for the rest of the industry to get on board here.”
This is the crux of the whole piece, to my mind. Most of the AI industry treats privacy-invasive data collection, with the computation done in the cloud, as a prerequisite for machine learning. Apple’s approach protects privacy by keeping the data — and performing the computation — on the device.
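To make that contrast concrete, here’s a rough sketch of the two architectures. Every name in it — the type, the functions, the endpoint — is hypothetical, invented purely for illustration; it is not Apple’s actual API or design:

```swift
// Hypothetical sketch only; none of these types, functions, or
// endpoints are Apple's real APIs. The point is where the sensitive
// data and the computation live.

struct TypingHistory {          // sensitive, per-user data
    let words: [String]
}

// Cloud model: the raw history is uploaded and the server does the
// learning. The provider now holds a profile of the user.
func cloudPredictNextWord(_ history: TypingHistory) -> String {
    // upload(history, to: "https://ml.example.com/predict")  // data leaves the device
    return "server-suggested word"
}

// On-device model: the trained model ships to the phone and inference
// runs locally. The raw history never leaves the user's hands.
func localPredictNextWord(_ history: TypingHistory,
                          model: (TypingHistory) -> String) -> String {
    return model(history)       // computation stays on the device
}

// Same result shape, very different privacy properties.
let history = TypingHistory(words: ["on", "my", "way"])
print(localPredictNextWord(history) { $0.words.last ?? "" })
```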
The other interesting angle in the piece is that most AI researchers want to publish their work, whereas Apple tends to attract those who care more about shipping products. Notably, though, Apple is allowing its researchers working on differential privacy to publish.
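Differential privacy deserves a concrete example. The classic mechanism is “randomized response”: each device adds calibrated noise to its own report before anything is transmitted, so no individual record can be trusted, yet the aggregate statistics stay accurate. Here’s a minimal sketch; the survey scenario is invented for illustration, and Apple’s actual mechanisms are more sophisticated, but the core idea — noise added on the device — is the same:

```swift
// Minimal sketch of "randomized response" (Warner, 1965), the textbook
// differential-privacy mechanism. The scenario here is hypothetical.

// Each device reports a single bit, randomized by two coin flips,
// so any individual report is plausibly deniable.
func randomizedResponse(truth: Bool) -> Bool {
    if Bool.random() {          // heads: answer honestly
        return truth
    } else {                    // tails: answer with a second coin flip
        return Bool.random()
    }
}

// The collector never sees a trustworthy individual record, but can
// still recover the population rate: observed = 0.25 + 0.5 * true,
// hence true = 2 * observed - 0.5.
func estimatedTrueRate(from reports: [Bool]) -> Double {
    let observed = Double(reports.filter { $0 }.count) / Double(reports.count)
    return 2 * observed - 0.5
}

// 100,000 simulated users, 30% of whom actually have the trait.
let reports = (0..<100_000).map { _ in
    randomizedResponse(truth: Double.random(in: 0..<1) < 0.3)
}
print(estimatedTrueRate(from: reports))   // prints ≈ 0.3
```

The trade-off is variance: the collector needs far more reports to reach the same accuracy, which is why differential privacy suits population-scale questions rather than per-user profiles.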
★ Thursday, 25 August 2016