Major new policy from WebKit, which credits Mozilla for the inspiration:
    We treat circumvention of shipping anti-tracking measures with the
    same seriousness as exploitation of security vulnerabilities.
    If a party attempts to circumvent our tracking prevention methods,
    we may add additional restrictions without prior notice. These
    restrictions may apply universally; to algorithmically classified
    targets; or to specific parties engaging in circumvention.

    We do not grant exceptions to our tracking prevention technologies
    to specific parties. Some parties might have valid uses for
    techniques that are also used for tracking. But WebKit often has
    no technical means to distinguish valid uses from tracking, and
    doesn’t know what the parties involved will do with the collected
    data, either now or in the future.

    There are practices on the web that we do not intend to disrupt,
    but which may be inadvertently affected because they rely on
    techniques that can also be used for tracking. We consider this to
    be unintended impact.
Equating tracking with malware and security exploits is a major policy change, and absolutely correct. Notably, they are not respecting commercial interests at all. The user’s privacy comes first, and if there is commercial collateral damage from that, fuck it:
    WebKit will do its best to prevent all covert tracking, and all
    cross-site tracking (even when it’s not covert). These goals apply
    to all types of tracking listed above, as well as tracking
    techniques currently unknown to us.

    If a particular tracking technique cannot be completely prevented
    without undue user harm, WebKit will limit the capability of
    using the technique. For example, limiting the time window for
    tracking or reducing the available bits of entropy — unique data
    points that may be used to identify a user or a user’s behavior.
Hopefully, this will help close the email tracking-pixel loophole as well.
The ball is now in Chrome’s court to follow suit. I think Google could aggressively close these same privacy-invasive loopholes without losing their ability to serve targeted ads — they’d simply be limited to serving targeted ads to users who sign into Chrome with their Google accounts.
Giles Turner and Mark Bergen, reporting for Bloomberg*:
    The co-founder of DeepMind, the high-profile artificial
    intelligence lab owned by Google, has been placed on leave after
    controversy over some of the projects he led.

    Mustafa Suleyman runs DeepMind’s “applied” division, which seeks
    practical uses for the lab’s research in health, energy and other
    fields. Suleyman is also a key public face for DeepMind, speaking
    to officials and at events about the promise of AI and the ethical
    guardrails needed to limit malicious use of the technology.

    “Mustafa is taking time out right now after 10 hectic years,” a
    DeepMind spokeswoman said. She didn’t say why he was put on leave.
Probably not a good sign.
* Bloomberg, of course, is the publication that published “The Big Hack” last October — a sensational story alleging that data centers of Apple, Amazon, and dozens of other companies were compromised by China’s intelligence services. The story presented no confirmable evidence at all, was vehemently denied by all companies involved, has not been confirmed by any other publication (despite much effort to do so), and has been largely discredited by one of Bloomberg’s own named sources. By all appearances, “The Big Hack” was complete bullshit. Yet Bloomberg has issued no correction or retraction, and seemingly hopes we’ll all just forget about it. I say we do not just forget about it. Bloomberg’s institutional credibility is severely damaged, and everything they publish should be treated with skepticism until they retract the story or provide evidence that it was true.