By John Gruber
New open letter from current and former researchers at OpenAI and Google DeepMind:
AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.
So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues. Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated. Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry.
The 7 named signers are all former OpenAI or Google DeepMind employees. The 6 anonymous signers are all currently at OpenAI.
See also: Techmeme’s roundup of coverage and commentary.
★ Tuesday, 4 June 2024