By John Gruber
My thanks to Kolide for sponsoring last week at Daring Fireball. In the few short months since ChatGPT debuted, hundreds of AI-powered tools have come on the market. But while AI-based tools have genuinely helpful applications, they also pose profound security risks. Unfortunately, most companies still haven’t come up with policies to manage those risks. In the absence of clear guidance around responsible AI use, employees are blithely handing over sensitive data to untrustworthy tools.
AI-based browser extensions offer the clearest illustration of this phenomenon. The Chrome store is chock-a-block with extensions that (claim to) harness ChatGPT to do all manner of tasks: drafting emails, designing graphics, transcribing meetings, and writing code. But these tools are prone to at least three types of risk: malware, poor data governance, and prompt injection attacks.
Kolide is taking a two-part approach to governing AI use: helping you draft AI policies as a team, and using Kolide to block malicious tools. Visit Kolide’s website to learn more about how Kolide enforces device compliance for companies with Okta.
★ Monday, 23 October 2023