By John Gruber
It seems like every company is scrambling to stake its claim in the AI gold rush; witness the CEO of Kroger promising to bring LLMs into the dairy aisle. And frontline workers are following suit, experimenting with AI so they can work faster and do more.
In the few short months since ChatGPT debuted, hundreds of AI-powered tools have come onto the market. But while AI-based tools have genuinely helpful applications, they also pose profound security risks. Unfortunately, most companies still haven't come up with policies to manage those risks. In the absence of clear guidance around responsible AI use, employees are blithely handing over sensitive data to untrustworthy tools.
AI-based browser extensions offer the clearest illustration of this phenomenon. The Chrome store is overflowing with extensions that (claim to) harness ChatGPT to do all manner of tasks: punching up emails, designing graphics, transcribing meetings, and writing code. But these tools are prone to at least three types of risk.
Until now, most companies have been caught flat-footed by AI, but these risks are too serious to ignore.
At Kolide, we’re taking a two-part approach to governing AI use.
Every company will have to craft policies based on their unique needs and concerns, but the important thing is to start now. There’s still time to seize the reins of AI, before it gallops away with your company’s data.
To learn more about how Kolide enforces device compliance for companies with Okta, click here to watch an on-demand demo.
This RSS sponsorship ran on Tuesday, 12 September 2023.