Now, don’t get offended, but — you aren’t as good at clocking deepfakes as you think you are.
And it’s not just you — nobody’s that good at it. Not your mom, or your boss, or anyone in your IT department.
To make matters worse, you probably think you can spot a fake. After all, you see weird AI-generated videos of celebrities on social media and they give you that uncanny valley tingle. But it’s a different ballgame when all you’ve got to go on is a voice.
In real life, people only catch voice clones about 50% of the time. You might as well flip a coin.
And that makes us extremely vulnerable to attacks.
In the “classic” voice clone scam, the caller is after an immediate payout (“Hi, it’s me, your boss. Wire a bunch of company money to this account ASAP”). Then there are the more complex social engineering attacks, where a phone call is just the entry point for breaking into a company’s systems to steal data or plant malware (that’s what happened in the MGM attack, albeit without the use of AI).
As more and more hackers use voice cloning in social engineering attacks, deepfakes are becoming such a hot-button issue that it’s hard to tell the fear-mongering (for instance, it definitely takes more than three seconds of audio to clone a voice) from the actual risk.
To disentangle the true risks from the exaggerations, we need to answer some basic questions about how these attacks work and what we should actually be worried about.
Like a lot of modern technologies, deepfake attacks exploit some deep-seated fears, like the fear that your boss is mad at you. Social engineers have preyed on these anxieties since the dawn of the scam, and voice clones give their old tactics a potent new edge.
But the good news is that we can be trained to look past those fears and recognize a suspicious phone call — even if the voice sounds just like someone we trust.
If you want to learn more about our findings, read our piece on the Kolide blog. It’s a frank and thorough exploration of what we should be worried about when it comes to audio deepfakes.
This RSS sponsorship ran on Monday, 29 April 2024.