By John Gruber
Fascinating report from Joseph Cox for Motherboard:
The call came from PayPal’s fraud prevention system. Someone had tried to use my PayPal account to spend $58.82, according to the automated voice on the line. PayPal needed to verify my identity to block the transfer.
“In order to secure your account, please enter the code we have sent your mobile device now,” the voice said. PayPal sometimes texts users a code in order to protect their account. After entering a string of six digits, the voice said, “Thank you, your account has been secured and this request has been blocked.”
“Don’t worry if any payment has been charged to your account: we will refund it within 24 to 48 hours. Your reference ID is 1549926. You may now hang up,” the voice said.
But this call was actually from a hacker. The fraudster used a type of bot that drastically streamlines the process for hackers to trick victims into giving up their multi-factor authentication codes or one-time passwords (OTPs) for all sorts of services, letting them log in or authorize cash transfers.
Here’s the gist of how the bots work.
The bot calls you, the victim, using a faked Caller ID. So the Caller ID the victim sees might say something like “PayPal Inc.” or “Bank of America”.
The bots sound robotic and automated. There’s no uncanny valley here — the bots are clearly bots. But a lot of legitimate voice-driven phone systems sound like bots. We’ve normalized talking to robots on the phone, and these bots are taking advantage of that. That these bots sound obviously robotic is a feature, not a bug. Cox has a recording of one such bot in his report.
The bot triggers an actual 2FA code to be sent to the victim. Let’s say the crooks know (or just guess) your email address and password for the service they’re targeting — quite possibly because the email/password combination appeared in one of the many major data leaks in recent years. When the call starts, the bot enters the target’s email address and password on the site, which results in the site sending a 2FA code via SMS to the victim.
Now the bot, on the phone, says that to complete this “security verification” or whatever, just enter the code they just sent you via text. PayPal — or your bank, or Amazon, or whoever — actually did just text you a code. The call is fraudulent but the SMS message was legit. But if you give the legit code to the fraudulent bot, boom, now the bot has the 2FA code needed to actually go into your account and steal your money.
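The flow described above can be sketched as a toy simulation. Everything here is hypothetical and for illustration only — the class and function names are made up, and this is not any real bot’s code. The key point it captures: the service’s OTP check is working exactly as designed; the attack is purely in who the victim hands the code to.

```python
import random

class Service:
    """Stands in for PayPal or your bank: checks a password, then an SMS OTP."""
    def __init__(self, password: str):
        self.password = password
        self.pending_otp = None  # the code "texted" to the account's phone

    def login(self, password: str) -> bool:
        if password != self.password:
            return False
        # Password correct: the real service sends a fresh 6-digit code
        # via SMS to the victim's phone. That SMS is entirely legitimate.
        self.pending_otp = f"{random.randrange(10**6):06d}"
        return True  # login now awaits the OTP

    def submit_otp(self, code: str) -> bool:
        ok = self.pending_otp is not None and code == self.pending_otp
        self.pending_otp = None  # one-time: code is spent either way
        return ok

def attack(service: Service, leaked_password: str) -> bool:
    """Simulate the relay: leaked credentials + a victim who reads the code back."""
    # Step 1: bot enters the leaked email/password, which triggers the
    # real service to text a real 2FA code to the victim.
    if not service.login(leaked_password):
        return False
    # Step 2: the spoofed "fraud prevention" call asks the victim to enter
    # the code they just received. We model a victim who complies — the
    # code they read back is exactly the one the service just sent.
    code_read_back_by_victim = service.pending_otp
    # Step 3: bot relays the legit code to the service, completing the login.
    return service.submit_otp(code_read_back_by_victim)
```

Note that nothing is “hacked” in the technical sense: if the leaked password is wrong, the attack fails at step 1; if the victim hangs up instead of entering the code, it fails at step 3.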
This is devilishly simple, and you can see how it’s effective. According to Cox, some of these bots also target authentication codes from apps like Google Authenticator or Authy. The bot just asks you to key in the current code from your app.
The other thing that intrigues me about this whole scheme is that the interface to these bots — meaning, the interface a human criminal uses to interact with the bots — is entirely text-based, going through a service like Telegram or Discord. That makes sense, but it also feels decidedly old-school — like the sort of terminal-based interfaces for “games” my friends and I would write in BASIC decades ago. “Type Y for this or type N for that; enter victim’s bank name now” — that sort of thing. Again, Cox illustrates this copiously in his article, including with a video showing a bot’s interface in action. As is so often the case, the simplest possible thing works the most reliably.
★ Thursday, 4 November 2021