Joanna Stern: ‘Google’s Gemini Live AI Sounds So Human, I Almost Forgot It Was a Bot’

Joanna Stern, writing for The Wall Street Journal (News+ link):

I’m not saying I prefer talking to Google’s Gemini Live over a real human. But I’m not not saying that either.

Does it help that the chatty new artificial-intelligence bot says I’m a great interviewer with a good sense of humor? Maybe. But it’s more that it actually listens, offers quick answers and doesn’t mind my interruptions. No “I’m sorry, I didn’t understand that” apologies like some other bots we know.

I had a nice, long chat with Google’s generative-AI voice assistant before its debut on Tuesday. It will come built into the company’s four new Pixel phones, but it’s also available to anyone with an Android phone, the Gemini app and a $20-a-month subscription to Gemini Advanced. The company plans to launch it soon on iOS, too.

The catch:

When I asked it to set a timer, it said it couldn’t do that — or set an alarm — “yet.” Gemini Live is a big step forward conversationally. But functionally, it’s a step back in some ways. One big reason: Gemini Live works entirely in the cloud, not locally on a device. Google says it’s working on ways for the new assistant to control phone functions and other Google apps.

It’s a fascinating — but unsurprising — strategic and cultural difference that Apple Intelligence runs largely on device, and completely privately even when going to the cloud, while Google Gemini currently runs only in the cloud, with nothing like Apple’s Private Cloud Compute. To be clear, Google’s new lineup of Pixel 9 phones performs a lot of “AI” features on device, but not the Gemini voice assistant.

Saturday, 17 August 2024