Google's New Gemini Model Talks Back in Real Time

Published: March 31, 2026 at 12:31 AM

Updated: March 31, 2026 at 12:31 AM

100-word summary

Google just launched gemini-3.1-flash-live-preview, a model built for audio-to-audio conversations that happen live. No transcription step, no lag between you finishing a sentence and the AI responding. It streams audio in both directions simultaneously, like an actual phone call. The model is available now through Google AI Studio. What can you actually do? Build voice assistants that interrupt themselves when corrected, or demo prototypes that respond to spoken questions before slides advance. The catch: Google's privacy notices apply to everything said, and preview models have a habit of disappearing when the company moves on to the next version.

What happened

Google launched gemini-3.1-flash-live-preview, a preview model built for live audio-to-audio conversation. Rather than transcribing speech to text, prompting a model, and synthesizing a reply, it streams audio in both directions at once, so a response can begin as soon as you finish speaking, much like a phone call. The model is available now through Google AI Studio. In practice, that means you can build voice assistants that stop and correct course when interrupted, or prototypes that answer spoken questions in real time during a demo.
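A minimal sketch of what a session might look like, assuming the google-genai Python SDK's Live API (`client.aio.live.connect`); the model name comes from the announcement, but the configuration details and flow here are illustrative, not an official quickstart:

```python
# Sketch of a bidirectional audio session against the Live API.
# Assumes the google-genai Python SDK; exact fields may differ in
# the shipped preview.
import asyncio

MODEL = "gemini-3.1-flash-live-preview"  # name as announced

# Live sessions are configured up front; "AUDIO" asks the model to
# answer with speech rather than text.
LIVE_CONFIG = {"response_modalities": ["AUDIO"]}


async def talk(api_key: str) -> None:
    # Imported lazily so the sketch reads without the SDK installed.
    from google import genai

    client = genai.Client(api_key=api_key)
    async with client.aio.live.connect(model=MODEL, config=LIVE_CONFIG) as session:
        # Send one turn; a real app would stream microphone audio instead.
        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": "Say hello out loud."}]}
        )
        # Audio arrives incrementally, which is what makes barge-in
        # (interrupting the model mid-answer) possible on the client side.
        async for message in session.receive():
            if message.data:  # raw audio bytes
                print(f"received {len(message.data)} audio bytes")


# Usage (requires a valid key and the SDK):
# asyncio.run(talk("YOUR_API_KEY"))
```

The incremental `receive()` loop is the important part: because audio chunks stream back as they are generated, a client can cut playback and send a correction without waiting for the full answer.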

Why it matters

Two caveats apply. Google's privacy notices cover everything said during a session, and preview models have a habit of disappearing when the company moves on to the next version, so anything built on this one should plan for migration.
