Speaker Detection & Diarization

Automatically identify and label different speakers in your audio and video transcriptions. Know exactly who said what.

What is Speaker Diarization?

Speaker diarization is the process of partitioning an audio stream into segments according to the identity of the speaker. In simpler terms, it answers the question "who spoke when?" This is essential for multi-speaker recordings like meetings, interviews, podcasts, conference calls, and legal proceedings where knowing who said what is just as important as what was said.

STT.ai uses advanced neural speaker diarization models that can detect and label speakers in real time. The system creates speaker embeddings -- numerical representations of each voice's unique characteristics -- and clusters them to distinguish between different people. This works even when speakers have similar voices or frequently interrupt each other.

How Speaker Detection Works

1. Voice Activity Detection

The system first identifies which segments of audio contain speech versus silence, music, or background noise.
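As an illustration only (not STT.ai's actual implementation), a minimal voice activity detector can flag frames whose RMS energy exceeds a threshold. The frame length and threshold values below are arbitrary assumptions for the sketch:

```python
import numpy as np

def detect_speech(samples, sample_rate, frame_ms=30, threshold=0.02):
    """Mark each frame as speech (True) or non-speech (False)
    using a simple RMS-energy threshold."""
    frame_len = int(sample_rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    flags = []
    for i in range(n_frames):
        frame = samples[i * frame_len:(i + 1) * frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        flags.append(bool(rms > threshold))
    return flags

# Synthetic audio: 1 s of near-silence followed by 1 s of a loud tone.
sr = 16000
rng = np.random.default_rng(0)
silence = rng.normal(0, 0.001, sr)
tone = 0.5 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
flags = detect_speech(np.concatenate([silence, tone]), sr)
```

Production systems typically use trained neural VAD models rather than a fixed energy threshold, which fails under loud background noise.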

2. Speaker Embedding

Each speech segment is converted into a speaker embedding -- a compact vector that captures the unique vocal characteristics of the speaker.

3. Clustering & Labeling

Embeddings are clustered to group segments from the same speaker together, then each cluster is assigned a label (Speaker 1, Speaker 2, etc.).
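The three steps above can be sketched in miniature. This is a toy illustration, not STT.ai's pipeline: real systems use learned neural embeddings and more robust clustering, but the greedy cosine-distance clustering below shows how segment embeddings get grouped into numbered speakers. The threshold and the 3-D "embeddings" are assumptions made for the example:

```python
import numpy as np

def cluster_speakers(embeddings, threshold=0.5):
    """Assign each embedding to the nearest existing centroid if its
    cosine distance is under `threshold`; otherwise start a new
    speaker cluster. Returns one label per segment."""
    centroids, labels = [], []
    for emb in embeddings:
        emb = emb / np.linalg.norm(emb)
        best, best_sim = None, -1.0
        for i, c in enumerate(centroids):
            sim = float(emb @ c)
            if sim > best_sim:
                best, best_sim = i, sim
        if best is not None and (1 - best_sim) < threshold:
            labels.append(best)
            # fold the new embedding into the running centroid
            centroids[best] = centroids[best] + emb
            centroids[best] /= np.linalg.norm(centroids[best])
        else:
            centroids.append(emb)
            labels.append(len(centroids) - 1)
    return labels

# Toy "voiceprints": two distinct voices, three segments each
rng = np.random.default_rng(0)
voice_a = np.array([1.0, 0.0, 0.0])
voice_b = np.array([0.0, 1.0, 0.0])
segments = [voice_a + rng.normal(0, 0.05, 3) for _ in range(3)] + \
           [voice_b + rng.normal(0, 0.05, 3) for _ in range(3)]
labels = cluster_speakers(segments)
# labels → first three segments share one label, last three another
```

Greedy online clustering like this can label speakers as the audio streams in; offline systems often use agglomerative or spectral clustering over all segments at once for higher accuracy.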

Use Cases for Speaker Detection

Meeting Transcription
Automatically label each participant in meeting recordings. Generate minutes with clear attribution of who said what.
Podcast Transcription
Distinguish between host and guests in podcast episodes. Create show notes with proper speaker attribution.
Interview Transcription
Separate interviewer and interviewee responses for research, journalism, and hiring documentation.
Legal & Compliance
Create official records of depositions, hearings, and compliance calls with clear speaker identification.

Speaker Detection on STT.ai

Speaker detection is available on all paid plans. When you transcribe audio or video with speaker detection enabled, the transcript will include speaker labels inline with the text. You can also export speaker-labeled transcripts in all supported formats including SRT, VTT, DOCX, JSON, and PDF.

Speaker 1 [00:00:01]: Welcome to the meeting, everyone. Let's start with the quarterly review.
Speaker 2 [00:00:05]: Thanks. I have the numbers ready. Revenue is up 23% quarter over quarter.
Speaker 1 [00:00:12]: That's great news. Can you walk us through the breakdown?

The system can detect up to 20 distinct speakers in a single recording. For best results, ensure each speaker has at least a few seconds of solo speech. Overlapping speech is handled but may reduce accuracy in heavily cross-talked segments.
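For export formats like SRT, speaker labels are typically prefixed to each cue's text. A minimal sketch of that rendering (the segment tuples and label style are assumptions for the example, not STT.ai's exact output):

```python
def to_srt(segments):
    """Render (start, end, speaker, text) segments as SRT cues,
    with the speaker label prefixed to each cue's text."""
    def ts(seconds):
        h, rem = divmod(seconds, 3600)
        m, s = divmod(rem, 60)
        ms = round((s - int(s)) * 1000)
        return f"{int(h):02}:{int(m):02}:{int(s):02},{ms:03}"
    cues = []
    for i, (start, end, speaker, text) in enumerate(segments, 1):
        cues.append(f"{i}\n{ts(start)} --> {ts(end)}\n{speaker}: {text}\n")
    return "\n".join(cues)

segments = [
    (1.0, 4.5, "Speaker 1", "Welcome to the meeting, everyone."),
    (5.0, 11.0, "Speaker 2", "Thanks. I have the numbers ready."),
]
srt = to_srt(segments)
print(srt)
```

SRT has no native speaker field, so embedding the label in the cue text is the common convention; formats like JSON can carry the speaker as a structured field instead.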

Try speaker detection now

Upload a multi-speaker recording and see speakers automatically labeled.

Start Transcribing Free

Frequently Asked Questions

How do I transcribe audio or video?
Upload your audio or video file to STT.ai. Select your preferred AI model and options, then click Transcribe. Your transcript will be ready in minutes. Export as TXT, SRT, VTT, DOCX, JSON, or PDF.

Is STT.ai free to use?
Yes! STT.ai offers 600 free minutes per month for all users. No signup is required for your first transcription. Paid plans with more minutes and features start at $5/month.

How accurate are the transcriptions?
Accuracy depends on the AI model you choose and the audio quality. Our best models achieve a 5-7% word error rate on benchmarks, meaning 93-95%+ accuracy. Clear audio with minimal background noise produces the best results.

Which AI models are available?
STT.ai offers 10+ models, including Whisper Large V3, NVIDIA Canary, and more. You can compare results from different models on the same file.

Can I generate subtitles for my videos?
Yes. After transcribing, export your transcript as SRT or VTT subtitle files. These work with YouTube, Vimeo, and all major video platforms.

Does STT.ai detect different speakers?
Yes. STT.ai automatically identifies and labels different speakers using AI speaker diarization. It works across all models and languages.

How long does transcription take?
Most files are transcribed in under 5 minutes. A 1-hour audio file typically takes 2-3 minutes with our fastest models.

Which file formats are supported?
STT.ai supports 20+ audio and video formats, including MP3, WAV, M4A, FLAC, OGG, MP4, MKV, MOV, WebM, and AVI. Export as TXT, SRT, VTT, DOCX, JSON, or PDF.

Is my data private and secure?
Yes. Audio files are processed and deleted after transcription, and your data is never used for training. Client-side encryption is free on all plans: it encrypts stored transcripts with a key only you have. During processing, the server handles your audio in plaintext. Learn more about our security.

Is there an API?
Yes. STT.ai offers a REST API with Python and Node.js SDKs. The free tier includes 100 minutes/month.

Can I edit my transcript after transcribing?
Yes. STT.ai includes a built-in transcript editor where you can correct errors, rename speakers, and adjust timestamps.

Can I share my transcripts?
Every transcript gets a unique shareable link. You can also export to DOCX or PDF for email. Pro plans offer password-protected and permanent links.