a16z Podcast

What's Missing Between LLMs and AGI - Vishal Misra & Martin Casado

Mar 17, 2026 · 47m

Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what's actually required for AGI: the ability to keep learning after training and the move from pattern matching to understanding cause and effect.

Resources:

Follow Vishal Misra on X: https://x.com/vishalmisra …

This episode has not yet been transcribed.