a16z Podcast

What's Missing Between LLMs and AGI - Vishal Misra & Martin Casado

Mar 17, 2026 · 47m

Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what's actually required for AGI: the ability to keep learning after training and the move from pattern matching to understanding cause and effect.

Resources:

Follow Vishal Misra on X: https://x.com/vishalmisra …
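The "precise, mathematically predictable" updating described above is usually framed as Bayesian-style updating: as more in-context evidence arrives, the model's next-token distribution shifts the way a posterior would. Below is a minimal sketch of that idea only; the coin-flip setup and numbers are illustrative assumptions, not the experiments Misra discusses in the episode.

```python
import numpy as np

# Toy illustration of Bayesian updating: a predictor watching a stream of
# coin flips revises its belief about the coin's bias after every new flip,
# and its next-flip prediction moves in a precise, rule-governed way.
# (Illustrative only; not a reproduction of the research discussed here.)

theta = np.linspace(0.01, 0.99, 99)       # candidate values for P(heads)
belief = np.ones_like(theta) / len(theta) # start from a uniform prior

observations = [1, 1, 0, 1, 1]            # stream of flips: 1 = heads, 0 = tails

for i, flip in enumerate(observations, start=1):
    likelihood = theta if flip == 1 else (1 - theta)
    belief = belief * likelihood          # Bayes' rule: posterior ∝ prior × likelihood
    belief /= belief.sum()                # renormalize
    p_next_heads = (belief * theta).sum() # posterior predictive for the next flip
    print(f"after {i} flips, P(next flip is heads) = {p_next_heads:.3f}")
```

Each printed line is the analogue of a model's updated next-token probability after conditioning on one more piece of context.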
