a16z Podcast
What's Missing Between LLMs and AGI - Vishal Misra & Martin Casado
Mar 17, 2026
· 47m
Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what's actually required for AGI: the ability to keep learning after training and the move from pattern matching to understanding cause and effect.

Resources:

Follow Vishal Misra on X: https://x.com/vishalmisra …