What's Missing Between LLMs and AGI - Vishal Misra & Martin Casado
Vishal Misra returns to explain his latest research on how LLMs actually work under the hood. He walks through experiments showing that transformers update their predictions in a precise, mathematically predictable way as they process new information, explains why this still doesn't mean they're conscious, and describes what's actually required for AGI: the ability to keep learning after training and the move from pattern matching to understanding cause and effect.

Resources:
Follow Vishal Misra on X: https://x.com/vishalmisra …
The transcript for this episode has not yet been written.