The AI Podcast (NVIDIA)

Lowering the Cost of Intelligence With NVIDIA's Ian Buck - Ep. 284

Dec 29, 2025 · 38m

Discover how mixture‑of‑experts (MoE) architecture is enabling smarter AI models without a proportional increase in compute and cost. Using vivid analogies and real-world examples, NVIDIA's Ian Buck breaks down MoE models, their hidden complexities, and why extreme co-design across compute, networking, and software is essential to realizing their full potential. Learn more: https://blogs.nvidia.com/blog/mixture-of-experts-frontier-models/
