Lowering the Cost of Intelligence With NVIDIA's Ian Buck - Ep. 284
Discover how mixture-of-experts (MoE) architecture enables smarter AI models without a proportional increase in compute and cost. Using vivid analogies and real-world examples, NVIDIA’s Ian Buck breaks down MoE models, their hidden complexities, and why extreme co-design across compute, networking, and software is essential to realizing their full potential. Learn more: https://blogs.nvidia.com/blog/mixture-of-experts-frontier-models/
This episode has not been transcribed yet.