Recurrence and Attention for Long-Context Transformers with Jacob Buckman - #750
Today, we're joined by Jacob Buckman, co-founder and CEO of Manifest AI, to discuss achieving long context in transformers. We discuss the bottlenecks of scaling context length and recent techniques to overcome them, including windowed attention, grouped query attention, and latent space attention. We explore the idea of weight-state balance and the weight-state FLOP ratio as a way of reasoning about the optimality of compute architectures, and we dig into the Power Retention architecture, which blends the parallelization of attention …
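For listeners unfamiliar with windowed attention, one of the long-context techniques mentioned above, here is a minimal illustrative sketch (not taken from the episode): each query attends only to a trailing window of keys, so per-token attention cost depends on the window size rather than the full sequence length. The function and parameter names (`sliding_window_mask`, `window_size`) are hypothetical and chosen only for illustration.

```python
import numpy as np

def sliding_window_mask(seq_len: int, window_size: int) -> np.ndarray:
    """Boolean mask: position i may attend to positions [i - window_size + 1, i]."""
    idx = np.arange(seq_len)
    causal = idx[None, :] <= idx[:, None]          # no attending to future tokens
    local = idx[:, None] - idx[None, :] < window_size  # keys within trailing window
    return causal & local

def windowed_attention(q, k, v, window_size):
    """Softmax attention restricted to a trailing window of keys."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    mask = sliding_window_mask(q.shape[0], window_size)
    scores = np.where(mask, scores, -np.inf)       # block disallowed positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Example: 8 tokens, 16-dim head, window of 4
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 16)) for _ in range(3))
print(windowed_attention(q, k, v, window_size=4).shape)  # (8, 16)
```

This sketch is only meant to make the cost argument concrete; the techniques discussed in the episode (grouped query attention, latent space attention, Power Retention) address the bottleneck differently.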