TWIML AI Podcast

Recurrence and Attention for Long-Context Transformers with Jacob Buckman - #750

Oct 07, 2025 · 57m

Today, we're joined by Jacob Buckman, co-founder and CEO of Manifest AI, to discuss achieving long context in transformers. We discuss the bottlenecks of scaling context length and recent techniques to overcome them, including windowed attention, grouped query attention, and latent space attention. We explore the idea of weight-state balance and the weight-state FLOP ratio as a way of reasoning about the optimality of compute architectures, and we dig into the Power Retention architecture, which blends the parallelization of attention …
