Practical AI

Controlling AI Models from the Inside

Jan 20, 2026 · 43m

<p>As generative AI moves into production, traditional guardrails and input/output filters can prove too slow, too expensive, or too limited. In this episode, Alizishaan Khatri of Wrynx joins Daniel and Chris to explore a fundamentally different approach to AI safety and interpretability. They unpack the limits of today’s black-box defenses, the role of interpretability, and how model-native, runtime signals can enable safer AI systems. </p><p>Featuring:</p><ul><li>Alizishaan Khatri – <a href="https://www.linkedin.com/in/alizishaan-khatri-32a20637/">LinkedIn</a></li><li>Chris Benson – <a href="https://chrisbenson.com/">Website</a>, <a href="https://www.linkedin.com/in/chrisbenson">LinkedIn</a>, <a href="https://bsky.app/profile/chrisbenson.bsky.social">Bluesky</a>, <a href="https://github.com/chrisbenson">GitHub</a>, <a …
