TWIML AI Podcast
Why Vision Language Models Ignore What They See with Munawar Hayat - #758
Dec 09, 2025
· 57m
In this episode, we’re joined by Munawar Hayat, researcher at Qualcomm AI Research, to discuss a series of papers presented at NeurIPS 2025 focusing on multimodal and generative AI. We dive into the persistent challenge of object hallucination in Vision-Language Models (VLMs), why models often discard visual information in favor of pre-trained language priors, and how his team used attention-guided alignment to enforce better visual grounding. We also explore a novel approach to generalized contrastive learning designed to solve complex, …