TWIML AI Podcast
Why Vision Language Models Ignore What They See with Munawar Hayat - #758
Dec 09, 2025
· 57m
In this episode, we’re joined by Munawar Hayat, researcher at Qualcomm AI Research, to discuss a series of papers presented at NeurIPS 2025 focusing on multimodal and generative AI. We dive into the persistent challenge of object hallucination in Vision-Language Models (VLMs), why models often discard visual information in favor of pre-trained language priors, and how his team used attention-guided alignment to enforce better visual grounding. We also explore a novel approach to generalized contrastive learning designed to solve complex, …