TWIML AI Podcast

Why Vision Language Models Ignore What They See with Munawar Hayat - #758

Dec 09, 2025 · 57m

In this episode, we’re joined by Munawar Hayat, researcher at Qualcomm AI Research, to discuss a series of papers presented at NeurIPS 2025 focusing on multimodal and generative AI. We dive into the persistent challenge of object hallucination in Vision-Language Models (VLMs), why models often discard visual information in favor of pre-trained language priors, and how his team used attention-guided alignment to enforce better visual grounding. We also explore a novel approach to generalized contrastive learning designed to solve complex, …

This episode has not been transcribed.
