TWIML AI Podcast

Why Vision Language Models Ignore What They See with Munawar Hayat - #758

Dec 09, 2025 · 57m

In this episode, we’re joined by Munawar Hayat, researcher at Qualcomm AI Research, to discuss a series of papers presented at NeurIPS 2025 focusing on multimodal and generative AI. We dive into the persistent challenge of object hallucination in Vision-Language Models (VLMs), why models often discard visual information in favor of pre-trained language priors, and how his team used attention-guided alignment to enforce better visual grounding. We also explore a novel approach to generalized contrastive learning designed to solve complex, …
