TWIML AI Podcast
Why Vision Language Models Ignore What They See with Munawar Hayat - #758
Dec 09, 2025
· 57m
In this episode, we’re joined by Munawar Hayat, researcher at Qualcomm AI Research, to discuss a series of papers presented at NeurIPS 2025 focusing on multimodal and generative AI. We dive into the persistent challenge of object hallucination in Vision-Language Models (VLMs), why models often discard visual information in favor of pre-trained language priors, and how his team used attention-guided alignment to enforce better visual grounding. We also explore a novel approach to generalized contrastive learning designed to solve complex, …