AI incidents, audits, and the limits of benchmarks
<p>AI is moving fast from research to real-world deployment, and when things go wrong, the consequences are no longer hypothetical. In this episode, Sean McGregor, co-founder of the AI Verification & Evaluation Research Institute and founder of the AI Incident Database, joins Chris and Dan to discuss AI safety, verification, evaluation, and auditing. They explore why benchmarks often fall short, what red-teaming at DEFCON reveals about machine learning risks, and how organizations can better assess and manage AI …