Legal Liability for Unavoidable AI Harm Should Depend on Explainability

Research output: Other Conference Contributions › Presentation

Abstract

Despite significant advances in AI, unavoidable risks such as specification gaming and hallucinations remain inherent features of these systems. Current regulatory frameworks, including the EU AI Act, focus on risk prevention but do not adequately address liability for harms caused by these unavoidable risks. To address this gap, we develop a game-theoretic model that examines the optimal liability framework for AI developers. Based on this model, we propose a dynamic liability regime that incentivizes developers to invest in explainability: liability exposure decreases as developers demonstrate higher levels of explainability, creating a direct economic incentive to improve interpretability. The regime links liability to explainability benchmarking, allowing courts to evaluate whether a harm was truly unavoidable or attributable to deficiencies in system design. The framework is flexible and adaptive, relying on industry-driven benchmarking standards so that liability rules evolve alongside technological advances.
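As a rough illustration of the incentive mechanism described in the abstract (a minimal sketch only: the functional forms, parameter values, and grid-search solver below are illustrative assumptions, not the model from the presentation), consider a developer choosing an explainability level e in [0, 1], trading a convex investment cost against a liability share that shrinks as benchmarked explainability improves:

import numpy as np

HARM = 100.0       # expected cost of an unavoidable-harm event (illustrative)
COST_SCALE = 80.0  # scale of the explainability investment cost (illustrative)

def investment_cost(e: float) -> float:
    # Convex cost: each additional unit of explainability is pricier.
    return COST_SCALE * e ** 2

def liability_share(e: float) -> float:
    # Dynamic liability regime: the developer's exposure shrinks as
    # benchmarked explainability rises (here, linearly to zero at e = 1).
    return 1.0 - e

def total_cost(e: float) -> float:
    # Developer's expected private cost under the regime.
    return investment_cost(e) + liability_share(e) * HARM

# Grid search for the developer's privately optimal explainability level.
grid = np.linspace(0.0, 1.0, 1001)
e_star = grid[int(np.argmin([total_cost(e) for e in grid]))]
print(f"privately optimal explainability level: {e_star:.3f}")
# With these toy forms the interior optimum is HARM / (2 * COST_SCALE) = 0.625,
# so larger expected harms push the developer toward more explainability.

Under these assumed functional forms, raising either the expected harm or the steepness of the liability schedule moves the developer's private optimum toward greater explainability, which is the direction of the incentive the regime is designed to create.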
Original language: English
Publication status: Published - 3 Jul 2025
Externally published: Yes
Event: The 7th International Conference on Public Policy - Chiang Mai, Thailand
Duration: 2 Jul 2025 – 4 Sept 2025

Conference

Conference: The 7th International Conference on Public Policy
Abbreviated title: ICPP7
Country/Territory: Thailand
City: Chiang Mai
Period: 2/07/25 – 4/09/25
