Identifier
DOI:10.1016/j.clsr.2024.106012
Abstract
Who should compensate you if you get hit by a car in "autopilot" mode: the safety driver or the car manufacturer? What if you find out you were unfairly discriminated against by an AI decision-making tool that was being supervised by an HR professional? Should the developer compensate you, the company that procured the software, or the (employer of the) HR professional who was "supervising" the system's output? These questions do not have easy answers. In the European Union and elsewhere around the world, AI governance is turning towards risk regulation. Risk regulation alone is, however, rarely optimal. The situations above all involve liability for harms that are caused by or with an AI system. While risk regulations like the AI Act govern some aspects of these human–machine interactions, they offer those impacted by AI systems no rights and few avenues to seek redress. From a corrective justice perspective, risk regulation must be complemented by liability law because, when harms do occur, harmed individuals should be compensated. From a risk-prevention perspective, risk regulation may still fall short of creating optimal incentives for all parties to take precautions. Because risk regulation is not enough, scholars and regulators around the world have argued that AI regulations should be complemented by liability rules to address AI harms when they occur. Using a law and economics framework, this Article examines how the recently proposed AI liability regime in the EU – a revision of the Product Liability Directive and an AI Liability Directive – effectively complements the AI Act, and how it addresses the particularities of AI–human interactions.