Abstract
In light of the rise of generative AI and recent debates about the socio-political implications of large language models, chatbots, and the like, this paper analyzes the E.U.'s Artificial Intelligence Act (AIA), the world's first comprehensive attempt by a government body to address and mitigate the potentially negative impacts of AI technologies. The paper critically analyzes the AIA from a business and computer ethics point of view, a perspective currently lacking in the academic (e.g., GBOE-related) literature. In particular, it evaluates the AIA's strengths and weaknesses and proposes reform measures that could help to strengthen the AIA. Among the AIA's strengths are its legally binding character, its extra-territorial scope, its ability to address data quality and discrimination risks, and institutional innovations such as the AI Board and publicly accessible logs and a database for AI systems. Among its main weaknesses are its lack of effective enforcement, oversight, and control; the absence of procedural rights and remedy mechanisms; inadequate worker protection; institutional ambiguities; insufficient funding and staffing; and inadequate consideration of sustainability issues. Reform suggestions include establishing independent conformity assessment procedures, strengthening democratic accountability and judicial oversight, introducing redress and complaint mechanisms, ensuring the participation and inclusion of workers, guaranteeing the political independence of the AI Board, providing enhanced funding and staffing for market surveillance authorities, and mandating “green AI.”