Data Protection Act 1998
Promotion (chess)
Legislature
Task (project management)
Training (meteorology)
Computer science
Quality (philosophy)
Applications of artificial intelligence
Key (lock)
Artificial intelligence
Political science
Engineering ethics
Law
Engineering
Computer security
Management
Economics
Epistemology
Physics
Philosophy
Meteorology
Politics
Identifier
DOI:10.1080/17579961.2021.1977219
Abstract
In response to recent regulatory initiatives at the EU level, this article shows that training data for AI not only play a key role in the development of AI applications, but are also currently only inadequately captured by EU law. In doing so, I focus on three central risks of AI training data: risks concerning data quality, discrimination and innovation. Existing EU law, with the new copyright exception for text and data mining, adequately addresses only part of this risk profile. The article therefore develops the foundations of a discrimination-sensitive quality regime for data sets and AI training, one that emancipates itself from the controversial question of the applicability of data protection law to AI training data. Furthermore, it spells out concrete guidelines for the re-use of personal data for AI training purposes under the GDPR. Ultimately, the legislative and interpretive task consists in striking an appropriate balance between individual protection and the promotion of innovation. The article closes with an assessment of the proposal for an Artificial Intelligence Act in this respect.