Keywords
Harm
Government (linguistics)
Dilemma
Nothing
Political science
Human rights
Software deployment
Engineering ethics
Sociology
Psychology
Public relations
Law
Computer science
Engineering
Epistemology
Operating system
Linguistics
Philosophy
Source
Journal: International Journal of Law and Information Technology
Publisher: Oxford University Press
Date: 2021-09-19
Citations: 7
Identifiers
DOI: 10.1093/ijlit/eaab008
Abstract
The debate on the ethical challenges of artificial intelligence (AI) is nothing new. Researchers and commentators have highlighted the deficiencies of AI technology regarding visible minorities, women, youth, seniors and Indigenous people. Several ethical guidelines and recommendations for AI currently exist; they provide ethical principles and human-centred values to guide the creation of responsible AI. Because these guidelines are non-binding, however, they have had little significant effect. It is time to harness initiatives to regulate AI globally and to incorporate human rights and ethical standards into the creation of AI. Governments need to intervene, and discriminated-against groups should lend their voices to shape AI regulation to suit their circumstances. This study highlights the discriminatory and technological risks suffered by minority and marginalised groups owing to AI's ethical dilemmas. It therefore recommends the guarded deployment of AI vigilantism to regulate the use of AI technologies and prevent harm arising from the operation of AI systems. The appointed AI vigilantes would comprise mainly persons and groups whose rights are at increased risk of being disproportionately affected by AI. This well-intentioned group would work with the government to avoid abuse of power.