Journal: Lecture Notes in Information Systems and Organisation. Date: 2022-01-01. Volume/Issue: 391-407
Identifier
DOI:10.1007/978-3-030-94617-3_27
Abstract
With the rapid proliferation of artificial intelligence (AI) in organizations over the last decade, concerns have arisen regarding human rights, data security, privacy, and other ethical issues that could be at stake due to the uncontrolled use of AI. However, concerns regarding transparency in the use of AI are not yet reflected in any standards for the disclosure of non-financial information, nor in current regulations. Voluntary disclosure on AI, being a novelty, is scarce, lacks standardization, and is largely confined to the financial, technology, and telecommunications sectors. Therefore, the main objective of this paper is to seek consensus and propose a set of relevant elements to structure the information on companies' use of AI, in order to improve transparency, mitigate risks, and demonstrate genuine responsibility in its use. For the purposes of this study, a set of disclosure elements was proposed based on a multi-stakeholder approach, in collaboration between the New Technologies Commission of AECA and the BIDA Observatory. The final proposal was validated through online questionnaires and includes a guide to the general information elements (AI governance model; ethics and responsibility; strategy) as well as more specific disclosure requirements for each medium-to-high-risk automated decision-making (ADM) system. Thus, this research attempts to contextualize the development of artificial intelligence reporting standards.

Keywords: Artificial intelligence; Disclosure; Standardization