Authors
Krzysztof Wach, Cong Doanh Duong, Joanna Ejdys, Rūta Kazlauskaitė, Paweł Korzyński, Grzegorz Mazurek, Joanna Paliszkiewicz, Ewa Ziemba
Abstract
Objective: The objective of the article is to provide a comprehensive identification and understanding of the challenges and opportunities associated with the use of generative artificial intelligence (GAI) in business. This study sought to develop a conceptual framework that gathers the negative aspects of GAI development in management and economics, with a focus on ChatGPT.

Research Design & Methods: The study employed a narrative and critical literature review and developed a conceptual framework based on prior literature. We followed a line of deductive reasoning in formulating our theoretical framework to make the study's overall structure rational and productive. This article should therefore be viewed as a conceptual article that highlights the controversies and threats of GAI in management and economics, with ChatGPT as a case study.

Findings: Based on a deep and extensive query of the academic literature on the subject, as well as the professional press and Internet portals, we identified various controversies, threats, defects, and disadvantages of GAI, in particular ChatGPT. We then grouped the identified threats into clusters and summarized them as seven main threats: (i) no regulation of the AI market and an urgent need for regulation, (ii) poor quality, lack of quality control, disinformation, deepfake content, and algorithmic bias, (iii) automation-spurred job losses, (iv) personal data violation, social surveillance, and privacy violation, (v) social manipulation and the weakening of ethics and goodwill, (vi) widening socio-economic inequalities, and (vii) AI technostress.

Implications & Recommendations: It is important to regulate the AI/GAI market. Advocating for the regulation of the AI market is crucial to ensure a level playing field, promote fair competition, protect intellectual property rights and privacy, and prevent potential geopolitical risks. The changing job market requires workers to continuously acquire new (digital) skills through education and retraining. As the training of AI systems becomes a prominent job category, it is important to adapt and take advantage of new opportunities. To mitigate the risks related to personal data violation, social surveillance, and privacy violation, GAI developers must prioritize ethical considerations and work to develop systems that prioritize user privacy and security. To avoid social manipulation and the erosion of ethics and goodwill, it is important to implement responsible AI practices and ethical guidelines: transparency in data usage, bias mitigation techniques, and monitoring of generated content for harmful or misleading information.

Contribution & Value Added: By drawing attention to the controversies and hazards associated with GAI and ChatGPT, this article may help underscore the significance of resolving the ethical and legal considerations that arise from the use of these technologies.