DOI: 10.1080/00963402.2023.2245242
Abstract
Advances in artificial intelligence (AI) have prompted extensive public concern about this technology's capacity to contribute to the spread of misinformation, algorithmic bias, and cybersecurity breaches, and potentially to pose existential threats to humanity. We suggest that although these threats are both real and important to address, the heightened attention to AI's harms has distracted from human beings' outsized role in perpetuating these same harms. We suggest the need to recalibrate standards for judging the dangers of AI in terms of their risks relative to those of human beings. Further, we suggest that, if anything, AI can aid human beings in decision making aimed at improving social equality, safety, and productivity, and at mitigating some existential threats.

Keywords: Artificial intelligence; existential risk; algorithmic bias; ethics; cybersecurity; nuclear decision making

Disclosure statement
No potential conflict of interest was reported by the author(s).

Funding
Moran Cerf received funding from the Carnegie Corporation of New York (Grant ID: G-19-57248). Adam Waytz received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Notes
1. See: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
2. See: https://www.safe.ai/statement-on-ai-risk
3. See: https://www.nytimes.com/2022/10/30/business/musk-tweets-hillary-clinton-pelosi-husband.html
4. See: https://www.forbes.com/sites/kenrickcai/2023/06/04/stable-diffusion-emad-mostaque-stability-ai-exaggeration/?sh=2bd38c3075c5
5. See: https://www.ft.com/content/06b22337-e862-43e5–8440-d9c225e0c18d
6. See: https://www.bbc.com/news/technology-45809919
7. See: https://www.nytimes.com/2019/12/06/business/algorithm-bias-fix.html
8. See: https://hbr.org/2023/04/the-new-risks-chatgpt-poses-to-cybersecurity
9. See: https://www.forbes.com/sites/tonybradley/2023/02/27/defending-against-generative-ai-cyber-threats/?sh=c62032c10884
10. See: https://sloanreview.mit.edu/article/from-chatgpt-to-hackgpt-meeting-the-cybersecurity-threat-of-generative-ai/
11. See: https://thehackernews.com/2021/02/why-human-error-is-1-cyber-security.html
12. See: https://www.userlike.com/en/blog/consumer-chatbot-perceptions
13. See: https://nypost.com/2023/02/14/the-internet-is-ruining-teens-cdc-report-is-the-latest-proof/
14. See: https://www.brookings.edu/articles/how-tech-platforms-fuel-u-s-political-polarization-and-what-government-can-do-about-it/
15. See: https://www.theatlantic.com/ideas/archive/2022/07/social-media-harm-facebook-meta-response/670975/

Notes on contributors
Moran Cerf is a neuroscientist and professor of business at Columbia University and a former cybersecurity expert. As a recipient of the Carnegie fellowship, he works on the applications of neuroscience and AI in nuclear decision making.
Adam Waytz is a professor of management and organizations at the Kellogg School of Management at Northwestern University and has consulted with Google on its chatbot, Bard.