Adaptation (eye)
Human intelligence
Artificial general intelligence
Fuzzy logic
Big data
Control (management)
Psychology
Artificial intelligence
Computer science
Data mining
Neuroscience
Authors
Andreas Kaplan,Michael Haenlein
Identifier
DOI:10.1016/j.bushor.2018.08.004
Abstract
Artificial intelligence (AI)—defined as a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation—is a topic in nearly every boardroom and at many dinner tables. Yet, despite this prominence, AI is still a surprisingly fuzzy concept and a lot of questions surrounding it are still open. In this article, we analyze how AI is different from related concepts, such as the Internet of Things and big data, and suggest that AI is not one monolithic term but instead needs to be seen in a more nuanced way. This can either be achieved by looking at AI through the lens of evolutionary stages (artificial narrow intelligence, artificial general intelligence, and artificial super intelligence) or by focusing on different types of AI systems (analytical AI, human-inspired AI, and humanized AI). Based on this classification, we show the potential and risk of AI using a series of case studies regarding universities, corporations, and governments. Finally, we present a framework that helps organizations think about the internal and external implications of AI, which we label the Three C Model of Confidence, Change, and Control.