Recognizing and Utilizing Novel Research Opportunities with Artificial Intelligence

Keywords: Psychology, Organizational Behavior, Artificial Intelligence, Knowledge Management, Process Management, Business, Computer Science, Social Psychology

Authors: Georg von Krogh, Quinetta M. Roberson, Marc Gruber

Source: Academy of Management Journal [Academy of Management], Vol. 66, No. 2: 367–373. Cited by: 47

DOI: 10.5465/amj.2023.4002
Academy of Management Journal, Vol. 66, No. 2, From the Editors
Georg von Krogh (Eidgenössische Technische Hochschule Zürich, ETH Zurich), Quinetta Roberson (Michigan State University), and Marc Gruber (Ecole Polytechnique Fédérale de Lausanne, EPFL)
Published online: 19 April 2023

As we witness a fundamental transformation of organizations, societies, and economies through the rapid growth of data and the development of digital technology (George, Osinga, Lavie, & Scott, 2016), artificial intelligence (AI) has the potential to transform the management field. With the power to automate tasks, predict outcomes, and discover patterns in massive amounts of data (Iansiti & Lakhani, 2020), AI changes many aspects of contemporary organizing, including decision-making, problem-solving, and other processes (Bailey, Faraj, Hinds, Leonardi, & von Krogh, 2022). AI also equips firms with capabilities for offering new products and services, developing new business models, and connecting stakeholders. In line with these developments, AI is not only an interesting phenomenon to study in and around organizations (e.g., Krakowski, Luger, & Raisch, 2022; Tang et al., 2022; Tong, Jia, Luo, & Fang, 2021), but also offers management scholars a wealth of research opportunities to enlarge their methodological toolbox and leverage vast amounts and varied types of data (e.g., Choudhury, Allen, & Endres, 2020; Vanneste & Gulati, 2022).
In our quest to push scientific boundaries, we encourage authors to explore these opportunities within Academy of Management Journal (AMJ).

While AI is a trending and "hot" topic discussed by policy-makers, business leaders, and scientists, understanding when and how machines display intelligent behaviors is at the center of modern AI scholarship (Nilsson, 1998). AI represents a broad field of technologies that display intelligent behaviors, including self-awareness, goal formulation, goal-directed action, reasoning, optimization, learning, and autonomous movement. While many readers will immediately recall the Terminator in the movie of the same name, or the rogue artificial psychopath (HAL) in the movie 2001: A Space Odyssey, which epitomizes the horrific efficiency of AI (Clarke, 1968/2000), computer scientists have long abandoned the idea of building machines that "mimic" human intelligence. Today, research focuses on exploring how aspects of artificial intelligence work in practice. As a subset of AI, machine learning (ML) has become a powerful means of discovering patterns in massive amounts of data and making predictions based on such intelligence (George, Haas, & Pentland, 2014; Hannah, Tidhar, & Eisenhardt, 2021). As management scholars enjoy unprecedented access to numerous new data sources, we believe that ML may be of great value to scholars across the different divisions and interest groups of the Academy of Management. We also believe management research can offer important insights to the study of AI by enlarging the types and scope of data to which we expose ML.

The primary purpose of this From the Editors (FTE) piece is to support scholars in taking advantage of research opportunities in ML applications and to offer guidelines for work submitted to AMJ. Specifically, as ML may have broad appeal to management scholars, we discuss the associated benefits and challenges to be considered when engaging in this type of research.
We also review recent advancements in AI that may help management scholars leverage the benefits of ML applications while dealing with some of the challenges of these methods. Finally, we offer a typology for conducting ML research in ways that build and test theory and advance an understanding of management phenomena, consistent with papers published in AMJ.

BENEFITS OF ML METHODS

One key benefit of ML to management scholars is its statistical flexibility: the use of algorithms to fit complex functional forms to a data set (without overfitting) that can also work well out of sample.1 Current ML algorithms fall into three broad categories: (1) reinforcement, (2) unsupervised, and (3) supervised. "Reinforcement learning" comprises algorithms that autonomously learn courses of action to reach a goal within an environment. For example, such algorithms form the backbone of self-driving vehicles that adapt to traffic patterns as they seek to reach a destination, or of advanced chess programs that learn and adapt based on opponents' moves. "Unsupervised learning" algorithms learn to discover patterns in input data (often called "features") without any specified target for learning. Such algorithms do not necessarily require annotated and curated data, which makes them very useful for clustering or dimensionality reduction. For example, unsupervised learning algorithms can support a researcher in identifying groups of firms adopting related business models, pursuing similar strategies within an industry (an initial step in strategic group analysis), or offering competing digital services on a platform based on a large and complex feature set. In contrast, "supervised learning" algorithms, which are most common in the social sciences, learn how to map associations between inputs ("features" X) and outputs ("targets" Y).
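As a minimal illustration of this input-to-output mapping, a supervised classifier can be trained on a feature matrix X and a binary target Y and then asked to predict out of sample. The sketch below uses synthetic data and scikit-learn; it is not drawn from any study cited here:

```python
# Minimal supervised-learning sketch: map features X to a binary target y.
# Synthetic data only; in practice X would hold firm- or transaction-level features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three numeric features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # target depends on features 0 and 1

# Hold out 20% of observations to check out-of-sample performance
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
print(f"Out-of-sample accuracy: {model.score(X_test, y_test):.2f}")
```

Because the model is evaluated only on observations it never saw during training, the reported accuracy speaks to the out-of-sample generalizability emphasized above.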
For example, ML can be used with categorical variables to arrange data as they relate to the target, such as classifying credit card transactions as fraudulent or non-fraudulent based on the time and frequency of transactions, location, amount of payment, and other data. For continuous variables, supervised learning algorithms learn to perform complex regression tasks, such as predicting firm performance based on extensive firm and industry data.

CHALLENGES OF ML METHODS

Because AI and its subset of ML-based methods involve considerable human effort and management, there are also ethical and technical challenges associated with applying them. First, limited access to certain types of data can challenge AI and ML methods. With a few exceptions, most ML algorithms only work effectively with large amounts of data. However, management scholars must grapple with matching the data to which they have access with appropriate data models. Since many firms consider data to be a strategic asset that must be protected through substantial technical and social mechanisms, the extent to which critical data are available may become decisive for the type of questions that drive a research agenda. Researchers may further be constrained by their ability to obtain structured or annotated data for training and evaluating models. While an unsupervised learning approach or manual annotation of a data set may be used to train models, such annotation is prone to errors as data grow. Therefore, access issues continue to create barriers to the use of ML methods and, consequently, to the examination of pertinent phenomena and the development of relevant theories.

Second, ML algorithms are subject to human biases that can become self-reinforcing, as they are reproduced in future data sets and models. Researchers are often biased toward selecting algorithms they know rather than those needed to solve the task at hand, which can produce faulty models that undermine accurate prediction or pattern discovery.
For example, research highlights biases in ML-based candidate selection systems stemming from hidden biases embedded in data from previous candidate pools (Feuerriegel, Shrestha, von Krogh, & Zhang, 2022). Specifically, algorithms trained on White male candidate pools were shown to choose similar candidates and to discriminate against female and ethnic minority candidates. There is also a potential for ML to fall prey to biases resulting from data structures, as the selection of "correct" algorithms depends on the available data and how they are handled. For example, choosing a deep learning model over a simpler supervised model when data are limited can produce faulty predictions. Biases may also derive from performance metrics used to evaluate ML models that fail to detect class imbalances and instead show high performance on target prediction, thus creating an illusion of model accuracy relative to other models.

Third, researchers must consider the costs associated with ML. In particular, massive and variegated data sets often make traditional ML approaches less efficient and pose important constraints for researchers in understanding such data. We also often hear scholars complain that the intricate functioning of ML algorithms remains opaque. Of course, this perceived challenge can be addressed through education and learning about how algorithms work. However, in many cases, the amount of one's own training, and of training data needed, makes it economically inefficient to study their detailed functioning. Thus, the required effort presents a challenge to researchers who do not regularly utilize ML methods.

Despite this range of challenges, recent developments in computer science can help management scholars deal with issues of access, bias, and cost. For example, to build powerful learning models, researchers may draw on data to which they have limited access.
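The class-imbalance metric pitfall described above can be made concrete with a toy calculation (the labels below are invented for illustration): on a data set where 95% of cases belong to one class, a model that always predicts the majority class scores high raw accuracy while learning nothing about the minority class.

```python
# Toy illustration of the class-imbalance "accuracy illusion":
# 95 negatives, 5 positives; the model always predicts the negative class.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Recall on the minority class: share of true positives actually detected
true_pos = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
recall_minority = true_pos / sum(y_true)

print(accuracy)         # 0.95: looks impressive
print(recall_minority)  # 0.0:  the model never detects the minority class
```

Balanced metrics (e.g., balanced accuracy, F1, or area under the precision-recall curve) expose the failure that raw accuracy hides.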
"Federated learning" is a technique for training models across a variety of data stored on separate servers without exchanging those data (Zhang, Xie, Bai, Yu, Li, & Gao, 2021), such as utilizing data from CCTV cameras to monitor and predict traffic patterns across several local conditions without revealing sensitive personal information. Similarly, "transfer learning," an ML method in which pre-trained models are used as starting points for modeling a new task (Niu, Liu, Wang, & Song, 2020), is becoming increasingly important in industries where data protection is crucial, such as defense and pharmaceuticals. To address biases in ML, scientists have begun developing "explainable AI," which can insert explanation and reasoning steps to show researchers the evolution of models while maintaining their predictive accuracy (e.g., Senoner, Netland, & Feuerriegel, 2022). Further, while these explanations often remain limited to predictions for a local area of the data set, researchers are also developing robust methods for integrating them and thus making them applicable to the full data set (Lundberg et al., 2020). To address the learning curve associated with ML methods, scientists have developed "AutoML" systems, which support the choice and evaluation of learning models (He, Zhao, & Chu, 2021). Such systems automatically tune hyperparameters, thereby removing many otherwise complex manual tasks that demand significant research experience. While manually working through iterations in building ML models may be the best way to learn about the opportunities and pitfalls of model selection, AutoML is promising for expanding researchers' methods portfolios.

ML STRATEGIES TO BUILD AND TEST THEORY IN AMJ

Considering that the mission of AMJ is to publish empirical research that tests, extends, or builds management theory, strategies for applying ML in management research should consider the state of theorizing about an empirical phenomenon.
Within the landscape of management and organizational research, four primary ML research strategies can be distinguished based on the continuity of the phenomenon investigated and its theoretical coherence: (1) predictive selection, (2) predictive refinement, (3) formative discovery, and (4) reductive discovery. Such distinctions are important given that ML predictions can only be built on relatively stable data and should be used for the purpose of advancing our scholarly understanding of a phenomenon. We discuss these four research strategies in greater detail below (see also Table 1).

[Table 1: Machine Learning Strategies for Theoretical Contribution in the Landscape of Management and Organizational Research]

Predictive Selection

Management research thrives on a proliferation of theories, methods, and data, and researchers often formulate competing features and hypotheses to explain a phenomenon. In predictive selection, data about a phenomenon are continuous and therefore offer an opportunity to use ML to predict outcomes. At the same time, there is theoretical fragmentation in terms of the availability of many (and possibly competing) priors, such as candidate theories and variables, for explaining the phenomenon. Thus, ML can be used to examine the predictive strength of these priors (features) relative to a dependent variable (target). For example, He, Puranam, Shrestha, and von Krogh (2020) studied how open-source software projects resolve collective disputes around the choice of intellectual property rights. Framing their study in a "governance of the commons" perspective (Ostrom, 1990), the authors argued that there is limited theoretical explanation of how communities resolve disputes over the principles for licensing their open-source software.
After manually identifying 11 priors from existing theories and observations on structural and processual features of governance disputes, they used these priors to code the entire sample of disputes (183 projects). Then, after building an ensemble of ML algorithms to detect robust associations between those independent variables and dispute resolution outcomes (resolved or non-resolved), four algorithms were run and evaluated to achieve the best fit. The model demonstrated that the size of the group involved in a dispute, active efforts to add information in discussion, the application of a choice procedure (e.g., voting), and the type of issue under dispute (e.g., changing an existing license or selecting a new one) predicted the resolution outcome. Based on these insights, the authors created a subsample of 61 cases that guaranteed minimum and maximum variance in observations to "manually" build a theory of governance dispute resolution. Thus, this study demonstrates how ML can be useful for inductively building or elaborating theory.

Predictive Refinement

ML methods can aid researchers further when the stability of the phenomenon enables prediction by ML. Predictive refinement is a viable strategy when alternative theories about the mechanisms explaining the phenomenon have become gradually integrated, and the variables and measures are relatively well established. For example, Rathje and Katila (2021) examined firms' investment decisions and their impact on the enabling nature of technologies using a data set comprising the full set of patents awarded to U.S. firms between 1982 and 2022 (almost two million patents). Predicting antecedents of relationships (e.g., grants) and the effects of partner type (e.g., science agencies), the authors uncovered differences across firms inventing such technologies alone versus with public organization partners, with a subset of 33,130 patents resulting from private–public R&D relationships.
Rathje and Katila (2021) employed a quasi-experimental design, drawing upon supervised ML to build and investigate the treatment versus control groups. ML was used to perform propensity score matching, which helped to identify balanced subsamples of patents with respect to observed covariates. This learning model avoided the intractability associated with conventional propensity score-matching methods and enabled the authors to consider a much larger set of potentially confounding dimensions. In particular, regularization and cross-validation made it possible to circumvent overfitting of the model to the data, and the ML training sample, optimization techniques, and holdout sample were used to make the most accurate predictions. Thus, this approach illustrates how ML can be used to fine-tune causal inference, perform sensitivity analyses, and refine measures, with the ultimate goal of refining theory.

Formative Discovery

A researcher may be confronted with a discontinuous phenomenon wherein the data set does not allow for accurate predictions through ML. Additionally, theory about the phenomenon may be fragmented. Under these conditions, ML can support formative discovery by identifying patterns in the data set that are useful for gaining insights into the phenomenon. Specifically, formative discovery is a strategy that can help scholars build unprecedented and often pre-theoretical understanding of a phenomenon by revealing patterns that may not be obvious to even the best-trained observer. For example, drawing on a rapidly emerging and foundational literature on the outcomes of diversity in organizations, Wang, Dinh, Jones, Upadhyay, and Yang (2023) studied alignment between firms' diversity statements and their employees' online ratings of those firms' diversity, equality, and inclusion (DEI) efforts.
As several firms publicly condemned racism and affirmed their stance on DEI following the rise of the Black Lives Matter social movement and the deaths of many Black Americans during the spring of 2020, the authors drew attention to a limited scholarly understanding of the contents of such statements and their impact on organizational outcomes. To explore the statement–outcome link, the authors first applied structural topic modeling (STM), an unsupervised ML technique, to data from open letters to stakeholders across the Fortune 1000 during late May to early June 2020 to identify themes (patterns) in DEI statements. After training, the learning models identified general DEI themes related to supporting and acknowledging the Black community and committing to diversifying the workforce, which were used in a second study targeting millions of data points of employees' DEI ratings of firms on Glassdoor.com. Using topic probability scores (based on STM) for each firm's statement, which represent the likelihood that a statement falls into a particular theme, the results showed that firms that released public statements and referenced identity-conscious topics received more favorable DEI ratings from their employees. Accordingly, this study is an exemplar of how ML can help to discover patterns in massive data sets and thereby provide novel conceptual insights into poorly understood phenomena.

Reductive Discovery

Researchers are often challenged by a need to identify the limitations of existing theory. Reductive discovery strategies are useful for identifying patterns in data that reveal such conceptual limits to generalizability. While limits to generalizability may be revealed through replication studies, there is additional value in using ML to examine emerging patterns in novel data sets on a discontinuous phenomenon.
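The topic-modeling step in a formative-discovery design of the kind described above can be sketched in miniature. The sketch uses scikit-learn's LDA as a rough stand-in for STM (which is typically estimated with the R stm package) on a handful of invented statements:

```python
# Miniature topic-modeling sketch (LDA as a stand-in for structural topic
# modeling; the tiny corpus below is invented for illustration only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

statements = [
    "we stand with the black community against racism",
    "we commit to diversifying our workforce and leadership",
    "supporting the black community is central to our values",
    "hiring goals will diversify the workforce at every level",
]

# Bag-of-words features for each statement
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(statements)

# Fit two topics and obtain per-statement topic probabilities
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_probs = lda.fit_transform(X)

# Each row sums to 1: the likelihood a statement falls into each theme
print(topic_probs.round(2))
```

The per-statement topic probabilities correspond to the "topic probability scores" used to link statement themes to downstream outcomes.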
ML can reveal hitherto unknown associations across features, bringing into question the applicability of existing theoretical explanations and offering up novel learning opportunities. For example, Belikov, Rzhetsky, and Evans (2022) developed a promising approach to mitigating the challenges of generalizability, using Bayesian calculus to predict robust scientific claims based on data extracted from prior publications and weighted by institutional, social, and scientific factors. The authors applied ML to the case of gene regulatory interaction studies and identified a set of fundamental characteristics that predict replicability across contexts, which have been used to guide scholars' choices of research topics and programs. Our field thrives on great diversity in categories, constructs, theories, methods, and data compared with the life sciences; this diversity is an opportunity, bearing in mind that ML improves task performance over time as the underlying models are exposed to increasingly diverse data.

ML STRATEGIES TO BUILD AND TEST THEORY: AN EXAMPLE

Engaging in rigorous ML research requires an understanding of the requisite methodological steps as well as the expertise and effort needed at each step. Because such requirements are most apparent in research that makes predictions with supervised learning, we offer an example that highlights relevant choices and outcomes involved in ML methods (Shrestha, He, Puranam, & von Krogh, 2020; Shrestha, Krishna, & von Krogh, 2021).
Specifically, because an ML model (a specified set of computations built from ML algorithms) works through different values in the data when solving a task, we discuss the three steps in building a learning model: (1) data management, (2) learning, and (3) evaluation.

During data management, the researcher specifies the task to be completed and the relevant data sources, and proceeds to collect and analyze the data (identifying distributions, skewness, or class imbalance). She then divides the data into two sets: the "training data," used to iteratively improve the model, and the "holdout data," used to evaluate the predictive power of the model (typically an 80:20 or 70:30 split). She also applies techniques to deal with outliers and missing values. The researcher then conducts "feature engineering," which specifies the features in the data used to build the model. Feature engineering involves constructing, deriving, and transforming data into variables that can be used as input features for the model (Chapman, Gilmore, Chapman, Mehrubeoglu, & Mittelstet, 2020). For numerical data, this may mean normalizing each feature into a [0, 1] interval. Accordingly, a sound ML design relies on finding the right features on which to train the model and reporting the details of this activity. It is useful to sense-test the features against the existing literature, since the purpose of ML is to help researchers build and test theory.

During the learning stage, the researcher chooses a performance metric for the prediction task (e.g., log loss). Once the correct performance metrics have been established, ML algorithms must be chosen to fit the data type and size, the desired prediction accuracy, the interpretability of the model, the extent of the feature set, and so on. Notably, there are many specialized ML algorithms from which to choose for classification (Y is a categorical variable) or regression (Y is a continuous variable) tasks.
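The data-management step described above can be sketched minimally as follows (synthetic data; the 80:20 split and 0-to-1 normalization mirror the description, with the scaler fitted on training data only so that no information leaks from the holdout set into preprocessing):

```python
# Data-management sketch: 80:20 train/holdout split and 0-1 normalization.
# Synthetic numeric features; real studies would start from raw firm data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.normal(loc=50, scale=10, size=(200, 4))  # four numeric features
y = rng.integers(0, 2, size=200)                 # binary target

# 80:20 split into training and holdout data
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Fit the scaler on the training data only, then apply it to the holdout
# set, so the holdout data never influence the preprocessing step.
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)  # each feature now in [0, 1]
X_hold_scaled = scaler.transform(X_hold)

print(X_train_scaled.min(), X_train_scaled.max())
```

Fitting preprocessing on the training split alone is a small design choice with a large payoff: it keeps the holdout evaluation an honest estimate of out-of-sample performance.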
Some will be familiar to management scholars, such as linear and logistic regression, although many useful algorithms are less common and demand careful examination and explanation prior to use (e.g., tree models: decision trees, random forests, gradient-boosted trees). Having chosen an algorithm, the researcher next selects and optimizes the model by searching for hyperparameters (e.g., number of trees, tree depth, batch size, learning rate), which control the learning process. While some simple algorithms require no hyperparameters, many require researchers to carefully select such parameters (or to automate that selection) to achieve good learning performance. This "tuning" is commonly done by applying standard search techniques, such as simple or greedy search (i.e., making locally optimal choices at each stage), to find the best fit. Thus, the selection of ML algorithms, as well as the inclusion and tuning of hyperparameters, is critically important in applying ML.

During evaluation, the researcher examines how the chosen model performs predictions on the holdout data relative to the chosen performance metrics. This step involves the analysis of model errors and aims to give the researcher a sense of a model's generalizability beyond the training data. In addition, reporting the outcome of the evaluation step is necessary for future research to advance on the same or similar data sets. An important aspect of evaluation is also to check the usability and deficiencies of the model in a real-world context.

A CALL TO ACTION

Given AMJ's mission of publishing empirical research that significantly advances management theory and contributes to management practice, there are several considerations for authors targeting the Journal as an outlet for significant value-added contributions to the field's understanding of an issue or topic via AI and ML.
While the Journal has augmented its reviewer capacity and methodological expertise to handle manuscripts with ML applications, a few deliberations will improve the likelihood that a submission is favorably received. Beyond being clear about why they use ML to discover patterns and to build and test theory, researchers should focus on data engagement, data treatment, and data management.

Submissions to AMJ that draw on ML methods should demonstrate deep understanding of and engagement with the data. Researchers should take steps to comprehensively familiarize themselves with the data at hand and to ask what can or should be learned from them. In so doing, they should pay particular attention to feature engineering: specifying what features can be extracted from the raw data that are relevant for predicting the target.

Authors drawing on AI and ML methods should also consider parameter choices and reflect on how bias may influence those choices and the interpretation of their findings. For example, when choosing a model, it is imperative that researchers understand the goals they are trying to achieve. Similarly, researchers should make sure their training data sets are as representative as possible of the whole population. It is also important to devote space (in text or in online appendices) to explaining such choices and other aspects of feature engineering in detail. Researchers should likewise share their code (at least during the review process), and we generally encourage publishing code in publicly accessible repositories.

Finally, authors of ML-based studies must take particular care in the management of their data. In addition to following the ethical standards set by the Academy of Management, scholars who apply these methods to private and publicly available individual data must safeguard data security and privacy.
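One common safeguard of this kind is to replace direct identifiers with salted one-way hashes before analysis. A minimal sketch (the record fields below are invented for illustration):

```python
# Minimal anonymization sketch: replace direct identifiers with salted
# one-way hashes before analysis. Field names are invented for illustration.
import hashlib
import secrets

SALT = secrets.token_hex(16)  # store separately and securely; never publish

def pseudonymize(identifier: str) -> str:
    """Return a stable, non-reversible pseudonym for an identifier."""
    return hashlib.sha256((SALT + identifier).encode()).hexdigest()[:12]

records = [
    {"employee_id": "E-1001", "rating": 4},
    {"employee_id": "E-1002", "rating": 2},
]

# Analysis variables stay intact; identities are removed
anonymized = [
    {"employee_id": pseudonymize(r["employee_id"]), "rating": r["rating"]}
    for r in records
]
print(anonymized)
```

Because the same identifier always maps to the same pseudonym within a run, records can still be linked across data sets during analysis without exposing the underlying identities.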
For example, authors should anonymize data and develop a comprehensive data management plan to protect data and avoid any breach of privacy.

CONCLUSION

AI should not ossify a creative mind but instead embolden and inspire researchers to engage with novel and unconventional data in their areas of interest. The mindset driving this engagement should be: "How can these tools help me uncover what I always wanted to know, but never had the power to study?" Once clarity exists on that question, we may approach our research questions and phenomena with a portfolio of research strategies, including formative and reductive discovery, predictive selection, and predictive refinement. In doing so, we have both the opportunity and the ability to transform management research and practice.

1 While classical statistics focuses on extracting inferences for a population from a sample, ML aims at finding generalizable predictive patterns (Bzdok, Altman, & Krzywinski, 2018). This essential feature may support researchers in scrutinizing effect sizes in very large samples (see Combs, 2010).

ACKNOWLEDGMENTS

We are very grateful for comments on earlier versions of this editorial from Savindu Herath, Riitta Katila, Phanish Puranam, and Yash Raj Shrestha. Any errors are our own.

REFERENCES

Bailey, D. E., Faraj, S., Hinds, P. J., Leonardi, P. M., & von Krogh, G. 2022. We are all theorists of technology now: A relational perspective on technology and organizing. Organization Science, 33: 1–18.
Belikov, A. V., Rzhetsky, A., & Evans, J. 2022. Prediction of robust scientific facts from literature. Nature Machine Intelligence, 4: 445–454.
Bzdok, D., Altman, N., & Krzywinski, M. 2018. Statistics versus machine learning. Nature Methods, 15: 233–234.
Chapman, K. W., Gilmore, T. E., Chapman, C. D., Mehrubeoglu, M., & Mittelstet, A. R. 2020. Camera-based water stage and discharge prediction with machine learning. Hydrology and Earth System Sciences Discussions. Forthcoming.
Choudhury, P., Allen, R. T., & Endres, M. H. 2020. Machine learning for pattern discovery in management research. Strategic Management Journal, 42: 30–57.
Clarke, A. C. 2000. 2001: A space odyssey. New York, NY: ROC. (Original work published 1968)
Combs, J. G. 2010. Big samples and small effects: Let's not trade rigor and relevance for power. Academy of Management Journal, 53: 9–18.
Feuerriegel, S., Shrestha, Y. R., von Krogh, G., & Zhang, C. 2022. Bringing artificial intelligence to business management. Nature Machine Intelligence, 4: 611–613.
George, G., Haas, M., & Pentland, A. 2014. Big data and management. Academy of Management Journal, 57: 321–326.
George, G., Osinga, E. C., Lavie, D., & Scott, B. A. 2016. From the editors: Big data and data science methods for management research. Academy of Management Journal, 59: 1493–1507.
Hannah, D. P., Tidhar, R., & Eisenhardt, K. M. 2021. Analytic models in strategy, organizations, and management research: A guide for consumers. Strategic Management Journal, 42: 329–360.
He, V. F., Puranam, P., Shrestha, Y. R., & von Krogh, G. 2020. Resolving governance disputes in communities: A study of software license decisions. Strategic Management Journal, 41: 1837–1868.
He, X., Zhao, K., & Chu, X. 2021. AutoML: A survey of the state-of-the-art. Knowledge-Based Systems, 212: 106622.
Iansiti, M., & Lakhani, K. R. 2020. Competing in the age of AI: Strategy and leadership when algorithms run the world. Boston, MA: Harvard Business Review Press.
Krakowski, S., Luger, J., & Raisch, S. 2022. Artificial intelligence and the changing sources of competitive advantage. Strategic Management Journal. Forthcoming.
Lundberg, S. M., Erion, G., Chen, H., DeGrave, A., Prutkin, J. M., Nair, B., Katz, R., Himmelfarb, J., Bansal, N., & Lee, S.-I. 2020. From local explanations to global understanding with explainable AI for trees. Nature Machine Intelligence, 2: 56–67.
Nilsson, N. J. 1998. Artificial intelligence: A new synthesis. San Francisco, CA: Morgan Kaufmann.
Niu, S., Liu, Y., Wang, J., & Song, H. 2020. A decade survey of transfer learning (2010–2020). IEEE Transactions on Artificial Intelligence, 1: 151–166.
Ostrom, E. 1990. Governing the commons: The evolution of institutions for collective action. Cambridge, U.K.: Cambridge University Press.
Rathje, J. M., & Katila, R. 2021. Enabling technologies and the role of private firms: A machine learning matching analysis. Strategy Science, 6: 1–109.
Senoner, J., Netland, T., & Feuerriegel, S. 2022. Using explainable artificial intelligence to improve process quality: Evidence from semiconductor manufacturing. Management Science, 68: 5704–5723.
Shrestha, Y. R., He, V. F., Puranam, P., & von Krogh, G. 2020. Algorithm supported induction for building theory: How can we use prediction models to theorize? Organization Science, 32: 856–880.
Shrestha, Y. R., Krishna, V., & von Krogh, G. 2021. Augmenting organizational decision-making with deep learning algorithms: Principles, promises, and challenges. Journal of Business Research, 123: 588–603.
Tang, P. M., Koopman, J., McClean, S. T., Zhang, J. H., Li, C. H., De Cremer, D., Lu, Y., & Ng, C. T. S. 2022. When conscientious employees meet intelligent machines: An integrative approach inspired by complementarity theory and role theory. Academy of Management Journal, 65: 1019–1054.
Tong, S., Jia, N., Luo, X., & Fang, Z. 2021. The Janus face of artificial intelligence feedback: Deployment versus disclosure effects in employee performance. Strategic Management Journal, 42: 1600–1631.
Vanneste, B., & Gulati, R. 2022. Generalized trust, external sourcing, and firm performance in economic downturns. Organization Science, 33: 1599–1619.
Wang, W., Dinh, J. V., Jones, K. S., Upadhyay, S., & Yang, J. 2023. Corporate diversity statements and employees' online DEI ratings: An unsupervised machine-learning text-mining analysis. Journal of Business and Psychology, 38: 45–61.
Zhang, C., Xie, Y., Bai, H., Yu, B., Li, W., & Gao, Y. 2021. A survey of federated learning. Knowledge-Based Systems.

© Academy of Management Journal. Published in print: 1 April 2023.