Abstract
Shuang Ren, Riikka M. Sarala, Paul Hibbert

The advent of generative artificial intelligence (GAI) has sparked both enthusiasm and anxiety as different stakeholders grapple with its potential to reshape the business and management landscape. This dynamic discourse extends beyond GAI itself to encompass closely related innovations that have existed for some time, such as machine learning, thereby creating a collective anticipation of the opportunities and dilemmas surrounding the transformative or disruptive capacities of these emerging technologies. Recently, ChatGPT's ability to access information from the web in real time marks a significant advancement with profound implications for businesses. This feature is argued to enhance the model's capacity to provide up-to-date, contextually relevant information, enabling more dynamic customer interactions. For businesses, this could mean improvements in areas such as market analysis, trend tracking, customer service and real-time data-driven problem-solving. However, it also raises concerns about the accuracy and reliability of the information sourced, given the dynamic and sometimes unverified nature of web content. Additionally, real-time web access might complicate data privacy and security, as the boundaries of GAI interactions extend into the vast and diverse Internet landscape. These factors necessitate a careful and responsible approach to evaluating and using advanced GAI capabilities in business and management contexts.

GAI is attracting much interest in both the academic and the business practitioner literature. A quick search in Google Scholar, using the search terms 'generative artificial intelligence' and 'business' or 'management', yields approximately 1740 results. Within this extensive repository, scholars delve into diverse facets, exploring GAI's potential applications across various business and management functions, contemplating its implications for management educators and scrutinizing specific technological applications. Learned societies such as the British Academy of Management have also joined forces in leading the discussion on AI and digitalization in business and management academe. Meanwhile, practitioners and consultants alike (e.g. McKinsey & Company, PwC, World Economic Forum) have produced dedicated discussions, reports and forums to offer insights into the multifaceted impacts and considerations surrounding the integration of GAI in contemporary business and management practices. Table 1 illustrates some current applications of GAI as documented in the practitioner literature, covering organizations such as: Zalando [online platform for fashion and lifestyle], Instacart [e-commerce application], Salesforce [cloud-based customer relationship software provider], DHL [logistics provider], Coca-Cola [beverage company], Nestlé and Mondelez [confectionery], Heinz [food processing company], Air India [airline], Duolingo [language learning application] and Mastercard [financial services].

In an attempt to capture the new opportunities and challenges brought about by this technology, and to find a way forward to guide research and practice, management journals have been swift to embrace the trend, introducing special issues on GAI. These issues aim to promote intellectual debate, for instance in relation to specific business disciplines (e.g. Benbya, Pachidi and Jarvenpaa, 2021) or organizational possibilities and pitfalls (Chalmers et al., 2023).
However, amidst these commendable efforts that reflect a broad spectrum of perspectives, a critical examination of the burgeoning hype around GAI reveals a significant gap. Despite the proliferation of discussions among scholars, practitioners and the general public, the prevailing discourse is often speculative and lacks a robust theoretical foundation. This deficiency points to the limited efficacy of existing theories in explaining the unique demands created by GAI and indicates an urgent need to refine prior theories or even develop new ones. There is a pressing need to move beyond the current wave of hype and explore the theoretical underpinnings of GAI and the dynamics of its potential impact, to ensure a more nuanced and informed discussion that can guide future research and application in this rapidly evolving area.

In this direction, the British Journal of Management (BJM) invited prominent scholars who serve as editors of leading business and management journals to weigh in and contribute their diverse theoretical knowledge to this symposium paper on the emerging GAI phenomenon. This collaborative effort aims to advance the theorization of business and management research in relation to the intricacies of GAI's impact, by engaging in intensive discussions on how theory can help make sense of the myths and truths around GAI. The quest for theory, whether building new theory or refining existing theory, is a long-standing tradition in business and management research (e.g. Colquitt and Zapata-Phelan, 2007). While the seven pieces below place different elements under the spotlight of theoretical scrutiny, one common thread is the need to reconceptualize the relational realm of workplaces. The introduction of GAI in the workplace redefines the norm of working together, from a person-to-person group to a human–GAI group, with the latter illustrating three novel conceptual contributions in comparison to traditional understandings of workplace dynamics.

In the realm of the GAI-laden workplace, it is imperative to shift our perspective from a deterministic outlook to one that manages emergence. Quattrone, Zilber and Meyer encapsulate the emergent nature of GAI-related phenomena by pointing out that 'the future is not out there'. Rather than attempting to predict the future, they advocate making the future through creativity and reflection. Equally, they posit that GAI should be viewed as a construction whose functions and effects are not predetermined but shaped by people's decisions and utilization. The etymological lens they bring encourages an ontological rethinking of the impacts of GAI. The recognition of our inability to know what the future is points to a relational approach to creating it, centred upon relations between the current and future generations in specific ways, and between people within generations, objects and locations more broadly. Relationality thus establishes the context for sense-making in the GAI-laden workplace, where workings and outcomes unfold as emergent phenomena.

MacKenzie, Decker and Lubinski illuminate the importance of contextual understanding when examining the impact of GAI on the workplace, advocating an approach in which 'context matters'.
The context they propose is an expansive concept that can encompass the analysis of past imaginaries of existing technologies, an examination of the technologies currently in question alongside other technologies, and the incorporation of institutional forces over time (e.g. economic and political systems). In essence, the call to recognize that 'context matters' should serve as a guiding principle to move beyond idiosyncratic, isolated examinations of GAI and to place it within the intricacies of the contextual relationships that contribute to its emergence and future development.

Brown, Ellis and Gore ask the critical question of how we should redefine 'team' if GAI integrates into our daily work. As conventional definitions of the team comprise individuals, the extent to which AI can be considered a team member becomes pivotal. As much as GAI technologies may seem human-like (including robotic 'humans'), GAI does not yet possess feelings, desires, intentions and responsibility in the same way as human beings.

In the context of a human–GAI team, Davison and Ravishankar provide their first-hand experience of using GAI in their research, specifically for tasks such as reviewing literature and transcribing and analysing data. They caution against mere reliance on GAI in generating original research. Nonetheless, they conclude by highlighting the potential of leveraging effective 'prompts' to maximize the capabilities of GAI, leaving readers with valuable food for thought.

Muzio and Faulconbridge take the concept of human–GAI relationships forward by focusing on the producer–consumer relationships that shape professionalism. They highlight a range of new research questions in which the human–GAI group will challenge established constructs both theoretically and empirically. In addition, Islam and Greenwood contribute to debates about the nature (or absence) of responsibility in the use of GAI as human–GAI interactions unfold. They take a relational perspective on knowledge production, in which the use of GAI-based large language models challenges the production of knowledge and the nature of accountability. These issues are perhaps more profound because the interactions between humans and GAI can be either coordinated or uncoordinated.

In sum, BJM is committed to fostering a deeper understanding of, and stimulating debate around, GAI and its profound impact on business and management studies. The diverse contributions in this symposium collection do not seek to offer definitive solutions; instead, they serve as an invaluable starting point on a journey of exploration and discovery in the field. The insights offered here extend beyond conventional boundaries, challenging and enriching existing management theories with fresh perspectives stimulated by the phenomenon of GAI. These discussions are pivotal in developing, extending, adapting and evolving theoretical frameworks so that they remain relevant in a business landscape that could become GAI-driven. The discussions also extend to the ethical and societal considerations of GAI in management, emphasizing responsible and sustainable business and management practices. By bridging theory and practice, BJM aims to provide managers and practitioners with insights and tools to navigate the complexities of integrating GAI into their strategies and operations, where appropriate, in a sustainable and responsible manner.
In essence, with this symposium, BJM aims to contribute to a collective body of knowledge that seeks not only to understand and explain GAI but also to shape the future of GAI in work, employment, business, governance and society towards sustainable and responsible directions.

Paolo Quattrone, Tammar Zilber, Renate Meyer

The etymology of words is often a source of insights, not only for making sense of their meaning but also for speculating on and imagining meanings that are less obvious, and thereby seeing the phenomena signalled by these words in new and surprising ways. The etymologies of 'artificial' and 'intelligence' do not disappoint. 'Artificial' comes from 'art' and -fex 'maker', from facere 'to do, make'. 'Intelligence' comes from inter 'between' and legere 'choose, pick out, read' but also 'collect, gather'. There is enough in these etymologies to offer a few speculations and to imagine the contours of generative artificial intelligence (GAI) and its possible futures.

The first of these is inspired by the craft of making and relates to the very function and use of AI. Most of the current fascination with AI emphasizes the predictive capacity of the various tools increasingly available and readily at our disposal. Indeed, marketers know well in advance when we will need a new toothbrush, refuel our cars, buy new clothes, and so forth. The list is long. This feature of AI enchants us when, for instance, one thinks of a product and, invariably, an advertisement related to that product appears on our social media page. This quasi-magical predictive ability captures collective imaginations and draws upon well-ingrained forms of knowledge production which presuppose that data techniques are there to represent the world, paradoxically, even when it is not there, as is the case with predictions. The issue is that the future is not out there; we do not know what future generations want from us and still, we are increasingly called to respond to their demands. Despite the availability of huge amounts of data points and intelligence, the future, even if proximal and mundane, as in our examples above, always holds surprises. This means that AI may be useful not to predict the future, but actually to imagine and make it, as the -fex in 'artificial' reveals. This is the art in the 'artificial': it points to the possibility of conceiving of AI as a compositional art, which helps us to create images of the future, sparks imagination and creativity and, hopefully, offers a space for speculation and reflection.

The word 'intelligence' is our second cue, which stresses how inter means to be in, and to explore, what is 'in between'. Just as entrepreneurs are in between different ventures and explore what is not yet there (Hjorth and Holt, 2022), AI may be useful to probe the grey areas between statuses and courses of action. It can be used to create scenarios, to make sure that the very same set of data produces alternative options that leave space for juggling among different decision-making criteria, without reducing decisions about complex states of affairs to a single criterion, most likely, value rather than values. This is how, for instance, one could wisely refrain from both the apocalyptic and the salvific scenarios that characterize the debate about AI. On the one hand, AI is seen as one of the worst possible menaces to humankind: it will take control of our minds and direct our habits, making us entirely dependent.
Very likely, just as the Luddites were proven wrong (but not completely) in the first and second Industrial Revolutions, the pessimistic views will prove wrong, but not completely, as it is clear that AI has agency (Latour, 1987) in informing our judgement, and it does so through various forms of multimodal affects, that is, by relying on our vast repertoire of senses, all mobilized by new forms of technology (think, for example, of smartwatches and how they influence our training habits). On the other hand, AI, much like the first enterprise resource planning (ERP) systems, is seen as a panacea for many of our problems, diseases and grand challenges, from poverty to climate change, at least until one realizes that SAP does not stand for 'Solves All Problems' (Quattrone and Hopper, 2006). These dystopian and utopian attitudes will soon be debunked and leave room for more balanced views, which will acknowledge that AI is both a means to address wicked problems and a wicked problem itself, and, again, realize that wisdom is always to be found in the middle, the very same middle in between views.

In this case, a more balanced in-between view is to realize that AI itself is a construction. Like all resources (Feldman and Worline, 2006) and technologies (Orlikowski, 2000), its function and effect are not pre-given but will be determined by our use thereof. For example, AI will be productive of 'facts', but of those that are reminiscent of the fact that facts are 'made': there is nothing less factual than a fact, for, as the Romans knew so well (from factum, i.e. made), a fact is always constructed, and AI will be making facts in huge quantities. This will be good for speculation, fostering imagination by making a huge number of them available, but also potentially bad, as those who own the ability to establish them as facts will magnify Foucault's adage that knowledge is power.

The third cue lies in the root leg-, from which originate so many words that characterize our contemporary world, both academic and not, including legere (to read, but also to pick and choose), legare (to knot) and indeed 'religion'. Much as medieval classifying techniques used inventories of data to invent new solutions to old problems by recombining such data in novel forms, choosing and picking data depending on the purpose of the calculation, to imagine the future and reimagine the past (Carruthers, 1998), AI will use even bigger inventories of data to generate inventions, until we finally realize that exploring 'what is not' and could become is much more fruitful for imagining the future and the unprecedented than defining 'what is' (Quattrone, 2017). Only then will AI be truly generative. This was the case with Steve Ballmer, then CEO of Microsoft, who, when presented with the first iPhone, exclaimed 'Who would want to pay five hundred dollars for a phone?'. He had not realized that, to comprehend the power and complexities of technologies, it is better to think in terms of what they are not, rather than what they are. The cell phone is not a phone so much as it is a camera, a TV or cinema, a newspaper, a journal/calendar. Google begins a search with X, a negative, and then, by creating correlations, defines what Z could be (a phone may be a cinema) and what it could become (a meeting place). This move from the negative to the potential, from what is not to what can be, is the core of AI. AI can facilitate this exploration into what is not obvious and help us avoid taking things for granted.
So, predicting how AI will develop and affect our lives is bound to fail, as there are so many ways this can go and many unintended consequences. At this stage, it may be more fruitful not to predict the future but to explore how we try to make sense of the unknowable future in the present, and which potential pathways we thereby open and which we close. Exploring the framing contests around AI, the actors involved and the various interests they attempt to serve may tell us more about ourselves than about AI: about our collective fantasies, fears and hopes that shape our present and future.

This brings us to whether, and to what extent, AI can inform human thinking and actions. That technologies influence our behaviour is now taken for granted, but given that this influence is not deterministic, and that technologies have affordances that go beyond the intentions of their designers, what counts as agency and where to find it is possibly a black box that GAI can contribute to reopening. Since the invention of the printing press, and the debate between Roland Barthes and Michel Foucault, the notion of authorship has been questioned (Barthes, 1994; Foucault, 1980), along with authors' authority and accountability. This is even truer now, when algorithms of various kinds already take decisions seemingly autonomously, from high-frequency trading in finance to digital twins in construction, and are now also able to write meaningful sentences that potentially disrupt not only research but also the outlets where these texts are typically published, that is, academic journals (Conroy, 2023). We are moving from non-human 'decision-makers', be they self-driving cars or rovers autonomously exploring Mars, to non-human 'makers' tout court, with the difference that the latter have no responsibility and no accountability. And yet they influence the world and affect our personal, social and work lives.

This has policy and theoretical implications. In policy terms, much as the legal form of the corporation emerged to limit and regulate individual greed (Meyer, Leixnering and Veldman, 2022), we may witness the emergence of a new fictitious persona, this time even more virtual than the corporation, with no factories and no employees, while still producing and distributing value through, and to, them, respectively. Designing anticipatory governance is even more intricate than it was with corporations, as these non-human 'makers' are even more dispersed and ephemeral, not to say slippery. Theoretically, we may be at the edge of a revolution as important as the emergence of organization theory in the twentieth century. It was Herbert Simon (1969) who foresaw the need for a science of the artificial, that is, a science whose object was the organization of the production of artefacts of various kinds, and the need for making sense of the relationship between means and ends when new forms of bounded rationality informed decision-making. We would not be surprised if a 'New Science of the Artificial', this time related to the study of AI rationality, emerged in the twenty-first century. For sure, there will be a need to govern AI and to study how the governance and organization of AI intertwine with human rationality, possibly changing the contours of both.
Niall G. MacKenzie, Stephanie Decker, Christina Lubinski

Recently, generative artificial intelligence (GAI) has been subject to breathless treatments by academics and commentators alike, with claims of impending ubiquity (or doom, depending on your perspective), of life as we know it being upended and of millions of jobs destroyed (Eglash et al., 2020). Historians will, of course, point out that this is nothing new. Technological innovation and adoption have a long and generally well-researched history (Chandler, 2006; Scranton, 2018), and the same is true for resistance to these innovations (Juma, 2016; Mokyr, 1990; Thompson, 1963) and moral panics (Orben, 2020). What, if anything, does history have to tell us about GAI from a theoretical perspective, other than 'it's not new…'?

Good historical practice requires a dialogue between past and present (Wadhwani and Decker, 2017). Thus, if we want to understand GAI, we should understand the character of its development and the context in which it occurred and occurs. GAI's history was, and is, underpinned by progress in several other areas, including mathematics, information technology and telecommunications, warfare, mining and computing science, amongst many more (Buchanan, 2006; Chalmers, MacKenzie and Carter, 2021; Haenlein and Kaplan, 2019). This means that despite GAI's rapid recent progress, it is still the result of iterative developments across various other sectors which enabled, and continue to enable, it. Consistent within this are the imagined futures (Beckert, 2016) pushed by technologists, entrepreneurs, policymakers and futurists about what it could mean for society.

The value of historical thinking with regard to new technologies like GAI can be illustrated by considering the social imaginaries (Taylor, 2004) that have been generated as part of the experience of previous technologies and their development and adoption. When a technology emerges, there may be a fanfare about how it will change our lives for the better, and/or concerns about how it will disrupt settled societal arrangements (Budhwar, 2023). Technologies posited as ubiquitous, like GAI, are then often subject to competing claims: promises of imagined new futures where existing ways of doing things are improved, better alternatives are averred and economic and societal benefits are promised, often accompanied by challenges and concerns regarding job destruction, societal upheaval and the threat of machines taking over. As a consequence, the imaginaries compete with each other and are generative in and of themselves, in that they create spaces of possibility that frame experiments of adoption (Wadhwani and Viebig, 2021). We can analyse past imaginaries of existing technologies to better understand what the emergence of new technologies, and the auguries posited with them, tell us about how societies adopt and adapt to the changes they bring. However, it is only in a post-hoc fashion that we can understand the efficacy of such claims. For example, recent work by business historians has considered how we understand the posited past futures of entrepreneurs across a range of technological and non-technological transformations (Lubinski et al., 2023), illustrating the value that historical work brings to theorizing societal change brought about by such actions. The imaginaries, good and bad, associated with technologies like GAI play an important role in their legitimation and adoption, as well as in opposition to them.
Given the contested nature of such societally important technologies, it is therefore important also to recognize and consider the context in which new technologies such as GAI emerge, in terms of the promises associated with them, the societal effects they have and how they unfold, in order to provide appropriate theories and conceptual lenses through which to better understand them. When exploring the integration of new technologies in context, historical analysis of both the technology in question and other technologies offers nuances and insights that inform deeper theory about what a technology like GAI can mean to society.

The different imaginaries associated with GAI have clear parallels in the past. The Luddite riots of the nineteenth century, in which textile workers sought to destroy the machinery that was replacing their labour (Mokyr, 1990; Thompson, 1963), are probably the most famous negative societal response to the introduction of new technology, giving rise to the term 'Luddite' that is still commonly used today to describe someone opposed to technology. Contrastingly, the playwright Oscar Wilde posited in his 1891 essay 'The soul of man under socialism' that 'All unintellectual labour, all monotonous, dull labour, all labour that deals with dreadful things, and involves unpleasant conditions, must be done by machinery' (Wilde, 1891/2007). More recently, Lawrence Katz, a labour economist at Harvard, echoed Wilde's suggestion by predicting that 'information technology and robots will eliminate traditional jobs and make possible a new artisanal economy' (Thompson, 2015). Both Wilde's and Katz's comments tilt at the imaginary of the benefits that technology and automation can bring in freeing up people's time for more creative and rewarding work and pursuits, whilst the Luddites were expressing serious misgivings about the imaginary in which their jobs, livelihoods and way of life were under serious threat from mechanization. Good and bad imaginaries are a necessary part of the development of all new technologies, but they are only really understood post hoc and within context.

As Mary O'Sullivan recently pointed out, based on her analysis of the emergence of steam engine use in Cornish copper mines in the eighteenth century, technology itself does not deliver the general societal rewards suggested for it if the economic system in which it is developed remains controlled by small groups of powerful individuals (O'Sullivan, 2023). Similar concerns have been raised about GAI, whose principal proponents comprise a few global multinationals, as well as state-controlled interests such as the military, racing for dominance in the technology (Piper, 2023). The economic and political systems in which GAI is being developed are important to understand in relation to the imaginaries and promises being made concerning its value, and the warnings of its threats, particularly in light of the history of societally important technological shifts. As scholars, we face ongoing challenges in explaining new, ubiquity-focused technologies and the accompanying imaginaries (which often constitute noise, albeit with kernels of truth hidden therein). In this sense, when we seek to theorize about GAI and its potential impact on business and management (and vice versa), it is important to recognize that historical analysis does not foretell the future, but rather provides a critical understanding of how new innovations impact, and are impacted by, the societies in which they take place.
Interrogating the contested imaginaries through the incorporation of historical thinking in our conceptualization of new technologies such as GAI will provide a deeper understanding of their impact, which in turn will allow us to better harness them for the greater good.

Olivia Brown, David A. Ellis, Julie Gore

Digital technologies continue to permeate society, not least in the way they allow individuals and teams to collaborate (Barley, Bechky and Milliken, 2017). For instance, innovations in communication have led to a shift towards virtual working and the proliferation of globally distributed corporate teams (see Gilson et al., 2015). As the volume and variety of data types that can be linked together have also accelerated, we have witnessed the emergence of large language models (LLMs), with the introduction of ChatGPT bringing them to the attention of a much wider audience. Broadly referred to as a form of generative artificial intelligence (GAI), ChatGPT allows individuals (or teams) to ask questions and quickly be provided with detailed, actionable, conversational responses. Sometimes referred to as virtual agents within customer service and information retrieval systems, these conversational systems can effectively become virtual team members.

The view of technology as a means of facilitating effective teamwork in organizations has now shifted towards questions of whether, and under what circumstances, we can consider GAI a 'team member' (Malone, 2018). Conceptualizing GAI in this manner suggests a trend away from viewing technology as a supportive tool adjunct to human decision-making (see Robert, 2019, for a discussion of this in healthcare) towards technology having a direct and intrinsic role in the decision-making and task-execution processes of teams (O'Neill et al., 2022). New questions are therefore being raised: do AI team members improve the performance of a team, and would organizations trust them? If so, how much? To what degree are AI team members merely adjuncts to, or replacements for, real team members when it comes to decision-making? When a hybrid AI team completes a task, who takes responsibility for successes and failures? How can, or should, managers and leaders quantify accountability? Addressing these early questions suggests that it may soon be necessary to reframe and readdress the way in which teams are studied from theoretical, practical and ethical perspectives. From a theoretical perspective, across the many definitions of teams that have been developed within th