Abstract
Acknowledging the advantages and risks of Artificial Intelligence (AI) in health is necessary to make effective and timely AI-based digital solutions available to policymakers and financiers, so that more lives can be saved, the quality of care improved, and an exhausted and shrinking workforce relieved. Given the significant funding, data integration and societal mobilisation that AI calls for, political leadership is as necessary as evidence-based policies. This need is even more critical in the health sector, given the economic, legal, and political implications of health-related issues and the risks that rapid, baseless decisions pose to governments, regulators and scientists. An overly restrictive regulatory environment, or unbalanced scientific discourses of distrust and concern about AI (see the international call by AI researchers themselves for a pause in AI research), is therefore likely to bias the willingness of relevant actors to expand AI-based solutions in healthcare, notwithstanding that several of these tools have been in use for years and others are likely to be implemented shortly. Instead of speculative and catastrophic anticipations of AI, which do no more than push benefits away from health professionals and patients, the debate on its use in healthcare needs to elucidate how changes in policies, planning, financing, services, management, and behaviours can uphold key public health principles such as security, quality, ethics, and equity. This so-called utilitarian standpoint on AI is the starting point for more balanced and well-rooted discussions about such complex artefacts. It is in this way that academic research is likely to contribute to more mature and nuanced views on the vast array of subdomains of AI applied to health.

Research in AI has reached a point where its applications in everyday life are as tangible as they are challenging to predict. As with other changes with high societal impact, the debate on AI tends to polarise into extreme views, between those who hold a positive expectation of a wonderful new world and those who fear the opening of a Pandora's box. This polarisation is mainly scientific at this stage, with scientists contesting speculative ontological and epistemological arguments: will AI one day reach singularity and perform at a level indistinguishable from a human? Will this technology replace human labour or contribute to increasing its productivity? Perhaps more dystopian, yet warranting a sound philosophical discussion: could AI take control and strip us of our agency?1 Such discussion is overly exaggerated when considering advancements such as ChatGPT, which relies on Large Language Models (LLMs) that are, in essence, probability models built on top of what humans once created. However, as commercial interests in AI increase, political polarisation over permissions and restrictions will grow across the domains of daily collective life: countries' geopolitical, economic, and military relations; the production, marketing, distribution, and consumption of goods and services; the planning and management of public and private companies and infrastructures; labour markets, relations, and working conditions; and the production, use, and verification of mass information. No less importantly, polarisation is expected to reach public opinion and the way people manage their lives and their interactions with others and with institutions.
A path of greater dialogue must be built before polarisation reaches the political and socio-economic realms. Some may argue that such polarisation is inevitable, given the transformative impact of AI. If so, this reinforces even further the binding and guiding role of science in times of uncertainty, notably when uncertainty is high and cuts across all domains of collective life. Looking at the scientific debate so far, the signs do not yet reveal the necessary search for dialogue between those for and against AI. It is this contribution of science that this editorial calls attention to in relation to health. It is up to scientists to steer the debate in the most transparent, critical, and enlightened way, so that other institutions, policymakers, and the public understand the meaning of AI, its uses, risks, and potentialities.

To take the discussion of AI further, a common ground of understanding regarding its meaning, controversies, and taxonomies is necessary. Scientists must ensure the clarity of terms and definitions so that others can understand what is at stake and how to think of practical uses. In its simplest definition, AI refers to systems that think and act like humans. It is the domain of computer science dedicated to developing systems or machines that emulate human-like cognitive functions, such as learning, problem-solving, perception, and decision-making.2 Artificial Intelligence processes extensive data, recognises patterns, and adapts to novel inputs. Many Machine Learning (ML) methods fall within the realm of AI, as they involve designing algorithms capable of learning from, and making predictions or decisions based on, data without explicit programming.3

What fits into the definition of AI, and the definition itself, is a long-standing discussion. For some, narrow AI, that is, systems designed to perform specific tasks such as radiomics (extracting clinically relevant features from medical images) or demand prediction (applying statistical methods or ML to predict patient arrivals at the emergency room), should not be defined as AI.4 According to this stance, only systems with human-like intelligence across a broad range of tasks can be considered AI, or General AI.5 Another controversy is whether rule-based systems that follow pre-defined algorithms and heuristics should fall under the AI umbrella.6 In that case, mathematical programming models that allocate surgeries and surgeons to operating rooms would also fall within the definition of AI. Others argue that such systems are far from General AI and that some form of generative intelligence that can learn, adapt and improve independently, without requiring additional programming, is a prerequisite of AI.7 Finally, the debate revolves around whether AI is defined by the capacity to perform specific tasks or by the ability to mimic human behaviour (here the discussion between narrow and general AI resurfaces). More canonical authors argue that only systems capable of generalisation and exhibiting human-like cognitive processes should be considered AI.7

Defining what is not AI is substantially less controversial. Scholars generally agree that traditional programming in the form of software applications or websites, among other common uses, is not AI, even though AI may be embedded in it (for instance, chatbots). Data storage and databases also fall outside the scope of AI, as do networking and communication protocols. Likewise, markup and programming languages are not AI, although they may be used to code AI. Finally, operating systems and simple algorithms such as those used for sorting or searching, which are key in computer science, do not involve learning or exhibit any intelligent behaviour beyond what was coded into them.2, 8
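To make this boundary concrete, consider the demand-prediction example above. The following minimal sketch, in Python with scikit-learn, shows the defining trait of ML: no rule about arrival patterns is explicitly programmed, and the model infers the pattern from historical data. The data are synthetic and the two features (day of week, hour of day) are simplifying assumptions for illustration only, not a reference implementation.

    # Illustrative sketch only: synthetic data, deliberately simple features.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Past observations: day of week (0-6) and hour of day (0-23).
    X = np.column_stack([rng.integers(0, 7, 1000), rng.integers(0, 24, 1000)])
    # Hourly patient arrivals, driven by weekday and time-of-day effects.
    y = (10 + 3 * (X[:, 0] < 5)                  # busier on weekdays
         + 2 * np.sin(X[:, 1] / 24 * 2 * np.pi)  # daily cycle
         + rng.normal(0, 1, 1000))               # noise

    # No arrival "rules" are coded: the model learns them from the data.
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X, y)

    # Expected arrivals on a Monday (day 0) at 18:00.
    print(model.predict([[0, 18]]))

Retrained on different data, the same program yields different behaviour, which is what separates learning systems from the fixed rule-based software discussed above.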
Given such controversies, the debate on AI applications can only move forward if a minimum ground of understanding about the taxonomies of learning processes is set (see Figure 1).9

Figure 1. The taxonomies of artificial intelligence.

Machine Learning is a subset of AI that develops algorithms capable of improving themselves over time through data-driven learning. Deep Learning (DL), in turn, is a subset of ML primarily concerned with neural networks with multiple layers. These complex structures mimic the brain's functioning and allow DL models to learn from large datasets and perform astonishingly well on tasks such as image recognition, speech recognition, natural language processing, and more; the system learns by example. Tools of DL are increasingly present in everyday life through LLMs like ChatGPT, which can comprehend and generate human-like text based on voluminous amounts of training data. DL can be further disaggregated according to the underlying architecture. Convolutional Neural Networks are primarily used for analysing visual images; they are designed to learn spatial hierarchies of features automatically and adaptively from images, which is particularly useful when analysing artefacts such as cancers. Recurrent Neural Networks are used where the sequence of the data matters, such as time series analysis, speech recognition, or language modelling; they have feedback loops that allow information to persist. Generative Adversarial Networks are composed of two neural networks contesting each other; this setup is used to generate new data similar to the input data and is widely applied in image generation. Finally, Transformers are a type of neural network architecture particularly well suited to processing ordered data sequences, making them effective for natural language processing tasks. Reinforcement Learning is another subset of ML, which deals with agents that take actions in an environment to maximise some notion of cumulative reward: an agent makes observations and takes actions within an environment and, in return, receives rewards.

AI-driven approaches have been successfully applied in numerous aspects of healthcare, such as diagnostics, drug discovery, personalised medicine, virtual health assistants, and clinical decision support, among many more.10-14 The integration of AI in healthcare holds great promise for enhancing patient care and advancing medical research. Artificial Intelligence is being used to screen skin lesions or polyps in the lower intestine for malignancy, to help predict recurrences in cancer, and to detect image artefacts or suspected lesions in X-ray and other clinical radiology imaging procedures. The examples of applicability are immense.
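One of the imaging examples above can be given concrete form. The following minimal sketch, in Python with PyTorch, defines a tiny convolutional neural network of the kind used to flag a suspected lesion in a greyscale scan. The architecture, image size, and labels are illustrative assumptions, far from a clinically validated model, but they show how a CNN learns spatial features from examples rather than from hand-coded rules.

    # Illustrative sketch only: a toy CNN, not a clinically validated model.
    import torch
    import torch.nn as nn

    class TinyLesionNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1),   # local image features
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 64x64 -> 32x32
                nn.Conv2d(8, 16, kernel_size=3, padding=1),  # higher-level features
                nn.ReLU(),
                nn.MaxPool2d(2),                             # 32x32 -> 16x16
            )
            self.classifier = nn.Linear(16 * 16 * 16, 1)     # one logit: lesion or not

        def forward(self, x):
            x = self.features(x)
            x = torch.flatten(x, 1)
            return self.classifier(x)

    model = TinyLesionNet()
    scan = torch.randn(1, 1, 64, 64)   # one synthetic 64x64 greyscale scan
    prob = torch.sigmoid(model(scan))  # probability of a suspected lesion
    print(prob.item())

Trained on labelled scans, such a network learns by example, the property highlighted above, although real diagnostic models are far larger and subject to rigorous clinical validation.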
Building on this conceptual and terminological clarification, the main areas in health that may undergo transformations in the coming years due to AI are listed below (Figure 2).

Figure 2. Examples of applications of artificial intelligence in clinical uses and research, with possible implications for health policy, planning and management.

While Figure 2 is not an exhaustive listing, it maps out much of what may be at stake as AI deepens its presence in healthcare. These are areas that the International Journal of Health Planning and Management is particularly interested in highlighting, with an underlying focus on the applications and implications of AI in policy, planning and management. So that decision-makers are supported and public opinion is duly informed in day-to-day life, the scientific debate needs to advance by publishing protocols, pilot studies and scaling experiments, and by adequately evaluating the processes and results of AI-based solutions in healthcare. Controversies are expected to continue, which makes greater clarity, and reasoning grounded in real experience rather than speculative argument, all the more important. At this stage, scholars must understand how to engage with different audiences: those who are already engaged with AI or want to engage with it, those who want to know more about it without any particular stake, and those who are expected to make decisions.

Mário Amorim Lopes: PhD (appl. Healthcare Management and Economics), Assistant Professor at the Faculty of Engineering of the University of Porto, Invited Assistant Professor at the Faculty of Economics of the University of Porto, Lecturer at Porto Business School, Senior Researcher at INESC-TEC.

Henrique Martins: MD, PhD, MLaw, FIAHSI, Auxiliary Professor in Health Management and Leadership at Faculdade de Ciências da Saúde-Universidade da Beira Interior, Associate Professor in Health Systems and Policies and Digital Health at ISCTE-IUL, and HL7 Europe Foundation, Board of Directors.

Tiago Correia: PhD hab, Associate Professor of International Health at NOVA-Institute of Hygiene and Tropical Medicine (Portugal), and Editor-in-Chief of the International Journal of Health Planning and Management.