Authors
Marek Jutel, Magdalena Zemelka‐Wiącek, Michał Ordak, Oliver Pfaar, Thomas Eiwegger, Maximilian Rechenmacher, Cezmi A. Akdiş
Abstract
Artificial intelligence (AI) is the overarching field that aims to create intelligent machines and systems that can perform tasks that would otherwise require human intelligence. The term was first used in 1956 by American computer scientist John McCarthy.1 In 1950, the 'Turing test' was developed to assess whether a machine exhibits intelligent behaviour equivalent to, or indistinguishable from, that of a human. In essence, if a human judge cannot reliably distinguish between responses from a machine and a human in a blind interaction, the machine is considered to have passed the test.2 AI involves creating algorithms and systems that enable computers to learn, make decisions and perform tasks that typically require human intelligence, such as problem-solving, learning, natural language understanding and pattern recognition. AI systems can be rule-based (following a set of pre-defined instructions) or learning-based (adapting and learning from data).

Under the umbrella of AI, machine learning (ML) focuses on designing algorithms that enable computers to learn from data and make predictions or decisions based on it, without explicit programming. These algorithms can identify patterns, generalise from examples and adapt over time. Deep learning (DL) is a subfield of ML that deals with artificial neural networks. These networks are inspired by the structure and function of the human brain and consist of multiple layers of interconnected nodes (neurons). DL algorithms can automatically learn to represent data by training on large amounts of labelled data and are particularly effective at tasks such as image recognition, natural language processing and speech recognition (Figure 1).

AI plays a crucial role in modern healthcare. AI-powered tools can analyse patient data, including genetic information, environmental factors and medical records, to uncover potential allergy triggers and risk factors.
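As a minimal sketch of the rule-based versus learning-based distinction described above, the following Python example contrasts a fixed decision rule with a 1-nearest-neighbour classifier that adapts to labelled examples. The pollen-count threshold, data values and function names are entirely hypothetical illustrations, not drawn from the article:

```python
def rule_based(pollen_count: float) -> str:
    # Rule-based AI: follows a fixed, pre-defined instruction
    # (the threshold of 50 is an invented example).
    return "allergy" if pollen_count > 50 else "no allergy"


def nearest_neighbour(train, query):
    # Learning-based AI (1-nearest neighbour): the decision is
    # derived from labelled training examples, not a hard-coded rule.
    features, labels = zip(*train)
    best = min(range(len(features)),
               key=lambda i: abs(features[i] - query))
    return labels[best]


# Hypothetical labelled data: (pollen count, outcome)
train = [(10, "no allergy"), (30, "no allergy"),
         (70, "allergy"), (90, "allergy")]

print(rule_based(80))                 # decided by the fixed rule
print(nearest_neighbour(train, 80))   # decided from the examples
```

With more or different training examples, the learned classifier changes its behaviour automatically, whereas the rule-based one requires a human to rewrite the rule.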
This can facilitate early interventions and prevention strategies, improving patient outcomes. In research, AI is applied to investigate vast amounts of data in an unsupervised manner and has the potential to integrate diverse data sources. For example, AI can mine published literature, clinical trial data and patient records, identifying new insights into the mechanisms of allergic reactions and potential therapeutic targets. Moreover, AI can be used to develop predictive models that forecast the prevalence and severity of allergies in specific populations, potentially enabling better resource allocation and public health planning. AI's ability to handle complex data and continuously learn from it accelerates the discovery of novel treatments and therapies, ultimately enhancing the quality of care for allergy sufferers and contributing to the advancement of allergy research.3

Dozens of new AI tools are introduced every week, and their rapid growth is transforming how we live and work. As these tools evolve and improve, they will become even more influential in our daily lives. Despite the incredible progress made in the field, the assumptions and limitations of machine learning methods need to be considered. Two good examples are the representativeness of the sample and collinearity. Only a representative sample allows results to be generalised to the entire population, and collinearity refers to linear correlation between independent variables. A strong relationship between the independent variables can produce apparent clusters that do not actually exist, and a lack of representativeness reduces the reliability of the results. One of the most common mistakes is isolating several clusters without proper justification.
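The collinearity pitfall described above can be made concrete with a short sketch: the Pearson correlation between two near-duplicate "independent" variables approaches 1, a warning sign that clustering or regression using both may produce artefacts. The variable names and values below are invented for illustration, not data from the article:

```python
import math


def pearson(x, y):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


# Two hypothetical predictors that are almost the same measurement:
marker_a = [12.0, 25.0, 40.0, 55.0, 80.0]
marker_b = [11.5, 26.0, 39.0, 56.5, 79.0]  # nearly collinear with marker_a

r = pearson(marker_a, marker_b)
print(round(r, 3))  # very close to 1.0 -> strong collinearity
```

A correlation this close to 1 signals that the two variables carry nearly the same information, so treating both as independent inputs to a clustering method can manufacture structure that is not really there.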
The particular assumptions of machine learning methods should be an integral part of every study in which these methods are used.4, 5

With increasing accessibility, AI may help authors write medical articles by potentially enhancing the ability to analyse data, personalise content, detect errors and generate clear language, and in this way speed up and control the process in a time-saving manner. This poses novel issues for scientific publications. However, ChatGPT,6 developed by OpenAI and one of the most popular tools, generates responses that may sound credible but are incorrect or mistaken (Figure 2). Publishing such material without oversight from experts in the field who take responsibility for the accuracy of published articles can lower confidence in science, with the potential to destabilise society.

ChatGPT is an advanced language model that generates human-like text by predicting the next word in a sequence based on the previous context. The model has been trained on a wide range of internet text but cannot access the documents in its training set or any personal, confidential or proprietary data unless explicitly provided in conversation. As a result, it generates answers by computing the statistical probability of a word given its previous context, rather than from a contextual understanding of the world or of the text it produces. This explains why it can sometimes output plausible but incorrect or nonsensical answers (Data S1).

Regarding referencing, ChatGPT can provide inaccurate DOIs or citation information. This is because it does not access databases or the internet in real time during a conversation to retrieve or check data. Instead, it generates text based on patterns learned during its training phase. Each citation it provides is a simulated output that mimics the citation style seen in the training data and does not reference a specific source.
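The next-word mechanism described above can be illustrated with a toy bigram model: like a large language model (at vastly greater scale and sophistication), it emits whichever continuation was statistically most frequent in its training text, with no understanding of meaning. The tiny corpus below is invented purely for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical toy training text (not from the article).
corpus = ("allergic rhinitis is common . allergic asthma is common . "
          "allergic rhinitis is treatable").split()

# Count, for each word, how often each following word occurs.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1


def predict_next(word: str) -> str:
    # Return the statistically most frequent follower of `word`
    # in the training text -- pure pattern matching, no understanding.
    return bigrams[word].most_common(1)[0][0]


print(predict_next("allergic"))  # "rhinitis": seen twice vs "asthma" once
print(predict_next("is"))        # "common": seen twice vs "treatable" once
```

The model will confidently continue "allergic" with "rhinitis" even in a context where "asthma" would be correct, which is, in miniature, the same failure mode that produces plausible but wrong citations and DOIs.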
The lack of real-time data access and verification underscores the need for critical evaluation and fact-checking of AI-generated content, especially in scientific contexts. Given the model's limitations, the importance of human oversight and critical evaluation when using AI tools such as ChatGPT cannot be overstated, especially in the scientific domain. Its use should be complemented by rigorous statistical analysis, a thorough understanding of underlying assumptions and limitations, and strict adherence to ethical guidelines to avoid misleading interpretations and maintain public trust in AI applications in healthcare. Ethical concerns regarding patient privacy, data security and informed consent can also arise. Ensuring AI's responsible and ethical use in medical research is crucial to prevent potential patient harm and maintain public trust.

The publisher, Wiley & Sons, has clearly stated that, to date, Artificial Intelligence Generated Content (AIGC) tools such as ChatGPT are not sufficiently evolved to be considered capable of providing originality to scientific work.7 Therefore, AIGC tools cannot qualify for authorship, as they do not fulfil the generally accepted requirements, such as those given by the International Committee of Medical Journal Editors (ICMJE).8 This also aligns with the position of the Committee on Publication Ethics (COPE).9 The usage of AIGC must be clearly and transparently disclosed in the Methods and Acknowledgement sections of scientific papers. The corresponding author is responsible for accurate reporting of AIGC usage, and all authors are responsible for the article's overall content. Journal editors retain the full right to judge the impact of AIGC on an article's content and to accept or decline articles that have used AI tools.

All authors contributed to the writing and editing of the manuscript. The authors declare that there are no conflicts of interest for this work.
Supporting Information
Data S1