Abstract
By the time you finish reading this editorial about artificial intelligence (AI), it will be outdated; the AI field is growing beyond the imagination of many. One cannot even look at the news without finding something about AI: duplication of voices (even John Lennon's, years after his death), AI-generated art and music, facial recognition, and more. This new field is readily used to manage big data in many arenas but is filled with risks and uncertainty (Botes 2023). If the world population declines (scientificamerican.com/article/population-decline-will-change-the-world-for-the-better/, accessed 19 Jun 2023), there will be even more demand for AI to replace needed skills once provided by humans. The last part of that sentence is important because AI focuses on tasks typically done by humans. Artificial intelligence is a real game changer and needs to be managed wherever it is used, including in the scientific process. The purpose of this editorial is to briefly indicate how The Journal of Wildlife Management (JWM) will incorporate AI in publishing. Artificial intelligence has been defined as an umbrella term for an array of algorithm-based technologies, albeit with their own glitches, that solve complex tasks previously requiring human thought, extending even to tasks not yet attempted (DeWaard 2023). Artificial intelligence is living up to the promise of delivering value, driven by advances in the availability of relevant data, computation, and algorithms, from agriculture (Smith 2018) to zoology (Santangeli et al. 2020, Tuia et al. 2022) and everything in between. Artificial intelligence is also advancing wildlife management and conservation, such as through the use of drones in combination with AI to locate bird nests (Santangeli et al. 2020) and an array of other uses in wildlife and animal ecology reviewed by Tuia et al. (2022). 
Artificial intelligence can also be used for nefarious activities, such as deepfakes, a type of AI used to create convincing image, audio, and video hoaxes. To combat these uses, the European Union is proposing laws (e.g., the AI Act) to put guardrails on rapidly growing AI uses for policymakers and others. The United States is doing the same, and other countries will not be far behind. Concern over the use of AI is also growing elsewhere, and universities are developing programs to train a new cadre of professionals to deal with the rapid changes and developments in AI in all arenas. The rapid emergence and scope of generative AI tools, like the chat generative pre-trained transformer (ChatGPT), Scite (an AI tool that provides quantitative and qualitative insight into how scientific publications cite each other), and DALL-E (an AI system, named after S. Dalí and Pixar's WALL-E movie, that can produce realistic images from text prompts), are at the forefront of many higher-education conversations across the nation; some institutions (e.g., University of Arizona) are surveying faculty to identify salient topics surrounding emergent AI tools. And there are many such tools. Scientists are surrounded by digital tools and platforms. As professional academic social networks continue to grow, the publishing world has been changing almost as fast as AI itself. For example, many journals are online with open access or are moving in that direction, social media is used to promote research, and with the advent of COVID, societies moved from in-person conferences to virtual conferences. All of these changes and others were assumed to operate with scientific integrity, which should continue with the emergence of AI. In publishing, as in other arenas, AI refers to the ability of machines to learn patterns so they can do things that have typically been done by humans. 
Artificial intelligence is the next major frontier in the production and dissemination of scientific knowledge; it has arrived without much fanfare, but it is here to stay (Bachanan 2023). Artificial intelligence has been used for analysis and writing, among other parts of the scientific method, by drawing on current knowledge and content available online, with varying degrees of accuracy, bias, and error (Bachanan 2023). Like Bachanan (2023), I know that AI is here to stay but will have to wait to see how successful it is and how its uses will influence publishing. With the rapid growth of AI, we will not have to wait long. Artificial intelligence is touching all aspects of the publication world, including the writing of articles (e.g., PaperPal, Writefull), article submission (e.g., Wiley's ReX, which automatically extracts data from manuscripts), tools to screen manuscripts on submission (e.g., Penelope, RipetaReview), support of peer review (e.g., SciScore for method checking), and checking of scientific images (e.g., Proofig, ImageTwin). Need more examples of the emerging role of AI in publishing? Check out Scite.ai to see how citations support arguments in manuscripts, and consider the use of AI in marketing, creating proofs, copyediting, summarization (e.g., Scholarcy), using published material as data, reading (e.g., SemanticScholar summarizes manuscripts in 1–2 sentences), checking similarities between submissions (e.g., TurnItIn, STM), determining whether the scientific process was properly followed, finding referees, and detecting plagiarism (e.g., the Content Authenticity Initiative; DeWaard 2023). These are just some of the uses of AI in publishing. There are also pitfalls that need to be addressed, including scientific fraud and the potential legal risks of policies and decisions made with AI. One of the greatest challenges in using AI tools in the scientific process is ensuring the truth and validity of experimental results. 
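To make the idea of similarity screening concrete, here is a minimal sketch using only Python's standard library. This is not how TurnItIn or any commercial service actually works; the corpus, threshold, and function names are illustrative assumptions, and real systems use far more sophisticated matching.

```python
from difflib import SequenceMatcher

def similarity_score(text_a: str, text_b: str) -> float:
    """Return a rough 0-1 similarity ratio between two passages."""
    return SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()

def flag_overlaps(submission: str, corpus: list[str], threshold: float = 0.8) -> list[int]:
    """Indices of corpus documents whose similarity to the submission
    meets or exceeds the threshold (candidates for human review)."""
    return [i for i, doc in enumerate(corpus)
            if similarity_score(submission, doc) >= threshold]

# Hypothetical two-document corpus for illustration
corpus = [
    "Elk select south-facing slopes during late winter.",
    "Survival of mule deer fawns varies with winter severity.",
]
print(flag_overlaps("Elk select south-facing slopes during late winter.", corpus))  # prints [0]
```

The point of the sketch is that such tools only surface candidates; a human editor must still judge whether a flagged overlap is legitimate reuse (e.g., standard methods text) or a problem.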
If truth is not grounded in scientific evidence or presented with sufficient qualification, the scientific method is violated. Artificial intelligence tools are built on probabilistic models that do not have rigorous closed-form mathematical solutions and therefore cannot be independently validated. Further, the algorithms are typically designed and trained to optimize an objective (i.e., to obtain the highest score possible on a certain task). As such, generative AI models (e.g., ChatGPT) can fabricate scientific references when generating scientific papers to support a conclusion. The obvious danger is that if these references slip past reviewers and are accepted into the body of scientific literature, they could be cited by scores of future authors to support a similar conclusion. Yet, at the core, the reference and its corresponding conclusion may never have existed in the real world. Imagine how this could negatively influence the scientific process and its effects on the world and humanity. Similar errors could also creep into conclusions when AI is used to count animals in satellite imagery. Artificial intelligence could also be used to generate purely synthetic imagery of events that never occurred to promote some cause by actors who want to influence the scientific process. For example, what if a peer-reviewed scientific study showed an endangered species that never existed in an area occurring in the middle of a development project? Also, how does one confront a machine, or hold a program responsible, for faulty information? And, when data from various places and individuals are combined into a single data set, how is diversity of thought maintained? Authors are responsible for the ethical treatment of animals and of their data, and they must be aware of all aspects of how their data are collected and used. These and other issues must be addressed as AI is incorporated into publishing activities. 
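One practical safeguard against fabricated citations is to screen a reference list before human verification begins. The sketch below is a hypothetical illustration, not a JWM or Wiley tool; the citation pattern and function name are assumptions. It flags entries that fail even a basic structural check, and its docstring makes the key caveat explicit: a well-formed entry can still be fabricated, so every reference must ultimately be verified against the actual source.

```python
import re

# Very loose pattern for "Author(s). Year. Title." -- an assumed house style for illustration.
REF_PATTERN = re.compile(r"^[A-Z][\w.,\s&-]+\.\s+(19|20)\d{2}\.\s+\S.+$")

def suspicious_references(refs: list[str]) -> list[str]:
    """Return entries that do not match the expected citation shape.
    Passing this check does NOT prove a reference is real: a fabricated
    citation can be perfectly well formed, so each entry still needs
    manual verification against the cited source itself."""
    return [r for r in refs if not REF_PATTERN.match(r.strip())]

# Hypothetical reference list with one obviously malformed entry
refs = [
    "Tuia, D., et al. 2022. Perspectives in machine learning for wildlife conservation.",
    "totally made-up citation with no year",
]
print(suspicious_references(refs))  # prints ['totally made-up citation with no year']
```

A structural filter like this is a necessary but not sufficient step; pairing it with lookups against a bibliographic registry, and finally with a human reader, is what actually keeps invented references out of the literature.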
Thus, the Editor-in-Chief and staff of JWM must embrace the responsibility of managing AI in our publications to maintain research integrity and ensure that AI does not replace the expertise and critical thinking of humans. To foster this idea, JWM staff follows Wiley's guidelines in dealing with AI: “Artificial Intelligence Generated Content (AIGC) tools—such as ChatGPT and others based on large language models (LLMs)—cannot be considered capable of initiating an original piece of research without direction by human authors. They also cannot be accountable for a published work or for research design, which is a generally held requirement of authorship …, nor do they have legal standing or the ability to hold or assign copyright. Therefore—in accordance with COPE's position statement on AI tools—these tools cannot fulfill the role of, nor be listed as, an author of an article. If an author has used this kind of tool to develop any portion of a manuscript, its use must be described, transparently and in detail, in the Methods or Acknowledgements section. The author is fully responsible for the accuracy of any information provided by the tool and for correctly referencing any supporting work on which that information depends. Tools that are used to improve spelling, grammar, and general editing are not included in the scope of these guidelines. The final decision about whether use of an AIGC tool is appropriate or permissible in the circumstances of a submitted manuscript or a published article lies with the journal's editor or other party responsible for the publication's editorial policy.” (https://authorservices.wiley.com/ethics-guidelines/index.html#5, accessed 14 Jun 2023). I will be incorporating similar language in updated publication guidelines to explain how AI-generated material should be treated in JWM. 
The bottom line is that AI cannot be an author, any use of AI must be acknowledged (except for tools used to improve spelling, grammar, and general editing), and authors are responsible for all information in their manuscripts, including data generated by AI. As AI develops and is used more in the scientific method and publication process, the policies we follow will also evolve to maintain the foundation of human thought, ethics, and the scientific method in published work. The Wildlife Society Code of Ethics states that members should understand “…human society's proper relationship with natural resources, and in particular for determining the role in wildlife in satisfying human needs and addressing the management of wildlife-related impacts” (https://wildlife.org/wp-content/uploads/2017/07/Code-of-Ethics-May-2017.pdf, accessed 28 Jun 2023). We should all keep abreast of technological advances, including AI, and understand the roles they play in managing our wildlife resources. Keep on publishing, but keep AI off the author list. This editorial was improved with reviews from J. A. Bissonette, A. S. Cox, E. H. Merrill, K. A. Norris, and P. M. Wegner. Many thanks.