
Assessing Radiology Research on Artificial Intelligence: A Brief Guide for Authors, Reviewers, and Readers—From the Radiology Editorial Board

Topics: Medicine · Radiology · General Hospital · Library Science · Family Medicine · Computer Science
Authors
David A. Bluemke, Linda Moy, Miriam A. Bredella, Birgit Ertl-Wagner, Kathryn J. Fowler, Vicky Goh, Elkan F. Halpern, Christopher P. Hess, Mark L. Schiebler, Clifford R. Weiss
Source
Journal: Radiology [Radiological Society of North America]
Volume/Issue: 294(3): 487-489 · Cited by: 263
Identifier
DOI: 10.1148/radiol.2019192515
Abstract

Introduction

The number of manuscripts related to radiomics, machine learning (ML), and artificial intelligence (AI) submitted to Radiology has increased dramatically in only a few years. As expected, the number of published articles in Radiology on these topics has also increased, now representing about 25% of publications in the past year.

The RSNA and the Radiology editorial board have responded to this remarkable development by adding associate editors with special expertise on these topics (https://pubs.rsna.org/page/radiology/edboard). The new RSNA journal, Radiology: Artificial Intelligence, is the only journal of its type focused on radiologic imaging and AI. Given the rapid expansion of AI in imaging, it is clear that physicians and scientists in our field cannot simply relegate the understanding of AI to specialists. AI will eventually touch each specialty in our discipline. The recent statement on the ethics of AI from the RSNA, the American College of Radiology, and other leading imaging societies suggests that AI technology will be so universal that all radiologists using these tools need to be involved in self-education on the topic (1).

Articles in Radiology represent the state of the art in our field. As such, these publications are complex for the nonspecialist. Yet this added complexity is at odds with our need to assess and evaluate topics in the AI field as they apply to our patients.
The thinking of leaders in the field has evolved from initial apprehension about the "black box" nature of AI to a more critical appraisal: What makes a strong versus weak AI manuscript for Radiology? We already see advanced AI applications in breast and chest imaging. In these disciplines, large numbers of medical images are available, with solid standards of reference used to train the AI algorithm. Other subspecialties, such as musculoskeletal and interventional radiology, are newer to AI; yet AI seems likely to affect any medical application that uses any image of any sort.

To help authors, reviewers, and readers, Radiology has published several articles to guide the evaluation of manuscripts related to AI (2,3). While comprehensive, these educational treatises also need to be operationalized. Much like a systematic search pattern for radiograph interpretation, it is helpful to keep in mind critical methodologic points that contribute to a sound manuscript on AI topics. Similar approaches have been developed in detail for observational and diagnostic accuracy studies in radiology (eg, the STROBE and STARD guidelines). Ultimately, we expect that similar guidelines will be developed for manuscripts focused on AI in diagnostic imaging. While initial efforts have begun (eg, the TRIPOD statement [4]), current guidelines are cumbersome and attempt to cover AI applications in any and all fields of science and medicine.

As an interim step, the Radiology editorial board has developed a list of nine key considerations that help us evaluate AI research (Table). The goal of these considerations is to improve the soundness and applicability of AI research in diagnostic imaging. These considerations are enumerated for authors, but manuscript reviewers and readers may also find them helpful:

1. Carefully define all three image sets (training, validation, and test sets) of the AI experiment. As summarized by Park and Han (2), the AI algorithm is trained on an initial set of images according to a standard of reference. The trained algorithm is tuned and validated on a separate set of images. Finally, an independent "test" set of images is used to report the final statistical results of the AI. Ideally, the three sets of images should be independent, without overlap (a minimal splitting sketch follows this list). Also, the inclusion and exclusion criteria for the dataset, as well as the justification for removing any outlier, should be explained.

2. Use an external test set for final statistical reporting. ML/AI models are very prone to overfitting, meaning that they work well only for images on which they were trained. Ideally, an outside set of images (eg, from another institution, the external test set) is used for the final assessment to determine whether the ML/AI model will generalize.

3. Use multivendor images, preferably for each phase of the AI evaluation (training, validation, and test sets). Radiologists are aware that MRI scans from one vendor do not look like those from another vendor. Such differences are detected by radiomics and AI algorithms. Vendor-specific algorithms are of much less interest than multivendor AI algorithms (a per-vendor reporting sketch follows this list).

4. Justify the size of the training, validation, and test sets. The number of images required to train an AI algorithm depends on the application. For example, an AI model may learn image segmentation after only a few hundred images, while thousands of chest radiographs may be needed to detect lung nodules or multiple abnormalities simultaneously. In their work classifying chest radiographs as normal or abnormal, Dunnmon et al (5) began with 200 000 chest images; however, their AI algorithm showed little further improvement in performance beyond 20 000 chest radiographs. For many applications, the "correct" number of images may be unknown at the start of the research. The research team should evaluate the relationship between the number of training images and model performance (see the learning-curve sketch after this list). For the test set, traditional sample size considerations can be applied to determine the minimum number of images needed.

5. Train the AI algorithm using a standard of reference that is widely accepted in our field. For chest radiographs, a panel of expert radiologists interpreting the chest radiograph is an inferior standard of reference compared with chest CT. Similarly, the radiology report is considered an inferior standard of reference relative to dedicated "research readings" of the chest CT scans. Although surprising to nonradiologists, this journal and other high-impact journals in our field do not consider the clinical report to be a high-quality standard of reference for any research study, including AI. Clinical reports often have nuanced conclusions and are generated for patient care, not for research purposes. For instance, degenerative spine disease may have little significance at age 80 but could be critical at age 15. Given that AI frequently requires massive training sets (thousands of cases), the research team may find the use of clinical reports unavoidable. In that scenario, the team should assess methods to mitigate the known lower quality of the clinical report compared with dedicated research interpretations. For example, a research panel could audit a statistically valid subset of reports to determine the error rate of the radiology reports (see the audit-sizing sketch after this list).

6. Describe any preparation of images for the AI algorithm. For coronary artery disease on CT angiograms, did the AI interpret all 300 source images? Or did the authors manually select relevant images or crop images to a small field of view around the heart? Such preparation and annotation of images greatly affect radiologist understanding of the AI model. Manual cropping of tumor features is standard in radiomics studies; such studies should always evaluate the relationship of the size and reproducibility of the cropped volume to the final statistical result.

7. Benchmark the AI performance against radiology experts. For computer scientists working on AI, competitions and leader boards for the "best" AI are common. Results frequently compare one AI with another on the basis of the area under the receiver operating characteristic curve (AUC). However, to treat a patient, physicians are much more interested in the comparison of the AI algorithm with expert readers, not just any readers. Experienced radiologist readers are preferred to benchmark an algorithm designed to detect radiologic abnormalities. For example, when evaluating an AI algorithm to detect stroke on CT scans, expert neuroradiologists (rather than generalists or neurologists) are known to have the highest performance. The number of years of expertise and specialization should be documented in the research study. Comparison with trainees or nonexperts has value for some studies but not for evaluation of the peak performance of the algorithm. When comparing the AI with expert radiologists, pathology results or, less optimally, an adjudication panel may be used as an external standard of reference.
8. Demonstrate how the AI algorithm makes decisions. As indicated above, computer scientists conducting imaging research often summarize their results as a single AUC value, which is then compared with that of the competitor, the prior best algorithm. Unfortunately, the AUC value alone has little relationship to clinical medicine. Even a high AUC value of 0.95 may include an operating mode in which 99 of 100 abnormalities are missed. To help clinicians understand AI performance, many research teams overlay colored probability maps from the AI on the source images. Appropriate cut points for clinically relevant sensitivity and specificity thresholds can be shown (see the operating-point sketch after this list). So-called saliency maps can show the most important points on the image used by the AI algorithm for its decision making.

9. Make the AI algorithm publicly available so that claims of performance can be verified (6). Just like MRI or CT scanners, AI algorithms need independent validation. Commercial AI products may work in the computer laboratory but function poorly in the reading room. "Trust but verify" is essential for AI that may ultimately be used to help prescribe therapy for our patients. All AI algorithms should be made publicly available via a website such as GitHub. Commercially available algorithms are considered publicly available.

(Table: Key Considerations for Authors, Reviewers, and Readers of AI/ML Manuscripts in Radiology.)
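Considerations 1 and 2 hinge on keeping the training, validation, internal test, and external test cohorts disjoint at the patient level. Below is a minimal Python sketch of such a split; the patient identifiers, cohort sizes, and 70/15/15 proportions are illustrative assumptions, not values from the editorial.

```python
import numpy as np

# A minimal sketch of a patient-level, non-overlapping split (considerations
# 1 and 2). Patient IDs, cohort sizes, and proportions are assumptions.
rng = np.random.default_rng(seed=42)

patient_ids = np.array([f"site_a_{i:04d}" for i in range(1_000)])  # internal cohort
rng.shuffle(patient_ids)

n = len(patient_ids)
train_ids = patient_ids[: int(0.70 * n)]              # fit model weights
val_ids = patient_ids[int(0.70 * n): int(0.85 * n)]   # tune hyperparameters
test_ids = patient_ids[int(0.85 * n):]                # internal hold-out, used once

# Consideration 1: the three sets must be independent, without overlap.
assert not set(train_ids) & set(val_ids)
assert not set(val_ids) & set(test_ids)
assert not set(train_ids) & set(test_ids)

# Consideration 2: final statistics come from an external cohort (eg, another
# institution) that played no role in training or tuning.
external_test_ids = np.array([f"site_b_{i:04d}" for i in range(300)])
assert not set(patient_ids) & set(external_test_ids)
```

Splitting by patient identifier rather than by image avoids the subtle leakage that occurs when multiple images from one patient land in different sets.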
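For consideration 3, one way to expose vendor-specific behavior is to stratify held-out results by scanner vendor. This is a sketch on synthetic data, not a method from the editorial; the vendor names, scores, and the simulated vendor_c degradation are all assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# A per-vendor reporting sketch (consideration 3): report test performance
# stratified by vendor. All data below are synthetic and illustrative.
rng = np.random.default_rng(7)
vendors = np.array(["vendor_a", "vendor_b", "vendor_c"])[rng.integers(0, 3, 600)]
y_true = rng.integers(0, 2, 600)
# Simulate a model that is noisier (worse) on vendor_c images:
noise = np.where(vendors == "vendor_c", 1.5, 0.8)
scores = y_true + rng.normal(scale=noise)

for v in np.unique(vendors):
    m = vendors == v
    print(f"{v}: n={m.sum():>3}, AUC = {roc_auc_score(y_true[m], scores[m]):.3f}")
```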
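For consideration 4, a learning curve is one common way to justify training-set size: retrain on progressively larger subsets and look for the plateau in held-out performance, much as Dunnmon et al (5) observed little gain beyond roughly 20 000 radiographs. The sketch below uses synthetic tabular features and logistic regression purely as stand-ins for an imaging pipeline; the sizes and data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# A learning-curve sketch (consideration 4) on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(20_000, 50))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=20_000) > 0).astype(int)

X_test, y_test = X[15_000:], y[15_000:]  # fixed held-out rows, never trained on
for n_train in (200, 1_000, 5_000, 15_000):
    model = LogisticRegression(max_iter=1_000).fit(X[:n_train], y[:n_train])
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"n_train={n_train:>6}: held-out AUC = {auc:.3f}")
# The size at which the curve flattens helps justify the training-set size.
```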
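Consideration 5 suggests auditing a statistically valid subset of clinical reports to estimate their error rate. A minimal sketch of one standard way to size such an audit, using the normal-approximation sample size for a proportion; the 5% expected error and ±2% margin are illustrative assumptions.

```python
import math

# An audit-sizing sketch (consideration 5): how many clinical reports must a
# research panel re-read to estimate the label error rate to a given margin?
def audit_sample_size(expected_error: float = 0.05,
                      margin: float = 0.02,
                      z: float = 1.96) -> int:
    """Reports to audit so the 95% CI half-width is at most `margin`."""
    p = expected_error
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(audit_sample_size())  # -> 457 reports for +/-2% around a 5% error rate
```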
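Consideration 8 warns that a single AUC can hide a clinically poor operating point. The sketch below, on synthetic scores, reports sensitivity at a fixed high-specificity threshold alongside the AUC; the score distributions, prevalence, and the 99% specificity target are assumptions chosen for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# An operating-point sketch (consideration 8) on synthetic scores.
rng = np.random.default_rng(1)
y_true = np.concatenate([np.ones(100), np.zeros(900)])  # 100 abnormal, 900 normal
scores = np.concatenate([rng.normal(2.0, 1.0, 100),     # abnormal cases
                         rng.normal(0.0, 1.0, 900)])    # normal cases

print(f"AUC = {roc_auc_score(y_true, scores):.3f}")     # looks impressive alone

fpr, tpr, _ = roc_curve(y_true, scores)
idx = np.searchsorted(fpr, 0.01, side="right") - 1      # last point with FPR <= 1%
print(f"at specificity {1 - fpr[idx]:.2f}: sensitivity = {tpr[idx]:.2f}")
# Sensitivity at the clinically required specificity can be far more modest
# than the headline AUC suggests.
```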
AI algorithms are frequently thought of as a "black box." This is unfortunate, because the term suggests that radiologists may mentally forgo attempts to understand key metrics of algorithm performance. Yet the black box concept is prevalent throughout our field in different forms. For instance, radiologists only loosely understand the PET physics of electron and positron annihilation. Fourier reconstruction in MRI is also an entirely abstract concept to clinical radiologists. We may use monoenergetic reconstruction in CT daily but are only vaguely aware of the equations used for material decomposition. In these examples, radiologists have always been deeply involved in understanding the clinical performance of modalities such as PET, MRI, and CT. Similarly, radiologists cannot sign away our responsibility to understand an AI decision that ultimately affects our patients (1). Radiologists' deep concern about AI performance needs to be a concern of our discipline as a whole.

The Radiology editorial board looks forward to the development of more formalized publishing standards for AI/ML specifically related to radiologic imaging. Until that time, we hope the above points are useful as an initial guide to help authors, reviewers, and readers improve the robustness of their scientific results in artificial intelligence and machine learning in radiology.

Disclosures of Conflicts of Interest: No author disclosed relationships related to the present article, and no author disclosed other relevant relationships. Activities not related to the present article: D.A.B. payment for lectures, International Diagnostics Course Davos; travel expenses, Korean Society of Radiology; ACRIN Data Safety Monitoring Board. L.M. Siemens research grant; personal fees from Lunit Insight; meeting/travel expenses paid by the Chinese Congress of Radiology and the Society of Breast Imaging; iCAD advisory board. M.A.B. disclosed no relevant relationships. B.B.E.W. spouse is an employee of Siemens Healthineers. K.J.F. research agreement from 12 Sigma; consultant for Medscape; grant support from Bayer, Pfizer, and GE; consultant with Nuance. V.J.G. research agreement with institution and funding for a PhD student, Siemens Healthineers. E.F.H. disclosed no relevant relationships. C.P.H. travel expenses, Korean Congress of Radiology annual meeting; travel and lodging, EUROKONGRESS 18th MRI Symposium; personal fees, Focused Ultrasound Foundation. M.L.S. shareholder, Stemina Biomarker Discovery, Healthmyne, X-Vac. C.R.W. grant support to institution from Siemens Healthcare, BTG, and Medtronic; shared intellectual property, Siemens Healthcare, BTG.

References
1. Geis JR, Brady AP, Wu CC, et al. Ethics of Artificial Intelligence in Radiology: Summary of the Joint European and North American Multisociety Statement. Radiology 2019;293(2):436-440.
2. Park SH, Han K. Methodologic Guide for Evaluating Clinical Performance and Effect of Artificial Intelligence Technology for Medical Diagnosis and Prediction. Radiology 2018;286(3):800-809.
3. Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 2019;290(3):590-606.
4. Moons KG, Altman DG, Reitsma JB, et al. Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): explanation and elaboration. Ann Intern Med 2015;162(1):W1-W73.
5. Dunnmon JA, Yi D, Langlotz CP, Ré C, Rubin DL, Lungren MP. Assessment of Convolutional Neural Networks for Automated Classification of Chest Radiographs. Radiology 2019;290(2):537-544.
6. Bluemke DA. Editor's Note: Publication of AI Research in Radiology. Radiology 2018;289(3):579-580.

Article History: Received November 12, 2019; accepted November 15, 2019; published online December 31, 2019; published in print March 2020.