
Editorial | Open Access
Volume 4 | Issue 1

Usage of Artificial Intelligence in Biomedical Publishing

  • 1Institute of Molecular Pathobiochemistry, Experimental Gene Therapy, and Clinical Chemistry (IFMPEGKC), RWTH University Hospital Aachen, Pauwelsstr. 30, D-52074 Aachen, Germany
  • 2AOU Modena, Ospedale Civile di Baggiovara (-2023), Modena, 41100, Italy

*Corresponding Author

Ralf Weiskirchen, rweiskirchen@ukaachen.de

Received Date: November 20, 2025

Accepted Date: November 30, 2025

Abstract

Artificial intelligence (AI) has become an essential tool in modern biomedical research. However, its increasing role in manuscript preparation raises urgent ethical and practical questions. This editorial discusses the main AI modalities currently influencing molecular biology and genetics: machine learning, deep learning, natural language processing, and large language models. It assesses their ability to improve efficiency, enhance data analysis, and democratize language editing. Additionally, it highlights significant risks, such as fake references, unintentional plagiarism, and the reduction of human accountability. To maintain scientific rigor, journals should establish clear guidelines: AI-assisted language editing, data analysis, and figure generation are allowed with full disclosure and human verification, whereas undisclosed AI-generated text, fabricated data, and the attribution of AI tools as co-authors are strictly prohibited. By requiring transparent reporting, immediate archiving, and ongoing human oversight, journals should aim to leverage the advantages of AI while upholding the integrity of the scholarly record.

Keywords

Artificial intelligence, Biomedical publishing, Machine learning, Large language models, Publication ethics, Scientific integrity

Main Body

From curiosity to cornerstone: the rise of artificial intelligence (AI) in biomedical publishing

AI is no longer just a laboratory curiosity. It has become a cornerstone of contemporary biomedical research, healthcare diagnosis, treatment, monitoring, and disease prevention [1,2]. Machine learning (ML) algorithms classify cell types [3,4], deep learning (DL) networks predict protein structures [5], natural language processing (NLP) engines mine literature [6], and large language models (LLMs) can draft conference abstracts in seconds [7]. Reinforcement learning (RL) is starting to guide adaptive clinical workflows and trials [8], while generative diffusion models create graphical abstracts and videos on demand [9,10]. Each of these sub-disciplines (ML, DL, NLP, LLMs, and RL) offers researchers unprecedented speed and scale. However, the same tools that accelerate discovery and facilitate the conceptualization of academic articles also challenge fundamental notions of authorship, accountability, and scientific rigor [11,12].

In this editorial, we reflect on the duality of integrating AI into manuscript preparation. Three questions stand out prominently: 1) What benefits justify integrating AI into manuscript preparation? 2) Which risks must be mitigated? 3) How can authors, reviewers, and editors navigate the blurred boundaries between human insight and machine assistance?

The most immediate attraction of AI is efficiency. Non-native English speakers can use LLM-powered grammar checkers to polish prose that might otherwise require expensive editorial services. Automated reference managers extract citation metadata, while NLP tools flag missing DOI numbers and inconsistent gene nomenclature. In image-heavy disciplines, DL pipelines detect subtle phenotypes on microscopy slides that escape the human eye, turning terabytes of pixels into quantifiable datasets ready for publication. Properly documented, these advances raise the bar for rigor and transparency: every analytical decision, from hyperparameter selection to validation metrics, can be archived and reproduced.
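To make the kind of automated check described above concrete, the following Python sketch flags reference entries that appear to lack a DOI. The regular expression (matching the common "10.<registrant>/<suffix>" form) and the example entries are our own illustrative assumptions, not the tooling of any particular journal.

```python
import re

# Minimal sketch of an automated reference check: flag entries without a DOI.
# The pattern targets the common "10.<registrant>/<suffix>" DOI form.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/\S+")

# Hypothetical reference strings, for illustration only.
references = [
    "Doe J, et al. Example study of gene expression. Example J. 2024. doi:10.1234/example.2024.001",
    "Smith A, et al. Another example study without a listed DOI. Example J. 2023.",
]

for number, entry in enumerate(references, start=1):
    if not DOI_PATTERN.search(entry):
        print(f"Reference {number} appears to be missing a DOI: {entry}")
```

A similar rule-based pass can be extended to gene nomenclature or metadata consistency, but any flags it raises still require human confirmation before the manuscript is corrected.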

However, generative AI introduces a new class of hazards. “Hallucinations,” the inadvertent fabrication of references or data, can infiltrate a manuscript. Proprietary or copyrighted phrases may be regurgitated verbatim, triggering plagiarism checks. Black-box reasoning makes it difficult, if not impossible, to explain why a neural network reached a particular conclusion, undermining the central demand of peer reviewers for reproducibility. Finally, the temptation to delegate entire sections of writing to an LLM threatens to erode the very idea of scientific authorship, where intellectual contribution and accountability are tightly connected.

Promise meets precaution: benefits, risks, and responsibilities

The promise of AI is best understood through concrete scenarios:

  • Language refinement: Correcting grammar, improving style, and translating technical jargon, as long as no scientific content is altered without human approval.
  • Data analysis: Using ML- or DL-based pipelines for classification, clustering, or predictive modeling, with full disclosure of algorithms, training data, and validation metrics (a minimal example of such a disclosed pipeline follows this list).
  • Visual augmentation: Creating AI-generated figures or graphical abstracts, clearly labeled as “AI-generated,” to aid comprehension without misleading readers.
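As a minimal sketch of the disclosed data-analysis pipeline mentioned above, the Python example below (assuming scikit-learn and its bundled breast cancer dataset purely for illustration) trains a simple classifier and gathers the algorithm, hyperparameters, random seed, and validation metrics into a single machine-readable report.

```python
import json

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Illustrative dataset and hyperparameters; a real study would document its own.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

params = {"n_estimators": 200, "max_depth": 5, "random_state": 42}
model = RandomForestClassifier(**params).fit(X_train, y_train)

# Collect everything a reader needs to reproduce the analysis.
report = {
    "algorithm": "RandomForestClassifier (scikit-learn)",
    "hyperparameters": params,
    "training_data": "scikit-learn breast cancer dataset (illustrative)",
    "validation": {
        "accuracy": float(accuracy_score(y_test, model.predict(X_test))),
        "roc_auc": float(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])),
    },
}
print(json.dumps(report, indent=2))
```

Attaching such a report as supplementary material allows reviewers to reproduce the analysis without guessing at undocumented settings.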

However, there are also concrete risks to consider:

  • Fabrication: LLMs may create fake numerical results or nonexistent clinical trials.
  • Plagiarism: AI could reproduce copyrighted text, potentially exposing authors to legal and ethical consequences.
  • Privacy issues: Uploading a submitted manuscript to public AI software during the review process can breach confidentiality, violate intellectual property rights, and cause data leakage or unauthorized collection of personal information. The AI tool may also store and reuse the manuscript, further compromising its confidentiality [13].
  • Opaque reasoning: If a DL model's decision-making process cannot be explained, reviewers cannot verify its conclusions.
  • Dilution of authorship: Including an AI tool as a co-author is inappropriate since AI cannot take responsibility, declare conflicts of interest, or provide informed consent.

As a result, human authors bear full responsibility for every sentence, number, and image published in a journal. Failure to disclose AI usage could be treated as a breach of publication ethics, with the same consequences as data fabrication or plagiarism. This includes editorial investigation, potential retraction, and notification of institutional authorities.

Practical guidelines: what authors may - and may not - do

In the view of the authors of this editorial, an editorial board should adopt a four-level policy framework:

Level 1: Permitted uses

  • Language polishing, synonym replacement, and minor stylistic edits via AI tools.
  • Drafting non-technical materials such as cover letters or lay summaries, subject to full human revision.
  • Generating visual content (e.g., pathway diagrams) with AI, if labeled and verified for accuracy.
  • Using generative AI primarily as a tool for brainstorming and suggesting image concepts.
  • Employing ML/DL algorithms for data analysis, with exhaustive methodological disclosure.

Level 2: Mandatory disclosure

Every manuscript must contain a statement in the Acknowledgments or Methods section such as: “Portions of this manuscript (language editing/figure generation/statistical analysis) were assisted by [name of AI tool] under human supervision.”

Level 3: Prohibited uses

  • Submitting AI-generated text, data, or images without explicit disclosure.
  • Listing AI tools as manuscript co-authors.
  • Relying solely on AI-generated claims or references without independent human verification.
  • Using AI to fabricate or manipulate data, images, or patient information.
  • Employing public AI to peer review submitted manuscripts.

Level 4: Accountability and record-keeping

Human authors must archive AI prompt histories, model parameters, and validation results in accordance with Findable, Accessible, Interoperable, and Reusable (FAIR) data principles [14]. These records should be retained for the period mandated by institutional or funder policies and made available to reviewers upon request.
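A hedged sketch of what such record-keeping could look like in practice is shown below; the manuscript identifier, field names, and output file name are hypothetical, chosen only to illustrate bundling a prompt history, model parameters, and validation notes into one archivable JSON record.

```python
import json
from datetime import datetime, timezone

# Hypothetical record of AI assistance for one manuscript; the field names and
# values are illustrative, not a mandated schema.
record = {
    "manuscript_id": "AMBG-2025-EXAMPLE",  # placeholder identifier
    "archived_at": datetime.now(timezone.utc).isoformat(),
    "ai_tools": [
        {
            "tool": "generic LLM (example)",
            "purpose": "language editing",
            "prompt_history": [
                "Improve the grammar of the Methods section without changing its meaning.",
            ],
        }
    ],
    "model_parameters": {"temperature": 0.2},  # example setting
    "validation": "All AI-edited text was reviewed and approved by the human authors.",
}

# Write the record alongside the manuscript files so it can be shared on request.
with open("ai_usage_record.json", "w", encoding="utf-8") as handle:
    json.dump(record, handle, indent=2)
```

Because the record is plain JSON with descriptive field names, it can be deposited in a repository and indexed in line with the FAIR principles cited above.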

A living policy: Monitoring, dialogue, and continuous improvement

Technology evolves, and so must editorial standards. Supplementary Table 1 provides a template for cataloging how various journals are responding to the growing impact of AI in scientific writing. We invite librarians, editors, and authors to fill it out and share updates, transforming this editorial into a dynamic resource for the community. It is important to establish a general, widely accepted framework for using AI-based tools in biomedical research.

In conclusion, AI is neither a savior nor a saboteur, but a powerful tool whose value is determined by the integrity of those who use it. When used responsibly, AI can increase access to high-quality writing support, reveal hidden biological patterns, and improve the reproducibility of experimental workflows. However, if misused, it can pose significant risks to the credibility of scientific literature. Therefore, attempts to foster innovation while protecting against misuse will help to maintain the highest standards of transparency and accountability for authors, reviewers, and journals.

Acknowledgements

During the preparation of this manuscript, the author(s) used the free online tool “Edit My English” to enhance readability and ensure that the language is free of errors in grammar, spelling, punctuation, and tone. Moreover, the authors employed Microsoft Copilot to help identify the most relevant references for some of their statements. Following the use of these tools, the authors reviewed and edited the content as necessary, taking full responsibility for the content of the published article.

Conflict of Interest

A.L. is one of the editors of Archives of Molecular Biology and Genetics. However, he was not involved in any steps of the editorial processing of this manuscript. R.W. has nothing to declare.

Authors Contributions Statements

Both authors (A.L. and R.W.) made substantial contributions to the conception and design of the study, as well as performing data analysis and interpretation.

References

1. Abu-El-Ruz R, Hasan A, Hijazi D, Masoud O, Abdallah AM, Zughaier SM, et al. Artificial Intelligence in Biomedical Sciences: A Scoping Review. Br J Biomed Sci. 2025 Aug 5; 82:14362.

2. da Silva RGL. The advancement of artificial intelligence in biomedical research and health innovation: challenges and opportunities in emerging economies. Global Health. 2024 May 21;20(1):44.

3. Le H, Peng B, Uy J, Carrillo D, Zhang Y, Aevermann BD, et al. Machine learning for cell type classification from single nucleus RNA sequencing data. PLoS One. 2022 Sep 23;17(9):e0275070.

4. Sun N, Wang Y, Shi X, Yang D, Wu R, Yau SS. scMFF: a machine learning framework with multiple feature fusion strategies for cell type identification. BMC Bioinformatics. 2025 Nov 18;26(1):277.

5. Krokidis MG, Koumadorakis DE, Lazaros K, Ivantsik O, Exarchos TP, Vrahatis AG, et al. AlphaFold3: An Overview of Applications and Performance Insights. Int J Mol Sci. 2025 Apr 13;26(8):3671.

6. Zhang R, Kastrin A, Hristovski D, Fiszman M, Kilicoglu H. NLP Applications—Biomedical Literature. In: Natural Language Processing in Biomedicine: A Practical Guide. Cham: Springer International Publishing; 2024 Jun 9. pp. 351–95.

7. Wu S, Ma X, Luo D, Li L, Shi X, Chang X, et al. Automated literature research and review-generation method based on large language models. Natl Sci Rev. 2025 Apr 25;12(6):nwaf169.

8. Frommeyer TC, Gilbert MM, Fursmidt RM, Park Y, Khouzam JP, Brittain GV, et al. Reinforcement Learning and Its Clinical Applications Within Healthcare: A Systematic Review of Precision Medicine and Dynamic Treatment Regimes. Healthcare (Basel). 2025 Jul 19;13(14):1752.

9. Sordo Z, Chagnon E, Hu Z, Donatelli JJ, Andeer P, Nico PS, et al. Synthetic Scientific Image Generation with VAE, GAN, and Diffusion Model Architectures. J Imaging. 2025 Jul 26;11(8):252.

10. Po R, Yifan W, Golyanik V, Aberman K, Barron JT, Bermano A, et al. State of the art on diffusion models for visual computing. Computer Graphics Forum. 2024 May;43(2):e15063.

11. Resnik DB, Hosseini M. Disclosing artificial intelligence use in scientific research and publication: When should disclosure be mandatory, optional, or unnecessary? Account Res. 2025 Mar;24:1–13.

12. Kotsis KT. Redefining scientific authorship in the age of AI: Challenges for editors and institutions. European Journal of Innovative Studies and Sustainability. 2025 Sep 15;1(5):23–33.

13. Kemal Ö. Artificial Intelligence in Peer Review: Ethical Risks and Practical Limits. Turk Arch Otorhinolaryngol. 2025 Sep 26;63(3):108–9.

14. Wilkinson MD, Dumontier M, Aalbersberg IJ, Appleton G, Axton M, Baak A, et al. Comment: The FAIR Guiding Principles for scientific data management and stewardship. Sci. Data. 2016 Mar 15;3(1):1–9.
