Perspective Article
2025;1:3
doi: 10.25259/JHRE_4_2025

Biomedical research publication in the age of artificial intelligence: Current prospects for balancing integrity and innovation

Department of Periodontology, Saraswati Dental College and Hospital, Lucknow, Uttar Pradesh, India,
Center for Global Health Research, Saveetha Medical College and Hospitals, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, Tamil Nadu, India.
Research Institute of Oral Science, Nihon University School of Dentistry at Matsudo, Chiba, Japan.

*Corresponding author: Vivek Kumar Bains, Department of Periodontology, Saraswati Dental College and Hospital, 233, Tiwari Ganj, Ayodhya Road, Chinhat, Lucknow-226028, Uttar Pradesh, India. docvivek1976@gmail.com

Licence
This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-Share Alike 4.0 License, which allows others to remix, transform, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.

How to cite this article: Bains VK, Bhawal UK. Biomedical research publication in the age of artificial intelligence: Current prospects for balancing integrity and innovation. J Healthc Res Educ. 2025;1:3. doi: 10.25259/JHRE_4_2025

Abstract

Artificial intelligence (AI) is inevitably transforming scientific writing, peer review, and publication workflows. AI holds enormous potential to improve efficiency, equity, productivity, and editorial precision. However, its unregulated use poses risks to research integrity, authorship accountability, and data confidentiality. Its safe and ethical use requires vigilance, transparency, and human oversight to prevent bias, misinformation, and plagiarism while upholding the standards of scientific integrity. Journals periodically issue policies defining acceptable practices for AI use in manuscript preparation and review; these typically limit AI to grammar, language refinement, and formatting support, while strictly prohibiting data generation, analysis fabrication, or peer-review automation. Understanding the ethics frameworks issued by international editors and publishers for the use of AI in scientific publications, which aim to uphold honesty, comprehensiveness, and objectivity in healthcare publications, is therefore of paramount importance.

Keywords

Artificial intelligence (AI)
Committee on Publication Ethics (COPE)
International Committee of Medical Journal Editors (ICMJE)
Scholarly publishing
World Association of Medical Editors (WAME)

INTRODUCTION

Artificial intelligence (AI) is defined as “technology that enables computers and machines to simulate human learning, comprehension, problem-solving, decision-making, creativity, and autonomy.”[1]

It encompasses various machine-generated computations and learning, with origins traceable to Alan Mathison Turing’s work in the 1950s. The term “artificial intelligence” was coined in 1956 during a conference organized by John McCarthy, together with Marvin Minsky, Claude Shannon, and Nathan Rochester of the International Business Machines (IBM) Corporation.[2,3] It is one of the most transformative forces in modern healthcare. Yet, in clinical and academic settings where human lives and well-being are directly involved, its application demands heightened vigilance, empathy, and accountability. The healthcare professional’s role extends beyond research and publication. It carries an intrinsic societal responsibility to protect patient dignity, privacy, and safety.

AI differs from automation in that intelligent machines can mimic or exceed human behavior, rather than merely completing processes based on finite tools and rules. Key AI processes include machine learning (detecting patterns from large datasets) and natural language processing (understanding, interpreting, and generating human language).[2] Generative AIs, such as those using large language models (LLMs) like ChatGPT (Generative Pre-trained Transformer), are particularly relevant to scientific publishing, as they can generate research ideas, write papers, and assist in various aspects of article production. This necessitates the establishment of ethical standards for authors, reviewers, editors, and publishers to keep pace with new tools while protecting the scientific community from unsound science.[2,4]

AI has rapidly evolved from a computational novelty into a ubiquitous tool within the academic and scientific publishing landscape. Applications such as automated text generation, plagiarism detection, and manuscript proofreading have transformed how authors prepare, editors assess, and reviewers evaluate scientific content. However, this convenience comes with significant ethical, legal, and academic challenges. AI systems can introduce factual inaccuracies, unintentional plagiarism, and biased interpretations, while also raising concerns about data confidentiality when manuscripts are uploaded to public platforms. The global scholarly community has responded with an urgent need for clear ethical boundaries to preserve research integrity and public trust. Leading publishers and journals, including The Lancet, British Medical Journal (BMJ), New England Journal of Medicine (NEJM), Elsevier, Springer, and Journal of the American Medical Association (JAMA), have released formal statements outlining the permissible and prohibited uses of AI tools in manuscript preparation. Common to these statements are three foundational principles: transparency, human accountability, and data confidentiality.

In the medical and healthcare context, these issues become even more critical, as errors or ethical breaches may directly affect patient welfare and clinical decision-making. Journals stress that AI cannot replace human judgment, nor can it be credited as an author.[5] All use of AI must be disclosed, and content generated by algorithms must be verified by qualified professionals [Table 1].[6-16] Most reputed journals align with international publication ethics frameworks and regional data protection regulations, including those of the Committee on Publication Ethics (COPE), the World Association of Medical Editors (WAME), and the International Committee of Medical Journal Editors (ICMJE), as well as the European Union’s General Data Protection Regulation (GDPR) and India’s Digital Personal Data Protection Act (DPDP, 2023).[17-21] This paper aims to provide an informed perspective on the benefits, limitations, evolving role, and ethical considerations of AI use in biomedical research publications.

Table 1: Summary of guidelines for AI use in medical and scholarly publication
Year Journal/organization Publication type Key provision on AI use
2023 Nature (Springer Nature) Editorial Prohibits AI as author; requires disclosure; allows limited language assistance
2023 Science Editorial Bans AI-generated content; insists on human accountability
2023 JAMA Editorial Precludes inclusion of non-human (AI) authors; requires transparency in reporting AI use; discourages AI-generated image creation.
2023 Elsevier Editorial Disclose AI use in the writing process; Authors are ultimately responsible and accountable for the contents of the work.
2024 The Lancet Comment Mandate AI disclosure and ban data fabrication or peer-review automation
2024 BMJ Commentary Highlights risks of bias and misinformation; calls for transparency
2024 NEJM Perspective Integrating AI-generated text into electronic health records (EHRs) may erode the quality, accuracy, and human essence, resulting in confabulation, loss of clinical reasoning, data pollution (“model collapse”), and automation bias.
2025 NEJM Publication Policy Disclose any AI assistance, ensure full human accountability, and cannot list AI as an author; Authors must verify accuracy, originality, and attribution of all AI-generated material, and AI outputs cannot be cited as primary sources.
2025 JAMA Editorial Transparency, accountability, and confidentiality for use of AI; Disclosed, described, and verified by humans; Prohibited from uploading manuscripts to an AI tool; Responsible, ethical, and transparent integration of AI in authorship, peer review, and editorial processes.
2025 COPE/WAME/ICMJE Position statements Provide an ethical framework that includes disclosure, no AI authorship, and maintenance of confidentiality.

JAMA: Journal of the American Medical Association, BMJ: British Medical Journal, NEJM: New England Journal of Medicine, COPE: Committee on Publication Ethics, WAME: World Association of Medical Editors, ICMJE: International Committee of Medical Journal Editors

Considerate use of AI tools for authors

Authors are permitted to use AI tools for limited, well-defined functions that do not influence scientific conclusions or compromise the originality of their work. These include grammar and spelling corrections, copy-editing, stylistic refinement, translation of non-English text, English enhancement, and assistance in reference formatting. Such uses are considered acceptable provided that authors verify all AI-generated suggestions before submission. The authors remain fully accountable for the accuracy and validity of the final content, including factual data, analyses, and interpretations. Authors can use offline or enterprise-grade licensed software that guarantees data privacy and prohibits data reuse for algorithm training.[4,14-18]

Journals explicitly prohibit the use of AI systems for generating primary data, performing analyses, interpreting results, or drafting substantial portions of scientific arguments without human oversight. Authors must not upload confidential or unpublished manuscripts to public AI interfaces that store or reuse inputs for training purposes. Any attempt to employ AI for authorship attribution or content fabrication constitutes serious academic misconduct. Authors are reminded that any inclusion of patient data or sensitive institutional material in AI systems without anonymization and prior approval may constitute a breach of legal confidentiality obligations.[4,14,16-18]

Generative AI systems (such as ChatGPT, SciSpace, Thesis AI, Paperpal, Wordvice, and Wordtune) can not only produce extensive text, figures, or datasets, but also write research papers, review articles, and theses with remarkable fluency. While this ability enhances writing efficiency, it also introduces serious risks to research integrity and raises scientific concerns over authorship, accountability, and bias. AI can generate text that mimics authentic scientific writing, but it is often based on incomplete, incorrect, or entirely fabricated information. Because LLMs remix existing content from their training data, they may inadvertently reproduce others’ ideas or language without attribution, resulting in unintentional plagiarism. Furthermore, when AI-generated outputs are presented as genuine experimental results or interpretations, this constitutes a misrepresentation of scientific work.[2,16,19,22]

“Paper mills” are illicit commercial entities that sell fabricated or manipulated research papers to authors seeking publication. With generative AI, these operations can now produce fake manuscripts at scale, using AI to write convincing text, generate false data, and even create images or tables that appear authentic. Such fraudulent submissions flood journals and threaten the credibility of peer-reviewed literature. Furthermore, advanced AI algorithms can create synthetic images, audio, or videos, known as deepfakes, that imitate real experimental data, patient scans, or even researchers themselves in scientific contexts. Deepfakes may falsify clinical images, microscopy data, or interview recordings, creating false evidence that can mislead readers, distort scientific conclusions, and endanger patient safety when used in clinical research.[2,22,23] Therefore, maintaining the sanctity of academic authorship and ensuring that technological assistance complements, rather than compromises, scholarly rigor is essential. AI-based editing tools can enhance readability, reduce linguistic barriers for non-native English authors, and streamline administrative workflows. Nevertheless, without clear ethical boundaries, these same tools can distort authorship contributions, violate confidentiality agreements, and erode trust in the scientific literature.

Limitations of AI

AI tools are trained on vast datasets created by humans, which means they can inherit the same cultural or linguistic biases, such as those related to gender, race, or inclusivity, present in the source material. These biases may appear overtly or subtly, for instance, by favoring research in the majority language or mainstream academic perspectives.[2]

Although LLMs can simulate reasoning, they lack the intuition, emotional intelligence, and contextual awareness of human thought, often missing subtle cues such as humor, irony, or metaphor. This limitation may restrict their capacity to contribute creatively or critically to complex scientific discussions and can influence clinical decision-making because of their limited capacity for higher-level thinking.[2,24]

AI models are also prone to generating inaccuracies or hallucinations, producing authoritative-sounding outputs, statements, or references that appear plausible but are entirely false. Experiences with tools such as ChatGPT have shown that they may fabricate citations or alter publication dates, which can mislead researchers, raise ethical or legal concerns, and ultimately erode public confidence in scientific communication.[25] Because LLMs are designed to predict text patterns, they unintentionally reproduce phrases or ideas from existing literature without acknowledging the creator of the intellectual idea, resulting in inadvertent plagiarism or duplication of scholarly content.[26] The datasets used to train AI systems are derived from a mix of reliable and unreliable internet sources, which means that some generated information may be misleading, particularly in sensitive fields such as healthcare or clinical research, where errors could impact patient outcomes. Predictive algorithms are trained to identify recurring patterns, which can bias them towards conventional or widely accepted ideas while overlooking novel hypotheses. They may also rely on mainstream ideas that are based on incomplete or outdated data, thereby limiting innovation and perpetuating past biases.[2] Moreover, many AI tools function as “black-boxes”, with their datasets, architectures, and training processes kept proprietary by developers; this opacity hinders academic scrutiny and prevents users from evaluating potential bias or data quality, sometimes leading to unreliable or context-inappropriate outputs.[27]

Although AI can make publishing more inclusive in theory, without equitable access and localized training initiatives, it risks amplifying the very inequalities it aims to solve, creating a digital divide within the global research ecosystem due to unequal access to training and infrastructure.[28-30]

Another critical limitation of frequently using AI in scientific writing is that convenient AI-driven content generation can diminish researchers’ engagement with primary and original literature, thereby weakening their skills in reviewing literature, verifying sources, and interpreting context. Core academic practices, such as lateral and vertical reading, which involve critically cross-checking information across diverse sources, are increasingly being replaced by superficial text synthesis. This shift ultimately affects essential cognitive abilities, including argument analysis, critical thinking, creativity, and scientific integrity.[31,32] Furthermore, the uncritical reliance on AI-generated text or data has contributed to the rise of low-quality biomedical research papers, characterized by repetitive phrasing, factual inaccuracies, and poorly substantiated claims.[33,34] This not only dilutes the credibility of scientific literature but also poses a challenge to editors and reviewers to distinguish genuine data from algorithmic output. To protect the intellectual rigor of biomedical science, researchers must therefore use AI only as an augmentative tool, rather than a surrogate for analytical reading, conceptual understanding, and scholarly authorship.

Responsible use and transparency

A systematic review conducted by Khalifa & Albadawy[35] across PubMed, Embase, and Google Scholar, selecting 24 studies published after 2019 that directly examined AI’s role in academic writing, editing, research design, and publishing, identified six core domains in which AI can assist in manuscript writing. AI helps in brainstorming, identifying research gaps, and generating hypotheses by analyzing existing data and literature to inform these processes. It can propose suitable methodologies, sample sizes, and statistical models, supporting more structured research planning. However, over-reliance on AI may divert research objectives or reduce human creativity.[35-37] AI tools, such as ChatGPT, improve writing fluency, expand text, predict terminology, and ensure logical flow through structuring and tone analysis. They also enhance visual and multimedia presentations via auto-generated figures, infographics, and presentations. However, ethical vigilance is needed to prevent misuse, such as fabricating or falsifying content.[38,39] AI has revolutionized literature reviews by extracting, summarizing, and synthesizing massive volumes of data through semantic analysis (e.g., www.scispace.com). It creates summary tables and comparative insights, offering more comprehensive overviews while saving time. However, researchers must validate accuracy and context to avoid misleading interpretations.[40] AI enhances data interpretation, visualization, and curation. Tools like Tableau, NVivo, and RapidMiner can analyze large datasets and generate accessible insights. However, maintaining data integrity and ethical standards during automation is of utmost importance.[41] AI tools, such as Grammarly, Paperpal, and ChatGPT, refine manuscripts by proofreading, summarizing abstracts, and assisting in drafting peer-review responses.
They streamline submission tracking and correspondence, improving workflow efficiency while requiring ethical disclosure of AI use in manuscripts.[42] AI aids in dissemination and accessibility, including multilingual translation, social media outreach, and chatbot-based engagement, while also ensuring ethical compliance through plagiarism detection, integrity checks, and adherence to research ethics frameworks. However, transparency and disclosure of AI usage are strongly recommended.[43]

Global organizations, such as COPE, WAME, and ICMJE, are actively developing guidelines to promote transparency and accountability in the responsible use of AI within research and publishing. LLMs and other AI tools do not qualify for authorship and must not be listed as authors. When used for writing, language editing, or data handling, their role should be explicitly declared in the acknowledgements section, stating the name, version, and nature of use.[4,14,17,18] If AI tools assist in data analysis or figure generation, details of the model, prompt, and date of use should be clearly mentioned in the methods section. Researchers remain fully accountable for all AI-generated content. They must keep records of the system used, including the date, verify the accuracy of references, validate concepts and interpretations, and ensure that outputs and language are unbiased and correctly attributed; independent human oversight remains essential to safeguard the reliability and integrity of scientific communication.[2,19]

Current consensus and policy on disclosure of AI use

Transparency is central to ethical scholarly communication. The author must declare any AI involvement in manuscript preparation in either the acknowledgement or methods section. A suitable example of a disclosure statement is: “The author used [Tool Name, Version] for grammar and language refinement. No AI tool was used for data generation or interpretation.” Failure to disclose the use of AI within the editorial workflow violates privacy and ethical standards. Disclosure aligns with fundamental norms of scientific integrity, including honesty, transparency, accountability, reproducibility, and fairness.[44] Resnik and Hosseini[45] enumerated five ethical rationales for disclosing the use of AI in research. First, it ensures fair credit assignment, so that human authors are not improperly credited for work produced or significantly shaped by AI systems. Transparency about AI use also enhances accountability, allowing any issues, such as plagiarism, data fabrication, or algorithmic bias, to be traced back to the responsible individuals. Disclosure supports reproducibility by enabling other researchers to understand and replicate methods or analyses that involve AI tools. Furthermore, it establishes a clear boundary between human and machine contributions, clarifying the extent to which the work was guided by human judgment versus automated processes. Ultimately, openness about AI involvement reinforces trustworthiness, strengthening both public and academic confidence in the credibility and integrity of the scientific record.[45] They further introduced a two-part criterion for determining whether disclosure is mandatory, suggesting that AI use must be disclosed if it is intentional and substantial. Intentional use refers to the researcher's deliberate application of an AI tool for a specific scientific purpose. Substantial use means the tool directly affects the researcher's evidence, analysis, or conclusions. Three practical indicators of substantial AI use proposed by them are summarized in Table 2.[45,46]

Table 2: Three practical indicators of substantial AI use
Category Example Obligation
Mandatory Using AI to design experiments, draft sections, translate papers, summarise or paraphrase content, analyse or visualise data, or extract systematic review data. Indicators: (1) when AI tools make decisions that directly impact the research outcome, e.g., extracting data for a systematic review; (2) when an AI tool generates or synthesises content, data, or images that directly affect research outcomes, such as writing sections of a paper or creating synthetic data; (3) when an AI tool analyses genomic data, text, or radiographic images and produces an analysis that supports findings and conclusions and affects publication content. Must disclose
Optional Using AI for grammar checks, reference validation, brainstorming ideas, or reorganising text Author discretion
Unnecessary Using AI incidentally (e.g., embedded in software or in instruments like search engines or sequencers) Not required

AI use in peer review and editorial processes

Peer review is the cornerstone of scholarly publishing. It depends on several essential principles, including confidentiality, accountability, transparency, data minimization, fairness, and freedom from bias, as well as expert human judgment. In medicine and healthcare, where evidence-based accuracy determines patient outcomes, the shortage of qualified reviewers is acute. A small fraction of dedicated academics shoulders the majority of review tasks, leading to fatigue and variability in the quality of reviews.[47] AI tools are now powerful enough to reduce administrative burden, automate quality checks, and support editors in identifying suitable reviewers. Further, they assist reviewers in organizing, verifying, and critically appraising manuscripts.[48,49] AI can assist reviewers in rephrasing comments, leading to kinder and more professional reviews, thus helping clarify and improve the tone of feedback. AI has the potential to accelerate the peer review process by automating tasks such as pre-submission checks and managing large volumes of submissions, as well as quickly identifying issues related to plagiarism, statistical errors, or data inconsistencies. AI can help match manuscripts with established experts by analyzing keywords and prior publications, thereby streamlining the reviewer selection process.[50]

However, AI-generated comments may be overly generic, lack specificity, and fail to engage with the manuscript's actual content. AI-generated data or reference outputs could be inaccurate or fabricated and, if passed to authors without proper verification, undermine the reviewer’s credibility. Unauthorized AI use without consent risks compromising confidentiality and intellectual property, particularly when confidential manuscripts are uploaded to external servers, which may further degrade the quality of reviews. Algorithm-based AI decisions may bias research selection and blur the boundaries between human and AI contributions, thereby reducing originality and credibility. Over-reliance on AI may erode human judgment, leading reviewers to skip critical analysis and provide less thoughtful and detailed evaluations.[50,51]

Therefore, reviewers require training to use AI responsibly, recognize its limitations, and maintain ongoing professional development. They must understand that most AI tools operate through cloud-based or public servers, which inherently compromise confidentiality, and that their unregulated use can undermine the ethical foundation of peer review. When a reviewer uploads even a portion of a manuscript to an online AI platform, it may be temporarily stored, used for model training, or expose sensitive data. Reviewers must adopt workflows that ensure data never leaves their secure environment while still harnessing AI’s cognitive strength. A practical workflow for ethical AI-assisted reviewing, without uploading any substantial part of the manuscript, begins with reading the full manuscript manually. AI may then be used to search recent studies, summarize published literature, and cross-check citations or verify the contextual accuracy of references. Next, statistical claims are verified and p-value errors detected using offline AI calculators or statcheckers; at this stage, AI can also help explain complex concepts or refresh knowledge about the statistics, methods, or terminology mentioned in the paper without disclosing the manuscript text. The reviewer then drafts an original review and uses AI only to refine the grammar and tone or to summarize their own notes, ensuring professionalism and neutrality. A final check ensures originality and removes any AI artifacts, with the reviewer taking full accountability [Table 3]. In any case, reviewers and editors are discouraged from submitting any manuscript text, figure, or table to third-party AI systems without explicit consent from the corresponding author or publisher. Human oversight must remain central, with AI used to complement, not replace, the judgment of reviewers or editors.[50] Finally, if AI tools are used in any part of the review preparation, phrasing, or literature clarification, reviewers should disclose this to the editor.
A disclosure statement by reviewers affirming that “AI tools were used for linguistic or conceptual assistance, not for content generation or manuscript analysis, and that confidentiality was maintained throughout” can be added for transparency and accountability.

Table 3: Practical workflow for ethical AI-assisted reviewing without uploading a substantial part of the manuscript
Step Reviewer's action Ethical AI use
Initial reading Manual reading of the full-text of the manuscript No use of an AI tool
Background search Searching for similar studies using keywords and title AI-assisted literature searching and summarisation
Statistical and methodological verification Check reported statistics, tools, and methods against comparable studies Use offline AI calculators or statcheckers
Drafting of review Write a review in your own words Use AI to refine tone or summarise own notes
Final checking of the review Ensure originality and remove AI artefacts. Reviewers take full accountability; Disclosure of a statement if AI is used substantially.
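The "statistical verification" step above is the kind of check that tools such as statcheck perform offline: recomputing a p-value from the reported test statistic and flagging mismatches, without any manuscript text leaving the reviewer's machine. The snippet below is a minimal illustrative sketch of that idea, not any specific tool's implementation; the function names are hypothetical, and it uses a two-sided z test for simplicity so that only the Python standard library is needed.

```python
import math

def two_sided_p_from_z(z: float) -> float:
    """Two-sided p-value for a reported z statistic (standard normal)."""
    return math.erfc(abs(z) / math.sqrt(2.0))

def p_value_consistent(z: float, reported_p: float, tol: float = 0.005) -> bool:
    """Flag a reported p-value that disagrees with its own test statistic."""
    return abs(two_sided_p_from_z(z) - reported_p) <= tol

# A manuscript reporting z = 1.96 with "p = 0.05" is internally consistent;
# the same statistic paired with "p = 0.01" would be flagged for the authors.
consistent = p_value_consistent(1.96, 0.05)
flagged = not p_value_consistent(1.96, 0.01)
```

Because only numbers extracted by the reviewer are processed, no confidential text is exposed; a flag from such a check is a prompt for human scrutiny, not evidence of misconduct.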

AI misconduct detection and publication ethics

Adele da Veiga[52] analyzed the AI-related publication policies of the world’s top 10 academic publishers to propose unified ethical standards for the use of AI in research and publishing. Using a literature-based thematic analysis, the author identified significant inconsistencies among publishers regarding the acceptable use of AI. Therefore, a coherent, transparent, and globally harmonized set of ethical guidelines is needed to distinguish the responsible use of AI-assisted tools from the more cautious use of generative AI in scholarly publishing.[52] Research integrity communities, journals, and publishers have been utilizing numerous AI tools to detect research misconduct, including plagiarism, image manipulation, and paper mill activities. While AI tools like ImaCheck, FigCheck, Papermill Alarm, Proofig, and ImageTwin can enhance research integrity, they also introduce ethical, procedural, and fairness challenges.[46,53,54]

AI screening systems are not error-free and can generate false positives, wrongly flagging legitimate work, which may lead to unsubstantiated or harmful allegations against innocent researchers. This is because an AI tool cannot determine intent, which is essential in distinguishing misconduct from honest error under established definitions in US federal policy and COPE standards. Hosseini & Resnik[46] highlighted that AI detection tools have technical limitations, such as the inability to recognize subtle image alterations or conceptual plagiarism. Furthermore, misuse of AI screening could introduce bias, especially if applied unevenly across submissions from different regions or institutions. They emphasize that any concerns identified by AI systems must undergo human verification before action is taken, ensuring that false positives do not unfairly harm researchers. Journals should rely only on validated and reliable AI tools, while staying alert to emerging forms of misconduct that may evade current detection methods. Authors flagged by these systems must be informed and allowed to respond or clarify before conclusions are drawn.[46] Moreover, AI screening should be applied consistently across all submissions to avoid regional or institutional bias. Publishers should publicly disclose their policies on how AI tools are implemented and how potential cases of misconduct are addressed.

RECOMMENDATIONS AND CONCLUSION

While AI-assisted tools, such as grammar and referencing software, are widely accepted for improving grammar, readability, and formatting, the use of generative AI tools remains contentious, particularly in the pre-writing and data analysis phases. The integration of AI into scientific publishing, particularly LLMs such as ChatGPT, is transforming the landscape of biomedical research, academic writing, peer review, and editorial decision-making. While AI enhances efficiency, consistency, and linguistic precision, it also raises profound ethical questions about biomedical research, authorship, accountability, and the preservation of human intellect within the scientific process. Tian et al.[55] observed that while LLMs have improved efficiency in literature summarization, medical education, and patient communication, they have yet to revolutionize biomedicine. Persistent challenges, such as hallucinated information, algorithmic bias, and privacy risks, demand human oversight, contextual interpretation, and domain-specific validation. Therefore, researchers, authors, reviewers, and editors must ensure that patient data remain confidential, that results are interpreted in context, and that algorithmic outputs are transparently verified.

Hryciw et al.[56] provide an ethical framework centered on integrity, transparency, validity, and accountability, along with a clear classification system for levels of AI involvement, ranging from proofreading and restructuring to drafting and potentially autonomous writing. This model stresses that increasing AI engagement carries a proportional need for human oversight, disclosure, and verification. From an author's standpoint, this framework keeps AI a supportive collaborator, enhancing clarity and precision of thought without diluting originality or ethical responsibility. Ciaccaio[16] extends this discussion by identifying the ethical grey zones that authors, reviewers, and editors must navigate: acceptable assistance must be clearly differentiated from unethical practices such as unacknowledged AI paraphrasing, ghost-writing, and plagiarism. Editors' insistence on explicit disclosure of AI usage mirrors the need for transparency statements, similar to conflict-of-interest or funding declarations, to protect academic integrity. This is because undisclosed AI use can blur accountability, complicate peer review, and potentially undermine the credibility of published research.

Drozdz and Ladomery[49] highlighted that the centuries-old peer review system continues to face challenges of bias, inconsistency, and reviewer fatigue. AI can assist editors in screening manuscripts, detecting plagiarism, and checking references, thereby allowing human reviewers to focus on conceptual rigor and methodological soundness. However, sustainable reform requires not only technological adaptation but also recognition, mentorship, and fair reward systems for reviewers, ensuring that efficiency gains do not come at the expense of motivation or depth of analysis. The responsible integration of AI across the publication process requires collective accountability from authors, reviewers, and editors.[50,51] Clear policies and training are essential to define acceptable uses of AI, promote transparency through disclosure statements, and ensure all stakeholders understand how to interpret and manage AI-generated content. Equal attention must be given to protecting data privacy and maintaining the confidentiality of manuscripts, as the use of public AI platforms without safeguards may expose sensitive or proprietary information. All publishers and journals prohibit listing AI as an author, prohibit uploading any identifiable manuscript content to public AI platforms, emphasize full disclosure of AI use, and stress that human authors retain accountability for content integrity.

Ultimately, the goal of integrating AI into the scientific ecosystem is not to replace human reasoning but to redefine the synergy between human insight and machine precision. The future of scholarly publishing will depend on our ability to strike a balance in which technology amplifies human wisdom without eroding the ethical and emotional foundations of scientific discovery. Ethical healthcare publishing requires that all content derived or refined using AI tools be critically evaluated by qualified professionals. As custodians of healthcare knowledge, researchers and educators must advocate for responsible innovation, using technology to elevate evidence-based practice without compromising the moral and social contract between healthcare providers and the communities they serve.

Authors' contributions:

VKB: Conception or design; VKB, UKB: Acquisition, analysis, or interpretation of data; Drafting and revision of the work; and Final approval of the manuscript.

Ethical approval:

Institutional Review Board approval is not required.

Declaration of patient consent:

This perspective article does not involve human participants; therefore, patient consent was not required.

Conflict of interest:

There is no conflict of interest.

Use of artificial intelligence (AI)-assisted technology for manuscript preparation:

Artificial intelligence (AI)-based tools were used solely for grammar correction, language refinement, and enhancement of readability. The authors take full responsibility for the content and confirm that no AI tools were used for idea generation, data analysis, result interpretation, or any other form of substantive intellectual contribution. All AI-assisted edits were carefully reviewed and approved by the authors to ensure the accuracy and integrity of the final manuscript.

Financial support and sponsorship: Nil

References

  1. What is artificial intelligence (AI)? IBM Think. [Last accessed on 2025 Oct 13]. Available from: https://www.ibm.com/think/topics/artificial-intelligence
  2. Ethical use of artificial intelligence for scientific writing: current trends. J Hum Lact. 2024;40(2):211-5. doi: 10.1177/08903344241235160
  3. Brief history of artificial intelligence. Neuroimaging Clin N Am. 2020;30(4):393-9. doi: 10.1016/j.nic.2020.07.004
  4. Artificial Intelligence (AI) in Decision Making: A Discussion Document. 2021.
  5. Use of AI in family medicine publications: a joint editorial from journal editors. PRiMER. 2025;9:3. doi: 10.1370/afm.240575
  6. How ChatGPT and other AI tools could disrupt scientific publishing. Nature. 2023;622(7982):234-6. doi: 10.1038/d41586-023-03144-w
  7. Change to policy on the use of generative AI and large language models. Science. [Last accessed on 2025 Oct 13]. Available from: https://www.science.org/content/blog-post/change-policy-use-generative-ai-and-large-language-models
  8. Guidance for authors, peer reviewers, and editors on use of AI, language models, and chatbots. JAMA. 2023;330(8):702-3. doi: 10.1001/jama.2023.12500
  9. Generative artificial intelligence and scientific publishing: urgent questions, difficult answers. Lancet. 2024;403(10432):1118-20. doi: 10.1016/S0140-6736(24)00416-1
  10. Quality and safety of artificial intelligence-generated health information. BMJ. 2024;384:q596. doi: 10.1136/bmj.q596
  11. Large language models and the degradation of the medical record. N Engl J Med. 2024;391(17):1561-4. doi: 10.1056/NEJMp2405999
  12. Editorial policies: use of AI-assisted technologies. [Last accessed on 2025 Oct 13]. Available from: https://www.nejm.org/about-nejm/editorial-policies
  13. Artificial intelligence in peer review. JAMA. 2025 Aug 28. doi: 10.1001/jama.2025.15827
  14. Paper mills research. COPE Research Report. 2024.
  15. Editors' statement on the responsible use of generative AI technologies in scholarly journal publishing. Med Health Care Philos. 2023;26:499-503. doi: 10.1007/s11019-023-10176-6
  16. Use of artificial intelligence in scientific paper writing. Inform Med Unlocked. 2023;41:101253. doi: 10.1016/j.imu.2023.101253
  17. Defining the role of authors and contributors. [Last accessed on 2025 Oct 13]. Available from: https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html
  18. Chatbots, generative AI, and scholarly manuscripts.
  19. Guidelines for the use of generative artificial intelligence tools for biomedical journal authors and reviewers. Arthroscopy. 2023;40(3):E1-E4. doi: 10.1016/j.arthro.2023.10.037
  20. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation). Off J Eur Union. 2016;L 119:1-88. [Last accessed on 2025 Oct 13]. Available from: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679
  21. (Act No. 22 of 2023). Enacted 11 Aug 2023. Ministry of Electronics and Information Technology, Government of India. [Last accessed on 2025 Oct 13]. Available from: https://www.meity.gov.in/static/uploads/2024/06/2bf1f0e9f04e6fb4f8fef35e82c42aa5.pdf
  22. Paper mills research. Research report from COPE & STM. 2022. doi: 10.24318/jtbG8IHL
  23. Deepfakes: a new threat to image fabrication in scientific publications? Patterns (N Y). 2022;3(5):100509. doi: 10.1016/j.patter.2022.100509
  24. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare (Basel). 2023;11(6):887. doi: 10.3390/healthcare11060887
  25. Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus. 2023;15(2):e35179. doi: 10.7759/cureus.35179
  26. Editorial: generative artificial intelligence as a plagiarism problem. Biol Psychol. 2023;181:108621. doi: 10.1016/j.biopsycho.2023.108621
  27. ChatGPT: five priorities for research. Nature. 2023;614(7947):224-6. doi: 10.1038/d41586-023-00288-7
  28. Utilizing ChatGPT in clinical research related to anesthesiology: a comprehensive review of opportunities and limitations. Anesth Pain Med (Seoul). 2023;18(3):244-51. doi: 10.17085/apm.23056
  29. Digital divide in AI-powered education: challenges and solutions for equitable learning. J Inform Systems Eng Manage. 2025;10(21s). [Last accessed on 2025 Oct 13]. Available from: https://jisemjournal.com/index.php/journal/article/download/3327/1439/5448
  30. The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS Nexus. 2024;3(6):pgae191. doi: 10.1093/pnasnexus/pgae191
  31. The potential and concerns of using AI in scientific research: ChatGPT performance evaluation. JMIR Med Educ. 2023;9:e47049. doi: 10.2196/47049
  32. The impact of artificial intelligence tools on academic writing instruction in higher education: a systematic review. Arab World English J (AWEJ) Spec Issue ChatGPT. 2024:26-55. doi: 10.24093/awej/ChatGPT.2
  33. AI linked to explosion of low-quality biomedical research papers. Nature. 2025;641(8065):1080-1. doi: 10.1038/d41586-025-01592-0
  34. Explosion of formulaic research articles, including inappropriate study designs and false discoveries, based on the NHANES US national health database. PLoS Biol. 2025;23(5):e3003152. doi: 10.1371/journal.pbio.3003152
  35. Using artificial intelligence in academic writing and research: an essential productivity tool. Digit Health. 2024;10:20552076241225887. doi: 10.1016/j.cmpbup.2024.100145
  36. Exploring the role of artificial intelligence in enhancing academic performance: a case study of ChatGPT. Rochester (NY): SSRN; 2022. [Last accessed on 2025 Oct 13]
  37. Natural language processing for literature search in vascular surgery: a pilot study testing an artificial intelligence based application. EJVES Vasc Forum. 2023;60:48-52. doi: 10.1016/j.ejvsvf.2023.09.004
  38. Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: Pandora's box has been opened. J Med Internet Res. 2023;25:e46924. doi: 10.2196/46924
  39. The impact of artificial intelligence (AI) programs on writing scientific research. Ann Biomed Eng. 2023;51(3):459-60. doi: 10.1007/s10439-023-03140-1
  40. Artificial intelligence and the conduct of literature reviews. J Inform Technol. 2022;37(2):209-26. doi: 10.1177/02683962211048201
  41. On the use of AI-based tools like ChatGPT to support management research. Eur J Innov Manag. 2023;26(7):233-41. doi: 10.1108/EJIM-02-2023-0156
  42. Artificial intelligence in academic writing: a paradigm-shifting technological advance. Nat Rev Urol. 2023;20(6):327-8. doi: 10.1038/s41585-023-00746-x
  43. AI-powered chatbots in medical education: potential applications and implications. Cureus. 2023;15(8):e43271. doi: 10.7759/cureus.43271
  44. Defining the boundaries of AI use in scientific writing: a comparative review of editorial policies. J Korean Med Sci. 2025;40(23):e187. doi: 10.3346/jkms.2025.40.e187
  45. Disclosing artificial intelligence use in scientific research and publication: when should disclosure be mandatory, optional, or unnecessary? Account Res. 2025;32(1):1-15. doi: 10.1080/08989621.2024.2459963
  46. Guidance needed for using artificial intelligence to screen journal submissions for misconduct. Res Ethics. 2025;21(1):24-31. doi: 10.1177/17470161241254052
  47. The challenge of reviewers scarcity in academic journals: payment as a viable solution. Einstein (Sao Paulo). 2024;22:eED1194. doi: 10.31744/einstein_journal/2024ED1194
  48. The growing demand for peer review: current challenges and potential reforms. Br J Biomed Sci. 2025;82:14930. doi: 10.3389/bjbs.2025.14930
  49. The peer review process: past, present, and future. Br J Biomed Sci. 2024;81:12054. doi: 10.3389/bjbs.2024.12054
  50. Artificial intelligence in peer review: enhancing efficiency while preserving integrity. J Korean Med Sci. 2025;40(7):e92. doi: 10.3346/jkms.2025.40.e92
  51. Benefits and challenges of using AI for peer review: a study on researchers' perceptions. Ser Libr. 2024;85(5-6):144-54. doi: 10.1080/0361526X.2024.2428377
  52. Ethical guidelines for the use of generative artificial intelligence and artificial intelligence-assisted tools in scholarly publishing: a thematic analysis. Sci Ed. 2025;12(1):28-34.
  53. 'Papermill alarm' software flags potentially fake papers. Nature. 2022. doi: 10.1038/d41586-022-02997-x
  54. AI beats human sleuth at finding problematic images in research papers. Nature. 2023;622(7982):230. doi: 10.1038/d41586-023-02920-y
  55. Opportunities and challenges for ChatGPT and large language models in biomedicine and health. Brief Bioinform. 2024;25(1):bbad493. doi: 10.1093/bib/bbad493
  56. Guiding principles and proposed classification system for the responsible adoption of artificial intelligence in scientific writing in medicine. Front Artif Intell. 2023;6:1283353. doi: 10.3389/frai.2023.1283353