
Balancing Innovation and Integrity: The Role of AI in Research and Scientific Writing

Authors BaHammam AS 

Received 19 December 2023

Accepted for publication 21 December 2023

Published 29 December 2023 Volume 2023:15 Pages 1153–1156

DOI https://doi.org/10.2147/NSS.S455765

Checked for plagiarism Yes

Editor who approved publication: Dr Sarah L Appleton



Ahmed S BaHammam1–3

1Editor-in-Chief Nature and Science of Sleep; 2Department of Medicine, University Sleep Disorders Center and Pulmonary Service, King Saud University, Riyadh, Saudi Arabia; 3King Saud University Medical City, Riyadh, Saudi Arabia

Correspondence: Ahmed S BaHammam, Professor of Medicine, University Sleep Disorders Center, Department of Medicine, College of Medicine, King Saud University, Box 225503, Riyadh, 11324, Saudi Arabia, Tel +966-11-467-9495, Fax +966-11-467-9179, Email [email protected]

Some people call this artificial intelligence, but the reality is this technology will enhance us. So instead of artificial intelligence, I think we’ll augment our intelligence.

– Ginni Rometty

In today’s scientific landscape, artificial intelligence (AI) is revolutionizing research methodologies and scientific writing, reshaping how we conduct and disseminate research. As AI’s presence grows, so do questions surrounding ethics, authenticity, and the integrity of scientific publications. The increasing use of AI tools, such as large language models (LLMs) like Chat Generative Pre-Trained Transformer (ChatGPT), Google Bard, and Bing AI, in research publications has raised concerns and sparked discussions within the research and academic communities.1 While AI and LLMs offer potential benefits, such as improved efficiency and transformative solutions, they also present challenges related to ethical considerations, bias, fake publications, and malicious use.2

AI has the potential to enhance various aspects of research, including data processing, task automation, and personalized experiences.1 However, AI usage in research and scientific writing can pose risks such as bias reinforcement, data privacy concerns, perpetuating data inaccuracies, and the potential for reduced critical thinking due to overreliance.3 Therefore, the development of guidelines for using AI in research and scientific writing is crucial to ensure this technology’s responsible and ethical application.

This editorial, published in Nature and Science of Sleep, primarily aims to enhance awareness of the evolving role of AI in research and scientific writing, emphasizing both its potential advantages and ethical challenges. By promoting responsible AI use, advocating for ethical guidelines, and engaging stakeholders, we strive to empower authors, reviewers, and the broader research community to navigate the dynamic landscape of AI in scientific writing while upholding the highest standards of integrity and credibility. Furthermore, we emphasize the critical need for the development of international guidelines that guide the responsible use of AI and LLMs in research and scientific writing.

AI’s Potential Benefits and Challenges

AI holds the promise to profoundly transform research and education through several key advantages. First, it can process vast amounts of data swiftly and efficiently, enabling researchers to navigate complex datasets and extract meaningful insights.1 Second, the automation features of AI streamline tasks like formatting and citation, freeing up substantial time and energy that researchers can redirect towards more complex and innovative work.3,4 Finally, AI can curate personalized learning journeys for students, tailoring the experience to their unique needs and learning preferences.5

Nevertheless, while promising, AI systems have notable drawbacks, especially in health and medical research. These systems can amplify and perpetuate biases present in the training data, leading to skewed predictions and potentially harmful implications for patient care.6,7 This is concerning, as biases in AI models can emerge during various stages, from data collection to model evaluation.7 Such biases can result in inaccurate findings that might influence clinical guidelines or medical interventions, and recent studies have underscored these concerns, suggesting that these biases can lead to significant health disparities.8 Another emerging challenge is the misuse of AI by paper mills to produce fraudulent scientific papers. This abuse of AI technology has led to an increase in the volume of fake publications, undermining the credibility of scientific research.9 These paper mills employ sophisticated AI tools to generate texts and images that are increasingly difficult to distinguish from authentic research, posing a significant threat to the integrity of scientific literature.9

Moreover, AI models, particularly those based on deep learning, are often viewed as “black boxes”; their complex inner mechanisms can be elusive, making results challenging to interpret, especially for those not versed in the domain.10 Another concern, especially in academic circles, is the reliability of AI-generated text. Platforms like ChatGPT may produce content containing inaccuracies or plagiarism, jeopardizing the credibility of the work, especially if it includes false references or citation errors.11–13 Additionally, although AI tools can aid in task automation and streamline the writing process, it is important to acknowledge that these tools cannot replace the unique creativity and insight inherent in humans. AI operates by analyzing existing data and recognizing patterns; unlike humans, it lacks the ability to engage in unconventional thinking or to forge innovative links.

Authorship and Attribution

In January 2023, Nature reported on the controversial issue of ChatGPT, an AI tool, being listed as an author on scientific and health research papers. The report highlighted that at least four articles, including two preprints and two published articles, credited ChatGPT as a co-author.14 However, this practice was met with disapproval from many scientists and led to discussions about the ethical implications and the validity of AI tools being credited as authors.15 Therefore, the currently accepted practice indicates that non-human AI and language models are not eligible for authorship, as they lack the ability to take responsibility for the work, provide intellectual contributions, or approve the final version of the manuscript; this consensus is shared among journals and research communities.16

Moreover, work generated by AI may not be subject to the rules of copyright; recently, the United States District Court for the District of Columbia confirmed that artwork generated autonomously by AI alone is not entitled to protection under the Copyright Act. This ruling is significant as it sets a precedent for those seeking to secure ownership and copyright protection for AI-generated content.17 Applying these legal principles to academic work suggests that AI-generated content, even when significantly modified by human authors, might not receive the same copyright protection as work solely created by humans. This necessitates a reassessment of how AI contributions are recognized in scientific publications and calls for definitive guidelines on AI’s role in academic content creation. Such guidelines should safeguard the distinct recognition and copyright of human authorship. This ruling highlights the changing legal context of AI in academia, underscoring the need for collaboration among publishers, researchers, and legal professionals to address these emerging issues.

Journals’ policies on using generative AI for scientific writing differ. Some publishers prohibit AI use without explicit editor permission, while others mandate detailed disclosure in the manuscript. Despite these differences, there is a consensus on the necessity for transparency and the author’s responsibility to uphold content integrity. In academia’s early AI integration phase, there is a potential stigma around AI-assisted manuscripts, even when AI is used merely to enhance grammar and clarity. Such manuscripts might be viewed skeptically by reviewers and readers, although there is no definitive evidence on how declaring AI assistance impacts paper acceptance. Contrary to best practices, this uncertainty may lead some authors to omit disclosure of AI use.

To address this potential stigma, enhancing education for all stakeholders, including authors, reviewers, and editors, is imperative. Implementing comprehensive workshops, seminars, and detailed guidelines can inform these groups about the ethical use of AI tools in academic writing, highlighting their role in augmenting human intellect and creativity rather than replacing it. Dove Press, in its author instructions, explicitly states that any AI tool assistance, including large language models for content generation, must be acknowledged.18 Authors bear full responsibility for their article’s validity, originality, and integrity, and are expected to use AI tools responsibly, aligning with the publisher’s authorship ethics and editorial policies.

Detecting AI Text

While publishers and editors have established various software tools for detecting similarities and plagiarism, identifying AI-generated text remains a complex challenge. The growing prevalence of AI in content creation blurs the line between human and machine authorship, raising concerns about authorship authenticity and transparency.12,19,20 In response, various stakeholders in academia and publishing, such as publishers, reviewers, and editors, are increasingly turning to AI content detection tools. These tools aim to differentiate between human-written and AI-generated texts.12,19 However, the effectiveness of these tools is not uniform, and ongoing research is essential to enhance their accuracy and reliability.21
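To make the idea of such detection signals concrete: many detection tools are trained classifiers that combine numerous surface statistics of text. The toy Python sketch below illustrates one such statistic, sentence-length variation (sometimes called “burstiness”), which some analyses associate with human rather than machine prose. This is a purely illustrative assumption, not the method of any tool cited in this editorial, and the sample snippets are hypothetical.

```python
import re
from statistics import pstdev


def burstiness(text: str) -> float:
    """Toy surface statistic: population std. dev. of sentence lengths (in words).

    Illustrative only: real AI-text detectors combine many signals inside
    trained classifiers; a single score like this is not a reliable detector.
    """
    # Split on sentence-ending punctuation and drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if not lengths:
        return 0.0
    return pstdev(lengths)


# Hypothetical snippets for illustration only.
uniform = "The cat sat here. The dog ran there. The bird flew away."
varied = (
    "Yes. The committee, after months of deliberation and several "
    "contentious votes, finally approved the revised policy. Done."
)
```

On snippets like these, the more varied passage scores higher than the uniform one; the point is only that detectors operate on measurable properties of text whose reliability must be evaluated empirically, as the studies above note.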

Towards International Guidelines

As AI solutions in scientific writing rapidly evolve, there is a growing need for international guidelines that address transparency, reproducibility, and ethics in AI-assisted research and writing. These guidelines should consider both the benefits and challenges of AI and LLMs in research and education.22,23 Therefore, an international statement guiding the responsible use of these technologies is urgently needed. This statement should focus on transparency, accountability, ongoing research, and risk mitigation, incorporating aspects like monitoring, evaluation, user education, and awareness. Transparency and disclosure are crucial when using AI-generated content in scientific writing.23 Researchers must disclose their use of AI tools, ensuring human expertise guides the accuracy, coherence, and credibility of the content.3

To formulate these guidelines effectively, a consortium should be established, encompassing diverse stakeholders such as academic institutions, AI developers, legal experts, publishers, and representatives from ethics and data privacy fields. This collaborative approach will ensure comprehensive and pragmatic guidelines. The development of these guidelines should also address biases, promote fairness in AI-generated content, and consider data privacy and security, especially as AI tools often require access to large datasets, potentially containing sensitive information.24

Furthermore, establishing ethical guidelines for AI use in research and scientific writing is vital to maintain research integrity and ensure that scientific literature continues to be a rigorous, accurate, and innovative knowledge source.25 These guidelines should ensure that AI acts as an enhancement to productivity, rather than a replacement for human effort.

In creating these guidelines, it is essential to include perspectives from diverse groups.1 This includes non-native English speakers who may use Language Models and AI to refine their writing, and individuals with special needs who could benefit from AI assistance. However, developing these comprehensive guidelines will likely be a time-intensive process.

In summary, while AI and LLMs hold promise in research, they also bring forth challenges that demand our attention. By promoting open dialogue and implementing robust guidelines, we can ensure the ethical and responsible integration of AI and LLMs, thereby maximizing their potential benefits while mitigating risks. This editorial serves as a call to action for the authors, reviewers, and readers of Nature and Science of Sleep to prioritize responsible AI use and actively engage in ongoing discussions and policy development in the field of scientific writing.

Disclosure

The author reports no conflicts of interest in this work.

References

1. Bahammam AS, Trabelsi K, Pandi-Perumal SR, Jahrami H. Adapting to the Impact of Artificial Intelligence in Scientific Writing: Balancing Benefits and Drawbacks While Developing Policies and Regulations. J Nature Sci Med. 2023;6(3):152–158.

2. Hammad M. The Impact of Artificial Intelligence (AI) Programs on Writing Scientific Research. Ann Biomed Eng. 2023;51(3):459–460. doi:10.1007/s10439-023-03140-1

3. Salvagno M, Taccone FS, Gerli AG. Can artificial intelligence help for scientific writing? Critical Care. 2023;27(1):75. doi:10.1186/s13054-023-04380-2

4. Pisica AI, Edu T, Zaharia RM, et al. Implementing Artificial Intelligence in Higher Education: Pros and Cons from the Perspectives of Academics. Societies. 2023;13(5):118. doi:10.3390/soc13050118

5. U.S. Department of Education, Office of Educational Technology. Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations; 2023:1–64. Available from: https://tech.ed.gov/. Accessed December 12, 2023.

6. Celi LA, Cellini J, Charpignon M-L, et al. Sources of bias in artificial intelligence that perpetuate healthcare disparities-A global review. PLOS Digit Health. 2022;1(3):e0000022. doi:10.1371/journal.pdig.0000022

7. Mittermaier M, Raza MM, Kvedar JC. Bias in AI-based models for medical applications: challenges and mitigation strategies. NPJ Digital Med. 2023;6(1):113. doi:10.1038/s41746-023-00858-z

8. Nazer LH, Zatarah R, Waldrip S, et al. Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digit Health. 2023;2(6):e0000278. doi:10.1371/journal.pdig.0000278

9. Liverpool L. AI intensifies fight against ‘paper mills’ that churn out fake research. Nature. 2023;618(7964):222–223. doi:10.1038/d41586-023-01780-w

10. Linardatos P, Papastefanopoulos V, Kotsiantis S. Explainable AI: a Review of Machine Learning Interpretability Methods. Entropy. 2021;23(1):18. doi:10.3390/e23010018

11. Anderson N, Belavy DL, Perle SM, et al. AI did not write this manuscript, or did it? Can we trick the AI text detector into generated texts? The potential future of ChatGPT and AI in Sports & Exercise Medicine manuscript generation. BMJ Open Sport Exerc Med. 2023;9(1):e001568. doi:10.1136/bmjsem-2023-001568

12. Elali FR, Rachid LN. AI-generated research paper fabrication and plagiarism in the scientific community. Patterns. 2023;4(3):100706. doi:10.1016/j.patter.2023.100706

13. Walters WH, Wilder EI. Fabrication and errors in the bibliographic citations generated by ChatGPT. Sci Rep. 2023;13(1):14045. doi:10.1038/s41598-023-41032-5

14. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023;613(7945):620–621. doi:10.1038/d41586-023-00107-z

15. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use [editorial]. Nature. 2023;613(7945):612. doi:10.1038/d41586-023-00191-1

16. COPE. Authorship and AI tools: COPE position statement. 2023. Available from: https://publicationethics.org/cope-position-statements/ai-author. Accessed December 21, 2023.

17. K&L Gates. Federal Court Rules Work Generated by Artificial Intelligence Alone Is Not Eligible for Copyright Protection. 2023. Available from: https://www.klgates.com/Federal-Court-Rules-Work-Generated-by-Artificial-Intelligence-Alone-Is-Not-Eligible-for-Copyright-Protection-8-30-2023. Accessed December 21, 2023.

18. Dove Press. Editorial Policies/Authorship. 2023. Available from: https://www.dovepress.com/editorial-policies/authorship. Accessed December 21, 2023.

19. Desaire H, Chua AE, Isom M, et al. Distinguishing academic science writing from humans or ChatGPT with over 99% accuracy using off-the-shelf machine learning tools. Cell Rep Phys Sci. 2023;4(6).

20. Hosseini M, Rasmussen L, Resnik DB. Science journals set new authorship guidelines for AI-generated text. Environmental Factor; 2023. Available from: https://factor.niehs.nih.gov/2023/3/feature/2-artificial-intelligence-ethics. Accessed December 15, 2023.

21. Elkhatat AM, Elsaid K, Almeer S. Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text. Int J Educ Integrity. 2023;19(1):17. doi:10.1007/s40979-023-00140-5

22. Fleischmann C, Aritz J. Writing the rules in AI-assisted writing. Nature Mach Intell. 2023;5(5):469. doi:10.1038/s42256-023-00678-6

23. Lovejoy CA, Arora A, Buch V, et al. Key considerations for the use of artificial intelligence in healthcare and clinical research. Future Healthc J. 2022;9(1):75–78. doi:10.7861/fhj.2021-0128

24. Oniani D, Hilsman J, Peng Y, et al. Adopting and expanding ethical principles for generative artificial intelligence from military to healthcare. NPJ Digital Med. 2023;6(1):225. doi:10.1038/s41746-023-00965-x

25. Smeds MR, Mendes B, O’Banion LA, et al. Exploring the pros and cons of using artificial intelligence in manuscript preparation for scientific journals. J Vasc Surg Cases Innov Tech. 2023;9(2):101163. doi:10.1016/j.jvscit.2023.101163

Creative Commons License © 2023 The Author(s). This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution - Non Commercial (unported, v3.0) License. By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms.