
Reading Between the Lines: Navigating Nuance in Medical Literature to Optimize Clinical Decision-Making and Health Care Outcomes

Authors: Nelinson D, Ko L, Bass BG

Received 26 June 2023

Accepted for publication 12 October 2023

Published 19 October 2023, Volume 2023:14, Pages 1167–1176

DOI https://doi.org/10.2147/AMEP.S427663

Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 4

Editor who approved publication: Dr Md Anwarul Azim Majumder



Donald Nelinson,1,* Lois Ko,2,* Brian G Bass3,*

1Director, US Diabetes/Transplant/Digital Health Publication Lead, General Medicines US Medical, Sanofi, Bridgewater, NJ, USA; 2Post-Doctoral Fellow, US Diabetes, General Medicines, US Medical, Sanofi, Bridgewater, NJ, USA; 3President, Bass Global, Inc, Fort Myers, FL, USA

*These authors contributed equally to this work

Correspondence: Donald Nelinson, General Medicines US Medical, Sanofi, 1084 Cyrus Road, Whitingham, VT, 05361, USA, Tel +1 201-323-5327, Email [email protected]

Abstract: Research-based articles published in medical journals are key to communicating the results of clinical trials, systematic reviews, and meta-analyses, but there are challenges inherent in the communication process. While clinicians rely on the information they read in medical journals to help guide clinical decision-making, most are overwhelmed by the amount of information being published, and many receive only limited training on how to critically assess what they read. This can result in suboptimal clinical decision-making, leading to inefficient use of health care resources, avoidance of interventions that may be indicated and useful, or use of interventions that do more harm than good. A literature search of PubMed® was conducted to answer the question: What are the challenges affecting the interpretation of clinical trial results reported in the medical literature that may adversely affect clinical decision-making and patient outcomes, and how can those challenges be overcome? Results of this review indicate that it remains challenging for readers to fully appreciate the nuances that affect the accuracy, utility, and applicability of reported data, and that opportunities exist for future continuing professional development interventions to address this challenge by giving health care professionals (HCPs) the knowledge and skills to critically evaluate and interpret literature-based information. The objective of this article is to assist new and aspiring clinicians, as well as experienced practitioners, in critically assessing the medical literature so they can stay informed about the latest medical advances in their areas of specialty and interest and confidently and appropriately integrate this knowledge into clinical practice. This article aspires to be a tool for residency and fellowship program directors, clerkship faculty, mentors, and other HCPs engaged in clinical education.

Keywords: continuing professional development, CPD, evaluation, critical assessment, clinical trials, graduate medical education

Introduction

The aim of clinical research is to design and implement ethical studies from which valid and meaningful results can be derived and applied to clinical practice for the purpose of increasing medical knowledge, competence, and performance, and improving health outcomes.1–4 Research-based articles published in medical journals are key to reporting these results,5 but there are challenges inherent in the communication process.

While 95% of physicians want to learn about new trials, treatments, and procedures that can improve clinical outcomes, 68% report feeling overwhelmed by the amount of information they need to access to stay up to date.6 Current approaches to education in evidence-based medicine may contribute to the communication challenge.7

In 2016, Caverly et al reported that many third- and fourth-year medical students; internal medicine residents, internists, and faculty; and clinician-researchers with evidence-based medicine expertise were unable to correctly rank the clinical importance of reported trial outcomes in terms of the “proof” they provided that “a new drug might help people”.8 This inability could result in suboptimal clinical decision-making, potentially leading to inefficient use of health care resources, avoidance of interventions that may be indicated and useful, or, worse, use of interventions that do more harm than good.8

One possible explanation for this disconnect is that education in evidence-based medicine does not adequately prepare physicians to make the best decisions for their patients based on the evidence they read.7 Researchers have found that clinicians and trainees have low to moderate understanding of important clinical trial data concepts.9 Studies that report surrogate endpoints, or composite outcomes comprising multiple endpoints (especially when some of those endpoints are surrogates), can also be misleading.8 Only some family medicine clinicians report receiving formal training on clinical trial data during graduate school, medical school, residency, or fellowship.9 Failure to recognize the limitations of data has been shown to lead to misinterpretations of the clinical utility of those data, even among specialists.10

We conducted a literature search to answer the question: What are the challenges affecting the interpretation of clinical trial results reported in the medical literature that may adversely affect clinical decision-making and patient outcomes, and how can those challenges be overcome? The objective of this article is to assist new and aspiring clinicians, as well as experienced practitioners, in critically assessing the medical literature so they can stay informed about the latest medical advances in their areas of specialty and interest and confidently and appropriately integrate this knowledge into clinical practice to optimize health care outcomes for their patients. This article aspires to be a tool for residency and fellowship program directors, clerkship faculty, mentors, and other health care professionals (HCPs) engaged in clinical education.

Methodology

A literature search of PubMed® was conducted without limitation of publication date to identify empirical studies (preferred) and review articles on the role and influence of the medical literature in clinical decision making and the challenges and limitations of various study design and reporting techniques. Additional literature searches were conducted as necessary to more deeply explore specific topics revealed by the initial searches. The resulting articles were reviewed and synthesized by the authors to identify the challenges in assessing published clinical trial results and how these challenges can be effectively met.

Opportunities for Education

Understand the Utility and Limitations of the Various Methods by Which Medical Knowledge is Disseminated Outside the Classroom

It is difficult for HCPs to keep up with the ever-increasing volume of information that is available,11–14 but that problem is not new.15,16 In 1954, practitioners were having difficulty keeping up with ~400 English language medical journals.15 Today, the National Library of Medicine lists ~300,000 publications in PubMed,17 and between January 2000 and January 2022, MEDLINE cited an average of >300,000 newly published articles annually in the United States alone.18

Medical publishers are seeking new ways to make the information they communicate to HCPs more accessible, digestible, and memorable.11,19–24 Innovations such as plain-language or lay summaries, video/graphic/animated abstracts, talking-head and procedural videos, author interviews, audio slides, infographics, webinars, and podcasts are being incorporated into publishers’ offerings to aid in the visualization and interpretation of data.19–23

HCPs also get information from social media channels, including Facebook, Twitter, Instagram, YouTube, and podcasts. Social media deliver messages in sound bites that are easy for audiences to digest.25 But sound bites cannot provide the depth of information needed to promote critical evaluation or to foster the deeper understanding necessary for clinical decision-making. Social media are also infamous for spreading misleading and false information.25 With the emergence of CME-certified tweetorials, the need for cautious consideration is amplified.26

Digital enhancements are just that—enhancements. The journal article is still the foundational published resource upon which the enhancements are constructed, and where readers will find the details they need to assess whether the study results can, and should, support sound clinical decision-making.13,16,27

Know How to Critically Evaluate and Interpret Literature-Based Information

Following is a guide to help HCPs in training and in clinical practice to strategically select the most pertinent articles for optimal clinical decision-making, and to derive more value from those articles. For educators, the following information identifies areas in which more knowledge is needed to empower HCPs to critically evaluate what they read and reap the many benefits the medical literature has to offer.

Choose Appropriate Resources

Hundreds of thousands of citations are added to the library of medical literature each year.18 This is where selectivity comes in. It is not difficult to find tips on how to create strategies and systems to tame the jungle of medical literature.11–14,28,29 Distillation services that scan for topics of interest and provide alerts or daily, weekly, or biweekly summaries are often recommended.11,14,28,29 However, this may reduce the likelihood that the reader will be motivated to critically consider the sources of information contained in those summaries.

We recommend the following when choosing appropriate sources from which to derive information that will be synthesized into clinical recommendations:

● Identify the leading journals in the field of interest

Identifying the top journals in a particular field of interest is not always straightforward. This is where impact factors, acceptance rates, altmetrics, the EMPIRE Index, and recommendations from colleagues can help.

 ○ Impact factor is a measure of the average number of times articles in a journal have been cited by other authors.30 While it may correlate with a journal’s reliability or prestige and suggest that the journal is of high quality, impact factor is not itself a measure of quality, nor is it necessarily a measure of a journal’s popularity with readers. Keep these limitations in mind (a minimal worked example of this metric and the acceptance rate follows this list).

 ○ Acceptance rate is most commonly defined as the number of manuscripts accepted by a journal divided by the number of manuscripts submitted to that journal.31 It can serve as a proxy for the perceived prestige of, and demand for, a particular journal relative to its availability.31 However, acceptance rate does not identify the reasons manuscripts have been rejected, which may have more to do with a manuscript’s applicability to the journal’s editorial mission than with its quality or scientific rigor.31 Consider as well that the most prestigious journal in a given field may have a low acceptance rate simply because so many authors want to publish in it.

 ○ Altmetrics are a diverse set of alternative metrics and qualitative data that complement traditional citation-based metrics.32 Altmetric indicators include record of attention (the number of people exposed to and engaged with a scholarly output), measure of dissemination (where a piece of research is being discussed in scholarly and public venues such as in the news, social media, and blogs), and indicator of influence and impact (where and how a piece of research is affecting a field of study, public health, or society).32

 ○ EMPIRE (EMpirical Publication Impact and Reach Evaluation) Index is a value-based, multicomponent metric framework for measuring the impact of medical publications.33 It comprises three component scores incorporating related altmetrics to indicate social, scholarly, and societal engagement with a publication.33

 ○ Recommendations from a colleague, peer, or mentor on which journal(s) they find most useful can be helpful and can provide appreciable savings of time and effort.
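For readers who want to see how the first two of these metrics are derived, the following minimal Python sketch works through the arithmetic using purely hypothetical counts; the function names and the numbers are illustrative assumptions, not data from any actual journal.

def impact_factor(citations_to_prior_two_years: int, citable_items_prior_two_years: int) -> float:
    """One common (two-year) formulation: citations received this year to items
    published in the prior two years, divided by the number of citable items
    published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years


def acceptance_rate(accepted: int, submitted: int) -> float:
    """Manuscripts accepted divided by manuscripts submitted."""
    return accepted / submitted


# Hypothetical journal: 1200 citations to 400 citable items -> impact factor of 3.0
print(round(impact_factor(1200, 400), 1))

# Hypothetical journal: 150 acceptances out of 1000 submissions -> 15% acceptance rate
print(f"{acceptance_rate(150, 1000):.0%}")

Neither number says anything about the quality of an individual article, which is why these metrics should be read alongside the qualitative considerations described above.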

● Check for Retracted Articles

A recent cross-sectional study found 46 retracted articles on COVID-19, most of which had been published in scientific journals and more than half of which remained available as original, unmarked documents despite their retraction.34 Retractions attempt to remove false or erroneous information from the medical literature, but they do little to erase the impact of the retracted work. Retracted articles may even be cited more often after they have been retracted than before.35 Dissemination of false or misleading information undermines confidence in the scientific community and can have potentially dangerous implications.34–36 Readers must check regularly with the publishers of articles they have read to learn whether an article has been retracted. A faster and easier way is to subscribe to the Retraction Watch database, a searchable database that currently includes more than 38,000 retracted articles.37 Subscribing to this database would enable HCPs to avoid retracted articles that might misguide their clinical decision-making.
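As a practical illustration of the routine check described above, the following Python sketch (a minimal example, not a definitive workflow) screens a list of DOIs against a locally saved export of retracted articles. The file name retractions.csv and its doi column are assumptions made for this example; a real export from a retraction database may use different file and column names.

import csv

def is_retracted(doi: str, retraction_csv: str = "retractions.csv") -> bool:
    """Return True if the DOI appears in a locally saved list of retracted articles.

    Assumes a CSV file (hypothetical name and layout) with a column called 'doi'.
    """
    target = doi.strip().lower()
    with open(retraction_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("doi", "").strip().lower() == target:
                return True
    return False

# Example usage (assumes retractions.csv has been downloaded locally):
# for ref in ["10.1234/example.123", "10.5678/example.456"]:
#     print(ref, "retracted:", is_retracted(ref))

Automating the check in this way does not replace reading the retraction notice itself, which explains why an article was withdrawn.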

● Avoid Paper Mills and Predatory Journals and Publishers

Retracted articles often originate from so-called paper mills.36 Paper mills are unscrupulous companies engaged in the large-scale production and publication of scientifically questionable articles and in other unethical practices.36,38 They sell authorships to researchers, academics, and students; fabricate databases; and falsify peer reviews.36 A research report on paper mills from the Committee on Publication Ethics (COPE) and the Association of Scientific, Technical and Medical Publishers (STM) concluded that submission of suspected fake research papers is a growing problem; the report also provides practical help to HCPs in determining whether an article may have been generated by a paper mill.39

Predatory journals and publishers have been defined as “…entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices”.40 They do a great disservice to the scientific community, to HCPs who may rely on, and make clinical decisions based on, the information they provide, and to the public. Beall’s List of potential predatory journals and publishers is a valuable tool to help HCPs identify articles whose content may be questionable due to predatory publishing practices.41

● Understand the Differences and Similarities of Traditional and Open-Access Publishing

In the traditional model, a publisher’s revenue is generated by subscriptions or association membership, by fees charged to individuals for access to specific articles, and by advertising.42 In the open-access model, articles are available to everyone without fee, and the publisher’s revenue is generated by an article processing charge (APC) paid by the author or funder.42 Although the legitimacy of open access has been questioned, high-quality open-access journals delivered via the publisher’s platform (gold open access) and holding the Directory of Open Access Journals (DOAJ) Seal are comparable to similar high-quality traditionally published journals.43,44 HCPs should always consider the quality of the journal in which an article appears, whether it is published traditionally or open access.

● Be Aware and Wary of Artificial Intelligence (AI)

AI is here to stay, and it is already making inroads into medical publishing.45–50 Although still in its infancy, the technology is likely to mature quickly; by design and intent, it teaches itself.48 But AI authoring is not without problems, both technical and ethical.45–50 Pharmaceutical companies are experimenting with AI to handle translations and to draft clinical study reports.51 However, ethicists and publishers oppose AI authorship because of its lack of accountability and its potential to produce fraudulent and inaccurate documents.46–50 Publishers that currently allow some degree of AI use stipulate that these technologies cannot be listed as authors and that their use must be disclosed.46,48–50 HCPs should watch for AI involvement in the articles they read and consider the potential for bias and inaccuracy that these tools introduce.

Assess Bias

Despite the peer review process, significant errors, both unintentional and intentional, can make their way into the published medical literature.52 It is therefore incumbent upon HCPs who read and rely on these articles to be ever vigilant. Bias, whether introduced in the design or execution of a clinical trial (methodological bias) or in the reporting of clinical trial results (reporting bias), can influence the perceived utility of a treatment, mislead prescribers, and lead to suboptimal clinical decisions.53,54 Here we discuss several biases that can potentially skew the results of a clinical trial and/or how HCPs perceive those results.

● Methodological Bias

When reading an article reporting the results of a clinical trial, HCPs should consider whether, and if so, how, the study design (eg, randomized controlled trial versus retrospective/real-world evidence, open-label versus blinded, cross-over versus parallel treatment arms, comparators), participant selection criteria, randomization procedures, or endpoints and outcome measures may have influenced the results.54–59

When reading the medical literature, HCPs should ask themselves:

 ○ What is the question to be answered, and does the study design appropriately explore that question?58

 ○ How might bias have influenced the study design in terms of the choice of intervention or comparator, the study outcomes, and the choice of analysis?59

 ○ Might any performance biases have been introduced while the study was being conducted, such as investigators or participants being aware of the outcome measures?54,56

 ○ During the study, might deviations from the study protocol have affected the results?55,56

 ○ Could one study variable have influenced another study variable?58

 ○ How might study discontinuations have influenced the results, and was enrollment sufficient for the results to be clinically and/or statistically meaningful?54,56,60

Clinical trial registration in a publicly accessible register such as ClinicalTrials.gov has been associated with a lower risk of bias, and HCPs can employ the Cochrane Risk of Bias (RoB 2) assessment tool to assess the risk of bias in randomized controlled trials.54–56 Numerous ways to limit or reduce the risk of methodological bias in clinical trials have been proposed.52–55,59,60

● Reporting Bias

 Reporting bias occurs when the outcomes of a trial affect how those results are reported.53 When reading the medical literature, HCPs should consider such things as whether the data reported align with the outcome measures identified in the study methodology, or whether only favorable results may have been cherry-picked for inclusion because they support the interests (financial or otherwise) of the study sponsor or the authors.53,55–57,59 HCPs should also consider whether the statistical methodology was selected to show the data in the best light and whether the data may have been manipulated to show statistical significance.52,59 Other reporting biases include publication bias, which may favor certain types of research over others or favor studies that report positive outcomes;57,59 distorting or misrepresenting results to make them appear more positive;56,59 allowing the study results to influence journal selection, the number of journals in which the results will be published, or the language in which the results will be published;57 and allowing the results to determine whether the authors or study sponsor will seek rapid publication, normal or delayed publication, or decide not to publish at all.53,56,57,59

● Conflict of Interest

 Conflict of interest (COI) can create biases that affect every stage of the clinical trial process, from study design through results reporting.52 Study sponsors, investigators, or authors could potentially be biased by self-interest; but as Hirsch suggested in 2009, journal editors also have the potential to inject bias into medical publishing by applying different standards to authors who are affiliated with drug companies (presuming inherent bias) versus authors who are presumably independent of drug company influence.61,62 HCPs should consider how COI statements, authors’ affiliations, and funding sources may influence the article they are about to read.

Study Limitations

Although often not reported by study authors and frequently overlooked by readers, the statement of study limitations in a research article identifies potential weaknesses that may have influenced the outcomes and conclusions of the study.63 Study limitations place results in their proper context, allow the reader to critically assess the value of the reported outcomes to clinical practice, and help ensure transparency on the part of the researchers and the research process. HCPs should be educated to spend at least as much time considering the limitations of a study as they spend considering its results.

Navigate the Sea of Statistics

● P-values

 Often overemphasized in the reporting of research, p-values may not be as important, or as reliable, a gauge of statistical significance as they were once considered to be.64 The American Statistical Association notes several limitations to the p-value that HCPs should consider when reading the medical literature.64 A p-value alone is not a statement about the probability that a study hypothesis is true or that the data were produced by random chance alone.64 Neither does a p-value measure the size of an effect or the importance of a result.64 When reading the medical literature, HCPs should consider whether specific results have been cherry-picked for their statistical significance.64 Authors should be transparent and thorough in their reporting of all hypotheses explored, all data collection decisions made, and all statistical analyses conducted.64
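To make the distinction between statistical significance and effect size concrete, the following Python sketch (using SciPy’s two-sample t-test computed from summary statistics) contrasts two invented trials: a very large trial with a trivial treatment effect that nonetheless yields a small p-value, and a small trial with a sizable effect that does not. All numbers are hypothetical and chosen only to illustrate the point, made in the ASA statement, that a p-value does not measure the size of an effect.64

from scipy import stats

def cohens_d(mean_a: float, mean_b: float, sd: float) -> float:
    """Standardized effect size, assuming (for simplicity) equal SDs in both arms."""
    return (mean_a - mean_b) / sd

# Hypothetical mega-trial: negligible effect, enormous sample size
_, p_large = stats.ttest_ind_from_stats(mean1=100.5, std1=10, nobs1=20000,
                                         mean2=100.0, std2=10, nobs2=20000)

# Hypothetical small trial: clinically meaningful effect, small sample size
_, p_small = stats.ttest_ind_from_stats(mean1=105.0, std1=10, nobs1=20,
                                         mean2=100.0, std2=10, nobs2=20)

print(f"Mega-trial:  p = {p_large:.4g}, Cohen's d = {cohens_d(100.5, 100.0, 10):.2f}")   # tiny p-value, negligible effect
print(f"Small trial: p = {p_small:.4g}, Cohen's d = {cohens_d(105.0, 100.0, 10):.2f}")   # larger effect, p not "significant"

The contrast shows why a small p-value should not be read as evidence of a clinically meaningful effect, and why a larger one should not be read as evidence of no effect.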

● Causation Versus Association

 Treatments lead to consequences: some beneficial, some adverse, some intended, some unintended. Although an association may be observed between a treatment and a particular outcome, this does not mean the outcome was caused by the treatment. As studies conducted during the COVID-19 pandemic most recently demonstrated, factors such as study design, sample size, and randomization can make it impossible to determine whether an outcome was caused by a treatment or merely associated with it.65 HCPs must be educated to critically consider whether published results support a causal relationship or merely an observed association between the treatment and the outcome.63,65

Limitations

This article is limited by the accessibility of studies published in the medical literature and the authors’ interpretation of those studies.

Conclusion

HCPs rely on an overwhelming and ever-increasing volume of medical literature throughout their careers to help inform and guide clinical decision-making. Journal articles reporting the results of clinical trials are an important source for guidance, but they are not the only source. While digital enhancements and social media serve to make the medical literature more digestible, HCPs must ultimately refer to the articles that report detailed study design and methodology, results, limitations, COI, and acknowledgements for full context. Still, it can be challenging for readers to fully appreciate the myriad nuances that can affect the accuracy, utility, and applicability of reported data.

With more than 300,000 articles being published in medical journals annually, plus snippets and sound bites delivered via social and other media channels, it is understandable that HCPs could be overwhelmed by the volume of information they may feel compelled to digest in order to make sound clinical decisions and recommendations for their patients. Narrowing the choices to those that are most pertinent and appropriate can be accomplished by using impact factors, acceptance rates, altmetrics, the EMPIRE Index, and the recommendations of colleagues; being attentive to retractions; avoiding paper mills and predatory journals and publishers; and considering journal quality whether a journal is published traditionally or open access. Despite the many checks and balances in place to ensure accurate and ethical reporting of clinical trial results, readers must be vigilant in considering the potential for errors as well as unintentional and intentional methodological and reporting biases, conflicts of interest, and study limitations. Reliance on p-values alone as an indicator of statistical significance may lead to incorrect conclusions about a treatment’s impact, and numerous factors can blur the assessment of an observed versus a causal relationship between a treatment and an outcome.

Future continuing professional development interventions can address these challenges and thus help to improve patient care and health outcomes by educating HCPs about the utility and limitations of the various methods by which medical information is disseminated, and by giving HCPs the knowledge to critically evaluate and interpret literature-based information. Likewise, HCPs can help themselves by seeking out only the most reputable and relevant sources of information, identifying and avoiding articles that have been retracted and articles that have been published by paper mills and predatory journals and publishers, and by being vigilant in the assessment of biases, conflicts of interest, study limitations, and the use of statistics in the reporting of clinical trial results. A list of resources has been provided to aid in this pursuit (Box 1).

Box 1 Resources for Assessing the Validity and Utility of Published Research Articles

Lessons for Practice

  • Health care professionals rely on published reports of clinical trials to guide medical practice.
  • Results of clinical trials are influenced by a range of study design and reporting variables, the nuances, impact, interpretation, and application of which are rarely taught within or beyond formal training environments.
  • Continuing professional development interventions can address this challenge and thus help to improve patient care and health outcomes by educating health care professionals about the utility and limitations of the various methods by which medical information is disseminated, and by providing the knowledge necessary to critically evaluate and interpret literature-based information.

Implications for Future Study

Extending the work of Caverly et al,8 it would be beneficial to explore the ability of trainees and attendings in various subspecialties (eg, family medicine, surgery, internal medicine) to critically interpret clinical trial results reported in the medical literature.

Funding

Funding for this article was provided by the authors.

Disclosure

The authors report no conflicts of interest in this work.

References

1. Moore DE, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29(1):1–15. doi:10.1002/chp.20001

2. National Institutes of Health. Clinical Center. Ethics in clinical research. Ethical Guidelines. Available from: https://clinicalcenter.nih.gov/recruit/ethics.html. Accessed February 6, 2023.

3. National Institute on Aging. National Institutes of Health. What are clinical trials and studies? Available from: https://www.nia.nih.gov/health/what-are-clinical-trials-and-studies. Accessed February 6, 2023.

4. World Health Organization. Clinical trials. Available from: https://www.who.int/health-topics/clinical-trials#tab=tab_1. Accessed February 6, 2023.

5. International Society for Medical Publication Professionals. ISMPP Issues and Actions Committee. The rationale and value of medical publications. Available from: https://www.ismpp.org/assets/docs/Inititives/advocacy/the_rationale_and_value_of_medical_publications.pdf. Accessed February 6, 2023.

6. Doximity. Physician learning preferences. A Doximity report; 2022. Available from: https://assets.doxcdn.com/image/upload/pdfs/physician-learning-report-2022.pdf. Accessed March 10, 2023.

7. Korenstein D. Blinding them with science? Evidence-based medicine as a barrier to health care value. J Grad Med Educ. 2016;8(1):106–108. doi:10.4300/JGME-D-15-00570.1

8. Caverly TJ, Matlock DD, Prochazka AV, Lucas BP, Hayward RA. Interpreting clinical trial outcomes for optimal patient care: a survey of clinicians and trainees. J Grad Med Educ. 2016;8(1):57–62. doi:10.4300/JGME-D-15-00137.1

9. Moynihan CK, Burke PA, Evans SA, O’Donoghue AC, Sullivan HW. Physicians’ understanding of clinical trial data in professional prescription drug promotion. J Am Board Fam Med. 2018;31(4):645–649. doi:10.3122/jabfm.2018.04.170242

10. Boudewyns V, O’Donoghue AC, Paquin RS, Aikin KJ, Ferriola-Bruckenstein K, Scorr VM. Physician interpretation of data of uncertain clinical utility in oncology prescription drug promotion. Oncologist. 2021;26(12):1071–1078. doi:10.1002/onco.13972

11. Kamtchum-Tatuene J, Zafack JG. Keeping up with the medical literature: why, how, and when? Stroke. 2021;52(11):e746–e748. doi:10.1161/STROKEAHA.121.036141

12. Manley M, Maldonado M, Hall A, Barrett E Drinking from the fire hose of emerging medical literature. Hospitalist; 2022. Available from: https://www.the-hospitalist.org/hospitalist/article/31969/career/keeping-up-with-medical-literature/. Accessed February 17, 2023.

13. Quan MA, Newton WP. Helping family physicians keep up to date: a next step in pursuit of mastery. J Am Board Fam Med. 2020;33(Suppl):S24–S27. doi:10.3122/jabfm.S1.200154

14. Berger R, Ramaswami R. Keeping up with the medical literature. N Engl J Med Resident. 2014:360.

15. Flaxman N. How to keep up with medical literature. JAMA. 1954;154(17):1409–1410. doi:10.1001/jama.1954.02940510009004

16. Haynes RB, McKibbon KA, Fitzgerald D, Guyatt GH, Walker CJ, Sackett DL. How to keep up with the medical literature: i. Why try to keep up and how to get started. Ann Intern Med. 1986;105(1):149–153. doi:10.7326/0003-4819-105-1-149

17. National Library of Medicine. National Institutes of Health. List of all journals cited in PubMed. Available from: https://www.nlm.nih.gov/bsd/serfile_addedinfo.html. Accessed February 17, 2023.

18. National Institutes of Health. MEDLINE citation counts by year of publication (as of January 2022). Available from: https://www.nlm.nih.gov/bsd/medline_cit_counts_yr_pub.html. Accessed February 17, 2023.

19. The Lancet. Graphical abstracts. Available from: https://www.thelancet.com/infographics/graphical-abstracts. Accessed February 16, 2023.

20. Bredbenner K, Simon SM. Video abstracts and plain language summaries are more effective than graphical abstracts and published abstracts. PLoS One. 2019;14(11):e0224697. doi:10.1371/journal.pone.0224697

21. Springer Healthcare. Adis digital features—Springer Healthcare. Available from: https://springerhealthcare.com/expertise/publishing-digital-features/. Accessed February 16, 2023.

22. McMahon GT, Ingelfinger JR, Campion EW. Videos in Clinical Medicine—A new Journal feature. N Engl J Med. 2006;354(15):1635. doi:10.1056/NEJMe068044

23. Fonseca P. Digital enhancements for primary medical manuscripts: a survey on perceptions, challenges, and needs of medical publication professionals. AMWA J. 2021;36(3):110–114. doi:10.55752/amwa.2021.46

24. Power EGM. Considerations for effective communication of medical information. Pharm Med. 2023;37(2):97–101. doi:10.1007/s40290-023-00461-3

25. Arora Y, Llaneras N, Arora N, Carillo R. Social media and physician education. Cureus. 2021;13(10):e19081. doi:10.7759/cureus.19081

26. Mishra B, Saini M, Doherty CM, et al. Use of twitter in neurology: boon or bane? J Med Internet Res. 2021;23(5):e25229. doi:10.2196/25229

27. Yager J, Dubovsky SL, Roy-Byrne PP. Keeping up with the psychiatric literature: a survival guide. Psychother Psychosom. 2021;90(6):359–364. doi:10.1159/000517867

28. Shaughnessy AF. Keeping up with the medical literature: how to set up a system. Am Fam Physician. 2009;79(1):25–26.

29. Wu B. Keeping up with medical knowledge: how to stay on top of medical advances. Health. 2020.

30. Geiselmann M, Bitterman AD What is the significance of the impact factor on medical publishing? StatPearls; 2021. Available from: https://www.statpearls.com/ExamPrep/medical-student-resources/what-is-The-significance-of-The-impact-factor-on-medical-publishing. Accessed February 17, 2023.

31. Herbert R. Accept me, accept me not: what do journal acceptance rates really mean? ICSR Perspectives. 2019.

32. Altmetric.com. What are altmetrics? Available from: https://www.altmetric.com/about-altmetrics/what-are-altmetrics/. Accessed February 20, 2023.

33. Pal A, Rees TJ. Introducing the EMPIRE Index: a novel, value-based metric framework to measure the impact of medical publications. PLoS One. 2022;17(4):e0265381. doi:10.1371/journal.pone.0265381

34. Frampton G, Woods L, Scott DA. Inconsistent and incomplete retraction of published research: a cross-sectional study on COVID-19 retractions and recommendations to mitigate risks for research, policy and practice. PLoS One. 2021;16(10):e0258935. doi:10.1371/journal.pone.0258935

35. Retraction Watch. Top 10 most highly cited retracted papers. Available from: https://retractionwatch.com/the-retraction-watch-leaderboard/top-10-most-highly-cited-retracted-papers/. Accessed February 20, 2023.

36. Candal-Pedreira C, Ross JS, Ruano-Ravina A, Egilman DS, Fernández E. Retracted papers originating from paper mills: cross sectional study. BMJ. 2022;379:e071517. doi:10.1136/bmj-2022-071517

37. Retraction Watch. Available from: https://retractionwatch.com/. Accessed February 20, 2023.

38. Else H, Van Noorden R. The battle against paper mills. Nature. 2021;591:516–519. doi:10.1038/d41586-021-00733-5

39. Committee on Publication Ethics and the Association of Scientific, Technical and Medical Publishers. Paper mills. Research report from COPE & STM. Available from: https://publicationethics.org/sites/default/files/paper-mills-cope-stm-research-report.pdf. Accessed February 20, 2023.

40. Grudniewicz A, Moher D, Cobey KD, et al. Predatory journals: no definition, no defence. Nature. 2019;576(7786):210–212. doi:10.1038/d41586-019-03759-y

41. Beall’s List. Beall’s list of potential predatory journals and publishers. 2021. Available from: https://beallslist.net/. Accessed February 20, 2023.

42. Nature.com. Publishing options. Available from: https://www.nature.com/nature/for-authors/publishing-options. Accessed February 20, 2023.

43. Björk B-C, Solomon D. Open access versus subscription journals: a comparison of scientific impact. BMC Med. 2012;10(1):73. doi:10.1186/1741-7015-10-73

44. Rodrigues RS, Abadal E, de Araújo BKH. Open access publishers: the new players. PLoS One. 2020;15(6):e0233432. doi:10.1371/journal.pone.0233432

45. Flanagin A, Bibbins-Domingo K, Berkwitz M, Christiansen SL. Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA. 2023;329(8):637–639. doi:10.1001/jama.2023.1344

46. Committee on Publication Ethics. Authorship and AI tools. COPE position statement. Available from: https://publicationethics.org/cope-position-statements/ai-author. Accessed March 14, 2023.

47. Committee on Publication Ethics. Artificial intelligence (AI) and fake papers. Available from: https://publicationethics.org/resources/forum-discussions/artificial-intelligence-fake-paper. Accessed March 14, 2023.

48. World Association of Medical Editors. Chatbots, chatGPT, and scholarly manuscripts. WAME recommendations on ChatGPT and chatbots in relation to scholarly publications. Available from: https://wame.org/page3.php?id=106. Accessed March 14, 2023.

49. Nature Portfolio. Authorship. Available from: https://www.nature.com/nature-portfolio/editorial-policies/authorship. Accessed March 14, 2023.

50. Elsevier. Publishing ethics for editors. Duties of authors. The use of AI and AI-assisted technologies in scientific writing. Available from: https://www.elsevier.com/about/policies/publishing-ethics#Authors. Accessed March 14, 2023.

51. Loten A Uncertain economy spurs growth in AI-powered office automation. Companies strive to fuel growth without adding to payrolls, corporate technology chiefs say. Available from: https://www.wsj.com/articles/uncertain-economy-spurs-growth-in-ai-powered-office-automation-11675282156. Accessed March 10, 2023.

52. Orlando FA, Governale KM, Estores IM. Appraising important medical literature biases: uncorrected statistical mistakes and conflicts of interest. Front Med. 2022;9:925643. doi:10.3389/fmed.2022.925643

53. Mitra-Majumdar M, Kesselheim AS. Reporting bias in clinical trials: progress toward transparency and next steps. PLoS Med. 2022;19(1):e1003894. doi:10.1371/journal.pmed.1003894

54. Lindsley K, Fusco N, Li T, Scholten R, Hooft L. Clinical trial registration was associated with lower risk of bias compared to non-registered trials among trials included in systematic reviews. J Clin Epidemiol. 2022;145:164–173. doi:10.1016/j.clinepi.2022.01.012

55. Sterne JAC, Savović J, Page MJ, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366:l4898. doi:10.1136/bmj.l4898

56. Phillips MR, Kaiser P, Thabane L, Bhandari M, Chaudary V; and for the Retina Evidence Trials InterNational Alliance (R.E.T.I.N.A) Study Group. Risk of bias: why measure it, and how? Eye. 2022;36(2):346–348. doi:10.1038/s41433-021-01759-9

57. Boutron I, Page MJ, Higgins JPT, et al. Ch. 7: considering bias and conflicts of interest among the included studies. In: Higgins JPT, Thomas J, Chandler J, editors. Cochrane Handbook for Systematic Reviews of Interventions. Vol. 2. 2013.

58. Ranganathan P, Aggarwal R. Study designs: part I—an overview and classification. Perspect Clin Res. 2018;9(4):184–186. doi:10.4103/picr.PICR_124_18

59. Bradley SH, De Vito NJ, Lloyd KE, et al. Reducing bias and improving transparency in medical research: a critical overview of the problems, progress and suggested next steps. J R Soc Med. 2020;113(11):433–443. doi:10.1177/0141076820956799

60. Li Y, Izem R. Novel clinical trial design and analytic methods to tackle challenges in therapeutic development in rare diseases. Ann Transl Med. 2022;10(18):1034. doi:10.21037/atm-21-5496

61. Hirsch LJ. Conflicts of interest, authorship, and disclosures in industry-related scientific publications: the tort bar and editorial oversight of medical journals. Mayo Clin Proc. 2009;84(9):811–821. doi:10.4065/84.9.811

62. Lanier WL. Bidirectional conflicts of interest involving industry and medical journals: who will champion integrity? Mayo Clin Proc. 2009;84(9):771–775. doi:10.4065/84.9.771

63. Ross PT, Zaidi NLB. Limited by our limitations. Perspect Med Educ. 2019;8:261–264. doi:10.1007/s40037-019-00530-x

64. Wasserstein RL, Lazar NA. The ASA statement on p-values: context, process, and purpose. Am Stat. 2016;70(2):129–133. doi:10.1080/00031305.2016.1154108

65. Osborne V, Shakir SAW. What is the difference between observed association and causal association, signals and evidence? Examples related to COVID-19. Front Pharmacol. 2021;11:569189. doi:10.3389/fphar.2020.569189

Creative Commons License © 2023 The Author(s). This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution - Non Commercial (unported, v3.0) License. By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms.