Open Access Emergency Medicine, Volume 15

Artificial Intelligence Chatbots and Emergency Medical Services: Perspectives on the Implications of Generative AI in Prehospital Care

Authors Ventura CAI, Denton EE

Received 24 May 2023

Accepted for publication 30 August 2023

Published 7 September 2023, Volume 2023:15, Pages 289–292

DOI https://doi.org/10.2147/OAEM.S420764

Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 2

Editor who approved publication: Dr Hans-Christoph Pape



Christian Angelo I Ventura,1 Edward E Denton2,3

1Department of Health, Behavior and Society, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA; 2Department of Emergency Medicine, College of Medicine, University of Arkansas for Medical Sciences, Little Rock, AR, USA; 3Department of Health Policy and Management, Fay W. Boozman College of Public Health, University of Arkansas for Medical Sciences, Little Rock, AR, USA

Correspondence: Christian Angelo I Ventura, Tel +1 732 372-2141, Email [email protected]

Abstract: Emergency Medical Services (EMS) is likely to experience transformative changes due to the rapid advancements in artificial intelligence (AI), such as OpenAI’s ChatGPT. In this short commentary, we aim to preliminarily explore some profound implications of AI advancements on EMS systems and practice.

Keywords: EMS, artificial intelligence, prehospital, machine learning, EMT, paramedic

As with most other health-care industries, Emergency Medical Services (EMS) is likely to experience transformative changes due to the rapid advancements in artificial intelligence (AI), such as OpenAI’s ChatGPT. These large conversational language models allow users to ask questions and receive answers from AI using everyday speech.1 Two components of programs like ChatGPT are generative AI and natural language processing (NLP), which have the potential to reshape EMS clinical practice and education – creating novel avenues for clinicians, educators, and administrators to redefine the landscape of EMS. Generative AI refers to a branch of AI that focuses on creating new content by learning patterns from training data. NLP, by contrast, refers to the ability of AI systems to understand and process human language, enabling greater efficiency in data entry, documentation, and real-time decision support.2 AI is not only becoming a topic of immense public interest but also one of unforeseeable risk, prompting regulatory scrutiny and contentious debate.3 In this short commentary, we aim to preliminarily explore some profound implications of AI advancements on EMS systems and practice.

The integration of generative AI into prehospital clinical practice holds immense promise for enhancing diagnostic capabilities, as the incorporation of NLP algorithms provides EMS clinicians with access to a vast repository of evidence-based guidelines.4 The breadth of medical knowledge encoded in OpenAI’s models is further evidenced by their demonstrated capacity to pass several medical licensing examinations.5 AI might also support clinical decision-making and point-of-care triage. Box 1 depicts a sample prehospital treatment plan generated using ChatGPT for a fictional case of a 47-year-old female patient with symptomatic bradycardia.

Box 1 Question and Answer Response Using OpenAI GPT-3.5 for a Fictional Prehospital Scenario

Immersive training simulations, enhanced through generative AI, can create dynamic virtual environments that closely replicate real-life emergency scenarios, especially in high-acuity, low-occurrence (HALO) cases. These simulations offer educators and learners unparalleled opportunities to cultivate critical thinking skills, refine decision-making abilities, and enhance clinical competencies through experiential learning. NLP-powered interactive virtual assistants in EMS education also open doors to a new era of personalized learning. These intelligent companions could provide real-time guidance and feedback, addressing learner queries and fostering active engagement and knowledge retention.

A major cause for concern, however, is “hallucinations”, in which AI convincingly generates inappropriate or inaccurate responses. A 2023 study that assessed ChatGPT’s ability to provide cardiovascular disease prevention recommendations found that 18% of responses were flagged as inappropriate for a specific patient.7 Additionally, the chatbot will at times continue to affirm false statements or cite research that does not exist even when questioned further.1 In the scenario featured in Box 1, ChatGPT erroneously recommends administration of 0.5 mg of atropine for bradycardia, whereas the AHA has recommended a dose of 1 mg since 2020.8 This rather small error nonetheless demonstrates how infrequent updates to a model’s underlying information can have a significant impact on evidence-based patient care.
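
The atropine discrepancy suggests one practical safeguard: checking a model-generated drug recommendation against a locally maintained, versioned guideline table before it reaches a clinician. The sketch below is a hypothetical illustration only, not part of any cited study; the table structure, function names, and the single atropine entry (reflecting the 1 mg adult dose the AHA has recommended since 2020) are assumptions for demonstration.

```python
# Hypothetical safety layer: validate an AI-suggested drug dose against a
# locally maintained guideline table before presenting it to a clinician.
# The table below holds a single illustrative entry (2020 AHA adult
# bradycardia recommendation of 1 mg atropine, per the text).

GUIDELINES = {
    "atropine": {"dose_mg": 1.0, "source": "AHA 2020"},
}

def check_dose(drug: str, suggested_mg: float) -> str:
    """Return 'ok' if the suggested dose matches the local guideline,
    otherwise a human-readable warning for the clinician."""
    entry = GUIDELINES.get(drug.lower())
    if entry is None:
        return f"warning: no local guideline for {drug}; verify manually"
    if abs(suggested_mg - entry["dose_mg"]) > 1e-9:
        return (f"warning: suggested {suggested_mg} mg {drug} differs from "
                f"{entry['dose_mg']} mg ({entry['source']})")
    return "ok"
```

Applied to the Box 1 scenario, `check_dose("atropine", 0.5)` would flag the outdated 0.5 mg suggestion, while `check_dose("atropine", 1.0)` would pass; the point of the design is that the guideline table is updated locally on the guideline cycle, independent of the model's training cutoff.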

Addressing inherent biases within AI algorithms is equally critical. Rigorous evaluation, iterative improvements, and ongoing monitoring are imperative to mitigate biases and ensure equitable health-care outcomes for all patients. There is also ongoing debate over which regulatory authorities will ultimately oversee the use of AI in patient care.10 Furthermore, as technology advances, preserving the essential human connection within EMS remains pivotal. The compassion and empathy integral to EMS care must harmonize with technological advancement. In situations where a provider’s clinical judgment disagrees with recommendations provided by AI, current tort law dictates that providers should default to the standard of care.9

The interface between human providers and machine learning tools facilitated through NLP and generative AI presents an interesting area of research, given the recent advancements in open-source programs in conjunction with the wider availability of large EMS datasets. While such tools are still far from suitable for use in clinical care, we can expect to see the private sector expanding the inclusion of these open-source technologies into EMS device research and development, with future models trained on clinically relevant and up-to-date data sets. AI features have become increasingly available in many mainstream email clients and word processors. EMS ePCR systems in particular might be an area where NLP and generative AI will prove helpful. AI chatbots are also currently useful for educators as a tool for generating personalized educational materials.
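
One way NLP could assist ePCR documentation is by pulling structured fields out of a free-text prehospital narrative. The sketch below is a deliberately simplified illustration using pattern matching rather than a trained clinical language model, which a production system would require; the field names and patterns are assumptions for demonstration only.

```python
import re

# Toy illustration of NLP-assisted ePCR documentation: extract structured
# vital signs (heart rate, blood pressure, SpO2) from a free-text
# prehospital narrative. A real system would use a trained clinical
# language model; these regular expressions are for demonstration only.

def extract_vitals(narrative: str) -> dict:
    """Return any vital signs found in the narrative as structured fields."""
    vitals = {}
    hr = re.search(r"(?:HR|heart rate)\D*(\d{2,3})", narrative, re.I)
    bp = re.search(r"(?:BP|blood pressure)\D*(\d{2,3})/(\d{2,3})", narrative, re.I)
    spo2 = re.search(r"(?:SpO2|saturation)[^\d]*(\d{2,3})\s*%", narrative, re.I)
    if hr:
        vitals["heart_rate"] = int(hr.group(1))
    if bp:
        vitals["bp_systolic"] = int(bp.group(1))
        vitals["bp_diastolic"] = int(bp.group(2))
    if spo2:
        vitals["spo2_pct"] = int(spo2.group(1))
    return vitals
```

For example, a narrative such as “47F found bradycardic, HR 38, BP 82/50, SpO2 91% on room air” would yield discrete heart rate, blood pressure, and oxygen saturation fields that an ePCR system could file without manual re-entry.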

Generative AI and NLP advancements present an exciting frontier for EMS clinical practice and education. As clinicians, educators, and researchers within the EMS community, it is our collective responsibility to actively engage in the discourse surrounding these technologies. By embracing the potential of generative AI and NLP while navigating their ethical considerations and integration challenges through research, we have the tools to revolutionize the delivery of prehospital emergency care.

Author Contributions

All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.

Funding

The work was not funded by any specific source.

Disclosure

The authors report no known conflicts of interest, financial or otherwise, in this work. The work is solely that of the authors and does not necessarily represent the views, policies, or opinions of their affiliated institutions, employers, or partners. It was not reviewed or endorsed by any specific institution. The work is an independent publication that was neither funded nor reviewed by the university.

References

1. Eysenbach G. The role of ChatGPT, generative language models, and artificial intelligence in medical education: a conversation with ChatGPT and a call for papers. JMIR Med Educ. 2023;9:e46885. PMID: 36863937; PMCID: PMC10028514. doi:10.2196/46885

2. Voytovich L, Greenberg C. Natural language processing: practical applications in medicine and investigation of contextual autocomplete. Acta Neurochir Suppl. 2022;134:207–214. PMID: 34862544. doi:10.1007/978-3-030-85292-4_24

3. Vearrier L, Derse AR, Basford JB, Larkin GL, Moskop JC. Artificial intelligence in emergency medicine: benefits, risks, and recommendations. J Emerg Med. 2022;62(4):492–499. PMID: 35164977. doi:10.1016/j.jemermed.2022.01.001

4. Mueller B, Kinoshita T, Peebles A, Graber MA, Lee S. Artificial intelligence and machine learning in emergency medicine: a narrative review. Acute Med Surg. 2022;9(1):e740. PMID: 35251669; PMCID: PMC8887797. doi:10.1002/ams2.740

5. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: potential for AI-assisted medical education using large language models. PLOS Digit Health. 2023;2(2):e0000198. PMID: 36812645; PMCID: PMC9931230. doi:10.1371/journal.pdig.0000198

6. OpenAI, GPT-3.5. Available from: chat.openai.com/. Accessed May 17, 2023.

7. Sarraju A, Bruemmer D, Van Iterson E, Cho L, Rodriguez F, Laffin L. Appropriateness of cardiovascular disease prevention recommendations obtained from a popular online chat-based artificial intelligence model. JAMA. 2023;329(10):842–844. PMID: 36735264; PMCID: PMC10015303. doi:10.1001/jama.2023.1044

8. Merchant RM, Topjian AA, Panchal AR, et al; Adult Basic and Advanced Life Support, Pediatric Basic and Advanced Life Support, Neonatal Life Support, Resuscitation Education Science, and Systems of Care Writing Groups. Part 1: executive summary: 2020 American Heart Association Guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation. 2020;142(16_suppl_2):S337–S357. PMID: 33081530. doi:10.1161/CIR.0000000000000918

9. Price WN 2nd, Gerke S, Cohen IG. Potential liability for physicians using artificial intelligence. JAMA. 2019;322(18):1765–1766. PMID: 31584609. doi:10.1001/jama.2019.15064

10. Morley J, Machado CCV, Burr C, et al. The ethics of AI in health care: a mapping review. Soc Sci Med. 2020;260:113172. PMID: 32702587. doi:10.1016/j.socscimed.2020.113172

Creative Commons License © 2023 The Author(s). This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution - Non Commercial (unported, v3.0) License. By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms.