Advances in Medical Education and Practice, Volume 14

The Effect of Expert Patient Simulation on Clinical Judgment: A Quasi-Experimental Study

Authors: Shinde S, Tiruneh F, Fufa DA

Received 13 January 2023

Accepted for publication 3 July 2023

Published 17 July 2023 Volume 2023:14 Pages 783–790

DOI https://doi.org/10.2147/AMEP.S402610

Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 5

Editor who approved publication: Dr Md Anwarul Azim Majumder



Sanjay Shinde, Firew Tiruneh, Dinaol Abdissa Fufa

Department of Midwifery, Mizan Tepi University, Mizan Teferi, Ethiopia

Correspondence: Firew Tiruneh, Email [email protected]

Background: Worldwide, quality education is one of the most important tools for improving healthcare quality. Healthcare practitioners must be competent in clinical judgement to meet clients’ needs, yet poor clinical judgment accounts for almost one-third of all patient problems in health care. Expert patient simulation has been used as a training method for clinical judgement skills; however, the empirical evidence on its effectiveness is mixed, with the method proving effective in some settings but not in others.
Objective: To examine the effect of expert patient simulation on the clinical judgment skill of health science students of Mizan-Tepi University.
Methods: A pre-test/post-test quasi-experimental design was used with 92 randomly selected samples from the graduating cohort of midwifery students. Tools included the Creighton Competency Evaluation Instrument (C-CEI®), the Learning Satisfaction and Self-Confidence Questionnaire, and the Kolb Learning Style Inventory (LSI). The Wilcoxon signed-rank test was used to compare self-confidence scores between the intervention and control groups of students, and the paired-sample t-test was used to compare clinical judgment scores. Cohen’s d was used to assess effect size, and Spearman correlation was used to explore the association.
Results: Clinical decision-making ability and self-confidence measures showed statistically and practically significant differences between pre- and post-simulation assessments. There was a mean difference of 2.28 (95% CI, 1.78–2.79), t(45) = 9.13, p < 0.001, and an effect size of 1.3, p < 0.001. Pre- and post-simulation self-confidence measures showed statistically significant improvement after simulation (W = 1, Z = −3.57, p < 0.001). A statistically significant moderate positive correlation (r = 0.419, p = 0.004) was also found.
Conclusion: The study found that human expert patient simulation is a highly effective clinical training technique for improving students’ clinical decision-making competency and self-confidence.

Keywords: effect, patient, simulation, clinical judgment, health science students


A Letter to the Editor has been published for this article.


Introduction

Worldwide, education is considered a human right and one of the most important tools to improve health and reduce poverty (https://www.worldbank.org/en/topic/education/overview). Among the 17 sustainable development goals (SDGs), education quality and equity have been selected as agenda goals.1 In low- and middle-income countries, there has been great progress in primary education and, to a lesser extent, in access to higher education. However, the quality of education in higher institutions remains a critical challenge. The poor educational quality of students studying midwifery has been linked to poor quality of health care services.2

Constant change characterizes the modern healthcare system. Healthcare practitioners must be competent in order to be effective in this dynamic, complicated system and to help clients obtain favourable outcomes. Indeed, professionals must develop accurate and credible clinical judgment skills.3 Clinical judgment is a difficult skill to master. It is essential in clinical scenarios that are by definition ambiguous and rife with value conflicts between individuals with opposing agendas. Almost all health practitioners consider clinical judgment a vital ability.4

Diagnostic and management errors are prevalent in health care; diagnostic mistakes account for almost one-third of all patient complications. Improving healthcare personnel’s diagnostic and clinical judgment skills as they advance through their training is part of the solution.5 Health science students are required to master evidence-based medicine and analytical and clinical judgment skills throughout their degree.5 Clinical judgment has been highlighted as a vital skill in health care education for more than 50 years.6 Good clinical decision-makers have habits of mind that include confidence, contextual perspective, creativity, adaptability, inquisitiveness, intellectual integrity, intuition, open-mindedness, perseverance, and introspection.7

Critical thinking is used to generate sound clinical judgment. Critical thinking refers to the ability to think clearly and rationally about a specific clinical scenario in order to reach an appropriate clinical diagnosis. Until recently, however, most health professional education programmes did not explicitly teach or test critical thinking.8

Problem-based learning, case studies, group discussions, self-reflection, concept mapping, and simulation are just a few of the educational tools for teaching critical thinking. The degree to which simulation approaches mimic reality ranges from low to high. Low-fidelity simulation includes anatomical models and peer-to-peer learning.10

Full-scale or high-fidelity manikins, often known as computerized manikins, can be built to mimic vital signs.9 These are sophisticated life-like manikins that react physiologically as if they were alive. They allow learners to practice clinical judgment and problem-solving skills, two major determinants of good healthcare worker education.10 However, they are prohibitively expensive and largely unaffordable in developing countries.11

Expert patients have been utilized in medical education and research for over 20 years, particularly in developing countries. They are lay persons thoroughly coached to accurately portray a specific patient during the history and physical examination.12 Human simulation, widely included in international undergraduate nursing curricula and incorporating active learning applicable to nursing and midwifery, may be an instructional technique for reaching these aims. Several nursing research papers have investigated the efficacy of simulation-based instructional interventions.13 The reported effectiveness has varied with the fidelity level of the simulators and the context of the outcome variables in a given scenario.14

Methods

Study Area and Period

Mizan-Tepi University is a second-generation university in Ethiopia, located in the south-west of the country, with one campus in Mizan Teferi and another in Tepi town. The university was established in 2006 (Gregorian calendar) and in 2020 ranked 24th nationally and 12,173rd globally. The study was conducted in the university’s faculty of medicine and health sciences during the academic year 2019/2020.

Study Design

A one-group pretest–posttest, quasi-experimental design was conducted among the graduating class of undergraduate midwifery students of Mizan-Tepi University. The choice of this design rests on the practicality and feasibility of educational experimentation given the recent educational disruption.

Study Participants and Sample Size

The Department of Midwifery was purposively selected, taking the experiment’s feasibility into account. Three groups of participants were then randomly selected to obtain 46 experimental-group samples from three clinical attachment sites during the internship. The control group was composed of the same participants as the experimental group; thus, 46 participants were evaluated before and after the intervention, yielding a total sample size of 92. Power analysis indicated that a sample size of 92 subjects would allow the detection of a moderate (0.5) effect size on a t-test with a power of 0.80. The sample size was calculated online under the following assumptions: α = 0.05, β = 0.20, population variance = 1 and effect size = 0.5.
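As a rough cross-check of the power analysis described above, the normal-approximation sample-size formula for a paired t-test can be sketched with only the Python standard library. The function name and formula choice are illustrative assumptions, not the online calculator the authors used; an exact t-based calculation would give a slightly larger n.

```python
from math import ceil
from statistics import NormalDist

def paired_sample_size(effect_size, alpha=0.05, power=0.80):
    """Approximate number of pairs for a two-sided paired t-test,
    using the normal approximation n = ((z_{1-alpha/2} + z_power) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_power = NormalDist().inv_cdf(power)
    return ceil(((z_alpha + z_power) / effect_size) ** 2)

# Under the stated assumptions (d = 0.5, alpha = 0.05, power = 0.80),
# this approximation suggests roughly 32 pairs of observations.
n_pairs = paired_sample_size(0.5)
```

Under these assumptions the approximation comfortably supports the 46 pre/post pairs (92 observations) used in the study.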

The samples were identified at the time of their internship attachment, after their department had placed them in different hospitals such as Gebretsadik Shawo, Mizan-Aman and Shenen Gibe Hospitals. In total, students were placed for their internship across six hospitals. Cohorts of students from randomly selected hospitals were assigned to the control and experimental groups.

Intervention

Initially, before stepping into the intervention (expert patient simulation), the desired outcomes for the intervention experience were outlined. Accordingly, the goal of the simulation was to facilitate students’ learning of clinical judgment skills and team processes. The scenarios were designed to accomplish the goals of the study. The intervention involved six midwives specially trained to portray a patient, along with video, audio and image materials supporting the scenarios.

The intervention (expert patient simulation) was developed using the Clinical Simulation in Nursing standards of best practice. Accordingly, four scenarios were developed across the experiment, following a clinical simulation design template.15 The authors developed the scenarios, and their face and content validity were assessed by faculty members with clinical care and education expertise.

The scenarios included (i) comprehensive antenatal care for normal pregnancy, (ii) hyperemesis gravidarum without complication, (iii) first-trimester bleeding, and (iv) pregnant mother with schistosomiasis (Annex 1). Each scenario was designed to have a 10-minute briefing, a 2-hour simulation and a 20–30-minute debriefing session. Only four students were allowed to participate in the simulation at once. The simulation took eight weeks, and each week, each participant simulated two scenarios.

During the briefing session, students were informed about the simulation’s goal, process and evaluation.

During the simulation, students could discuss and support each other. They took the history from the expert patient, and the pertinent physical findings corresponding to the scenario were provided as audio-video recordings or images. They requested laboratory investigations, and the results were sent back to them by the expert patient. They then discussed the case to reach their diagnosis and, based on it, formulated their management plan.

At the end of the simulation, students justified their actions and decisions at each step. While students were simulating, two senior midwives noted key points for feedback, emphasizing (a) communication with the patient, (b) principles and approach of history taking, (c) history, (d) physical examination, (e) laboratory investigation, and (f) management plan. At debriefing sessions, students were first allowed to self-reflect. The senior midwives then provided feedback focusing on areas done well and areas needing improvement. Finally, students were given direction on how to improve their weak areas, along with learning resources and take-home assignments for the next simulation.

Data Collection Procedure

Data Collection Approach and Instrument

Data were collected by trained senior Bachelor of Science (BSc) midwives and the researchers. Data collectors were selected on professional merit and were oriented and trained on the overall purpose of the research and the data collection techniques. The data were collected before the start and after the end of the experiment using a standardized tool translated into the local language, Amharic. Conceptual equivalence was ensured during translation.

Demographic and Learning Characteristics Questionnaire

Demographic data were collected, including the participant’s age, gender, prior simulation exposure, previous employment as a midwife or care partner, year in school, and reported grade point average.

Clinical Judgment/Competency

Clinical judgment skill competency data were collected using the Creighton Competency Evaluation Instrument (C-CEI®). (add ref) Creighton College of Nursing developed the tool, which has proven helpful and is used by over 190 organizations. It is intended to assess the efficacy of clinical learning in simulation environments. The C-CEI covers 23 general nursing behaviours classified into four categories: (i) assessment; (ii) communication; (iii) clinical judgment; and (iv) patient safety.

A score of one (meets minimum competency), zero (does not meet minimum competency), or NA (not applicable) is assigned to each of the 23 behaviors. Each student’s percentage score is computed by dividing the number of completed items by the number of applicable behaviors. The passing score was set at 50%, the minimum expected proportion of key behaviors to display.
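The scoring rule above can be sketched as a short Python helper. The function names and the example ratings are hypothetical illustrations; the C-CEI itself defines only the 1/0/NA rating scheme and the percentage rule.

```python
def cci_percent(ratings):
    """Percentage score: completed behaviors / applicable behaviors * 100.
    Each rating is 1 (meets minimum competency), 0 (does not), or None (NA)."""
    applicable = [r for r in ratings if r is not None]
    return 100 * sum(applicable) / len(applicable)

def passes(ratings, cutoff=50):
    """True if the student's percentage score meets the cut-off (50% here)."""
    return cci_percent(ratings) >= cutoff

# Example: 3 of 4 applicable behaviors met -> 75.0%, a passing score.
example = [1, 1, 0, None, 1]
```

Here `cci_percent(example)` evaluates to 75.0, so `passes(example)` is True.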

Self-Confidence

Following the completion of the simulated practices, the Learning Satisfaction and Self-Confidence Questionnaire was administered to all students to assess their satisfaction and capacity to exercise good clinical judgment. (add ref) The National League for Nursing created and validated this questionnaire.

Learning Style

This study utilized the Kolb Learning Style Inventory (LSI) to determine each student’s learning preference. (add ref) It is a well-known 12-item questionnaire intended to determine the learner’s preferred learning style along four key scales, each with sub-score reliability greater than 90%. The inventory also indicates whether a person learns best through largely hands-on experiences such as laboratory projects, simulations, and practical applications.

Quality Management

A standardized and validated tool was employed. The questionnaire was properly designed and amended. Data collectors received three days of training. A week before actual data collection began, a pre-test was conducted with approximately 10% of the sample size at the study site; based on its results, the questionnaire was amended and adopted, and the time needed for the interview was estimated. Field supervisors and the principal investigator conducted field checks to ensure quality. The data’s completeness, quality, and consistency were additionally validated by double entry on the day of collection using EpiData.

Operational Definition

  • Critical thinking: the ability to think scientifically and rationally.
  • Clinical reasoning: the ability to reason through clinical cases based on sound knowledge and critical thinking.
  • Clinical judgment: the decision made by a health care provider about a clinical case.
  • Expert patient: a healthy individual specially trained to portray a patient case.

Result

Socio-Demographic and Learning Characteristics of the Study Participants

A total of 46 participants completed the trial, which comprised two months of simulation using expert patients, for a response rate of 100%. All participants were fourth-year midwifery students, and most were male (73.9%). The participants’ mean (SD) age was 23 (2.1) years; their characteristics are listed in Table 1.

Table 1 Socio-Demographic and Learning Characteristics of Study Participants

Clinical Judgment/Competency

Prior to the intervention, the students’ mean competency score was 12.98 (n = 46, SD = 2.53); after the intervention, it was 15.26 (n = 46, SD = 3.26). The average clinical performance of the pre-test and post-test groups was significantly different: the paired-sample t-test showed that the intervention resulted in a substantially higher mean score (t(45) = 9.13, p < 0.001). Using Cohen’s d, the effect of the intervention on clinical judgment competency was substantial, with an effect size of 1.3, p < 0.001. The improvement in students’ clinical skill indicated that the intervention was considerably effective in producing the observed result. Table 2 shows the mean difference in student competence before and after the intervention.
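The analysis reported here (paired-sample t-test plus Cohen's d) can be reproduced from raw scores with a few lines of standard-library Python. The helper below is an illustrative sketch, not the authors' analysis code, and it computes Cohen's d as the mean difference divided by the SD of the differences (the d_z convention), one of several common definitions.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t_and_d(pre, post):
    """Paired-sample t statistic (df = n - 1) and Cohen's d for paired data,
    both computed from the within-subject differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    md, sd = mean(diffs), stdev(diffs)   # mean and SD of the differences
    t = md / (sd / sqrt(n))              # paired t statistic
    d = md / sd                          # effect size (d_z convention)
    return t, d
```

Feeding in the 46 pre/post competency score pairs should reproduce the reported t(45) = 9.13 and the effect size of about 1.3.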

Table 2 Difference in Clinical Judgment Skill Before and After Simulation

The competency evaluation has four subdomains: assessment skill, communication skill, clinical judgment skill, and patient safety. A substantial difference was identified in three of the four sub-domains: patient assessment, patient safety, and clinical judgment competence. Communication skill scores were comparable before and after the intervention, with no statistically significant difference (Table 3).

Table 3 Clinical Decision-Making Skill Subdomains Before and After Simulation

We assessed the proportion of students who achieved the customary 50% passing cut-off or above. Accordingly, 54.3% of students achieved a passing score before the intervention, compared with 71.7% after. The median (50th percentile) clinical decision-making scores of the pre-test and post-test groups were 13 and 16 out of 23 items, respectively.

Self-Confidence and Satisfaction

A matched-sample test was also performed to assess the change in self-confidence after the intervention. According to the Wilcoxon signed-rank test, students were much more confident in their clinical decision-making skill competency following training. The pre-test and post-test medians were 4 and 5, respectively, and the test revealed a significant effect of simulation (W = 1, Z = −3.57, p < 0.001). The proportions of students who expressed high and low self-confidence were also examined.
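For ordinal confidence ratings such as these, the Wilcoxon signed-rank statistic can be computed as below. This is a simplified standard-library sketch under the normal approximation, with no tie-variance or continuity correction (which statistical packages apply), so its z value may differ slightly from packaged implementations.

```python
import math

def wilcoxon_signed_rank(pre, post):
    """Wilcoxon signed-rank W (smaller of the signed rank sums) and the
    normal-approximation z statistic for paired samples."""
    # Keep only non-zero paired differences
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    # Rank absolute differences, averaging ranks for ties
    abs_sorted = sorted(abs(d) for d in diffs)
    def rank(v):
        positions = [i + 1 for i, x in enumerate(abs_sorted) if x == v]
        return sum(positions) / len(positions)
    w_pos = sum(rank(abs(d)) for d in diffs if d > 0)
    w_neg = sum(rank(abs(d)) for d in diffs if d < 0)
    w = min(w_pos, w_neg)
    # Normal approximation: mean and SD of W under the null hypothesis
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return w, (w - mu) / sigma
```

With every student improving, W collapses toward 0 and z becomes strongly negative, the pattern reported above (W = 1, Z = −3.57).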

Correlation of Self-Confidence and Clinical Judgment Skill

According to the evaluation of the simulation sessions, most students in the intervention were excited during practice when feedback was given promptly after the simulation. Physical reactions at the start of the simulation were similar across groups, with signs such as mixing up words while speaking, pausing, talking quickly and repetitively, and shaking hands. However, when evaluated after a series of feedback sessions, the students showed more positive feelings, a more complete approach, and activities with a logical flow. Following the intervention, students’ distinctive characteristics included self-confidence, coherent activities, good planning of patient assessment and management, respectful communication among themselves, and reasonable decision-making.

To investigate the association between clinical judgment skill competency and self-confidence, a Pearson product-moment correlation was used. After the intervention, a statistically significant moderate positive correlation (r = 0.419, p = 0.004) was observed between clinical judgment skill competency and self-confidence.
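The correlation reported above follows the ordinary product-moment formula. The helper below is an illustrative stdlib sketch; for the Spearman variant mentioned in the abstract, the same function would simply be applied to the ranks of the data.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    scale = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return cov / scale
```

By common conventions, a coefficient of r = 0.419 falls in the moderate range (roughly 0.3 to 0.5).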

Discussion

The results showed that expert patient simulation in a simulated environment improved participants’ clinical competency and self-confidence. Furthermore, participants were pleased with the simulated intervention, and written feedback showed that the simulation was well received. The observation that simulations are frequently well accepted by students is extensively backed by the literature, and the result we observed is consistent with prior simulation research. The nature and content of the simulation may have contributed to participants’ satisfaction.16

Moreover, the simulation we constructed used a free and positive approach in which students could learn from one another and refer to any source, and clinically competent professionals provided feedback. The simulation’s material is also consistent with the curriculum and based on real-life scenarios. When a method addresses a variety of learning styles and feedback is timely and positive, students are satisfied with their learning and acquire self-confidence; when a learning strategy targets diverse learning styles, students are more engaged in the topic being taught or simulated.17

Before and after the simulation, participants’ mean clinical judgement competency scores were 12.98 (n = 46, SD = 2.53) and 15.26 (n = 46, SD = 3.26), respectively. The average clinical judgement skill performance of the pre-test and post-test groups was noticeably different: participants’ post-simulation scores differed statistically from their pre-simulation scores (t(45) = 9.13, p < 0.001). Participating in simulation thus appears helpful in enhancing clinical judgment. This outcome is consistent with previous research that assessed the influence of simulation before and after an intervention.18

Several studies have investigated the impact of simulation on students’ clinical judgment. Some distinctions among the studies reviewed should be noted, even though they did not involve expert patients. Meyer et al studied nursing students in a pediatric clinical course, whereas Watson et al studied physiotherapy students. They reported a substantial simulation effect size with a modest sample size. This finding mirrors ours, which showed an effect size of 1.3, p < 0.001.19

The competency evaluation in this study contains four subdomains: assessment skill, communication skill, clinical judgment skill, and patient safety. Significant differences were detected in three of the four subdomains: patient assessment, patient safety, and clinical judgment skill. Communication skill scores were comparable before and after the intervention, with no statistically significant difference. One probable explanation is that the scenarios were not specifically designed to test communication skills.20

While the findings show that substituting a clinical education component with simulation is a realistic clinical option that does not appear to jeopardize students’ ability to obtain professional competencies, simulation should not wholly replace clinical experiences with real patients. The findings of this study are consistent with other studies that use simulation to replace clinical experiences, indicating that simulation may be an adequate replacement for hospital-based clinical experiences in the maternal-newborn clinical area if the simulation educational environment is comparable to the study environment.21

Limitations

We acknowledge the following limitations of this study. A significant limitation is that it involved a relatively small sample from a single department. Involving a larger sample from multiple departments might yield more insight into simulation-based education for improving health science students’ clinical decision-making competency.

Furthermore, the study design relied on the same students to provide baseline and post-simulation clinical decision-making scores. This could introduce scoring bias because these students also served as their own controls. Our original plan was to use a nonequivalent control group; however, the restrictive protocol imposed on clinical attachment teaching due to the COVID-19 pandemic forced us to use a pre-test/post-test design. Given these constraints, real-time assessment in a pre-test/post-test design was the most robust option.

Conclusion

The study revealed that human expert patient simulation is an excellent clinical instruction technique for improving students’ clinical decision-making skill competency. All of the students liked the simulation and received each session well, since it allowed them to participate actively in their learning. Furthermore, the technique creates a good learning atmosphere by giving students full control over their learning. The improvement in students’ clinical judgement competency can also be linked to the constructive post-simulation feedback. Among the competency subdomains, clinical assessment and decision skills showed the largest improvement. Our investigation found no significant differences in students’ communication skills with their patients, which may call for the use of a communication-focused simulation template.

Ethical Approval

Mizan-Tepi University’s College of Health Sciences ethical committee approved the study (ethical clearance number MW/EC/1142/11). An official letter from the college was sent to each concerned entity to gain its cooperation. To protect participants’ rights, each questionnaire included an explanatory letter. All respondents were invited to participate in the study and given thorough explanations of its objectives. Each participant signed a consent form, and anonymity and confidentiality were maintained. Participants’ informed consent included the publication of anonymized responses. They were free to withdraw at any point during the interview, and their participation was not compelled in any way.

Acknowledgments

We want to thank all study participants and data collectors for their contribution to the success of our work.

Author Contributions

All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.

Disclosure

The authors declare no competing interests.

References

1. Department of Economic and Social Affairs, United Nations. Transforming our world: the 2030 agenda for sustainable development 2015 https://sdgs.un.org/publications/transforming-our-world-2030-agenda-sustainable-development-17981.

2. Ethiopian education development roadmap: an integrated executive summary. Addis Ababa; 2018:1–101.

3. Higgs J, Jones MA. Clinical Reasoning in the Health Professions. Oxford: Butterworth-Heinemann; 2000.

4. Yuan H, Kunaviktikul W, Klunklin A, Williams BA. Improvement of nursing students’ critical thinking skills through problem-based learning in the People’s Republic of China: a quasi-experimental study. Nurs Health Sci. 2008;10(1):70–76. doi:10.1111/j.1442-2018.2007.00373.x

5. Alinier G, Hunt B, Gordon R, Harwood C. Effectiveness of intermediate-fidelity simulation training technology in undergraduate nursing education. J Adv Nurs. 2006;54(3):359–369.

6. Team AM& LJ. Playing it safe: simulated team training in the OR. AORN J. 2008;87(4):772–779.

7. Bandura A. The nature and structure of self-efficacy. In: Self-Efficacy: The Exercise of Control. New York: WH Freeman and Company; 1997.

8. Sharples JM, Mahtani KR, Chalmers I, et al. Critical thinking in healthcare and education. BMJ. 2017;357:j2234. doi:10.1136/bmj.j2234

9. Raurell-Torreda M, Romero-Collado A. Simulation-based learning in nurse education: systematic review. J Adv Nurs. 2015;12(6):392–394.

10. ZarifSanaiey N, Amini M, Saadat F. A comparison of educational strategies for the acquisition of nursing student’s performance and critical thinking: simulation-based training vs. integrated training (simulation and critical thinking strategies). BMC Med Educ. 2016;16(1):1–7. doi:10.1186/s12909-016-0812-0

11. Lapkin S, Levett-Jones T. A cost-utility analysis of medium vs. high-fidelity human patient simulation manikins in nursing education. J Clin Nurs. 2011;20(23–24):3543–3552. doi:10.1111/j.1365-2702.2011.03843.x

12. Ladyshewsky RK. A quasi-experimental study of the differences in performance and clinical reasoning using individual learning versus reciprocal peer coaching. Physiother Theory Pract. 2002;18(1):17–31. doi:10.1080/095939802753570666

13. Armstrong L. Learning style considerations are important to teaching critical thinking; 2005.

14. Adams B. Nursing education for critical thinking: an integrative review. J Nurs Educ. 1999;38(3):111–119. doi:10.3928/0148-4834-19990301-05

15. Dale-Tam J, Thompson K. Evolution of a simulation design template at a Canadian academic hospital. Clin Simul Nurs. 2022;73:17–20. doi:10.1016/j.ecns.2022.10.001

16. Vlachopoulos D, Makri A. Effect of games and simulations on higher education: a systematic literature review. Int J Educ Technol High Educ. 2017;10(1):145–156.

17. Gaba DM, Howard SK, Fish KJ, Smith BE, Sowb YA. Simulation-based training in anesthesia crisis resource management (ACRM): a decade of experience. Simul Gaming. 2001;32(2):175–193.

18. Jang A, Moon J. The effect of nursing simulation on the clinical judgment of nursing care for patients with Increased Intracranial Pressure (IICP). Iran J Public Health. 2021;50(10):2055–2064. doi:10.18502/ijph.v50i10.7506

19. Walker CA, Roberts FE. Impact of simulated patients on physiotherapy students’ skill performance in cardio respiratory practice classes: a pilot study. Physiother Can. 2020;72(3):314–322.

20. Bambaeeroo F, Shokrpour N. The impact of the teachers’ non-verbal communication on success in teaching. J Adv Med Educ Prof. 2017;5(2):51–59.

21. Zhong B, Sarkar M, Menon N, et al. Obstetric neonatal emergency simulation workshops in remote and regional South India: a qualitative evaluation. Adv Simul. 2021;6(1):36. doi:10.1186/s41077-021-00187-9

22. Jeffries PR. Simulation in Nursing Education: From Conceptualization to Evaluation. New York: National League for Nursing; 2007.

Creative Commons License © 2023 The Author(s). This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution - Non Commercial (unported, v3.0) License. By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms.