Promoting student case creation to enhance instruction of clinical reasoning skills: a pilot feasibility study
Received 29 October 2017; accepted for publication 31 January 2018; published 12 April 2018. Volume 2018:9, Pages 249–257.
Peer reviewers approved by Dr Maria Olenick
Editor who approved publication: Dr Anwarul Azim Majumder
Hamsika Chandrasekar,1 Neil Gesundheit,2 Andrew B Nevins,3 Peter Pompei,4 Janine Bruce,5 Sylvia Bereknyei Merrell6
1Department of Pediatrics, Boston Children’s Hospital, Boston, MA, USA; 2Department of Medicine, Division of Endocrinology, Stanford University School of Medicine, Stanford, CA, USA; 3Department of Medicine, Division of Infectious Diseases, Stanford University School of Medicine, Stanford, CA, USA; 4Department of Medicine, Division of Primary Care and Population Health, Stanford University School of Medicine, Stanford, CA, USA; 5Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA; 6Department of Surgery, Stanford University School of Medicine, Stanford, CA, USA
Background: It is common educational practice for medical students to engage in case-based learning (CBL) exercises by working through clinical cases developed by faculty. While such faculty-developed exercises have educational strengths, learning by this method has at least two major drawbacks: the number and diversity of cases are often limited, and students become less engaged with CBL cases as they grow accustomed to the teaching method. We sought to explore whether student case creation can address both of these limitations. We also compared student case creation to traditional clinical reasoning sessions with regard to tutorial group effectiveness, perceived gains in clinical reasoning, and quality of student–faculty interaction.
Methods: Ten first-year medical students participated in a feasibility study wherein they worked in small groups to develop their own patient case around a preassigned diagnosis. Faculty provided feedback on case quality afterwards. Students completed pre- and post-self-assessment surveys. Students and faculty also participated in separate focus groups to compare their case creation experience to traditional CBL sessions.
Results: Students reported high levels of team engagement and peer learning, as well as increased ownership over case content and understanding of clinical reasoning nuances. However, students also reported decreases in student–faculty interaction and the use of visual aids (P < 0.05).
Conclusion: The results of our feasibility study suggest that student-generated cases can be a valuable adjunct to traditional clinical reasoning instruction by increasing content ownership, encouraging student-directed learning, and providing opportunities to explore clinical nuances. However, these gains may reduce student–faculty interaction. Future studies may be able to identify an improved model of faculty participation, the ideal timing for incorporation of this method in a medical curriculum, and a more rigorous assessment of the impact of student case creation on the development of clinical reasoning skills.
Keywords: case-based learning, undergraduate medical education, student case creation
Plain language summary
This study sought to explore the impact of student “case creation”, or student generation of clinical cases, on students’ perceived clinical reasoning skills, quality of student–faculty interaction, and interaction with fellow teammates. Ten students at Stanford University School of Medicine participated in a pilot study in spring 2014, wherein they worked in groups of five to generate a clinical case around a specified diagnosis using “case creation” as a teaching/learning tool. Students took part in pre- and post-study surveys, as well as student focus groups, designed to look at each of the three designated outcomes. Two faculty members helped facilitate the pilot study and also participated in a faculty focus group. Both qualitative and quantitative analyses were performed. Results suggest that student “case creation” can be an effective means of cultivating teamwork and providing an active, student-initiated means of clinical reasoning instruction. However, these gains may come at the cost of decreased levels of student–faculty interaction. Overall, this study suggests that student “case creation” could be an additional teaching/learning tool for instructors to rely upon when engaging students in clinical reasoning instruction.
Introduction
Clinical reasoning can be a challenging skill to teach early medical students because it requires not only the ability to gather data but also the ability to synthesize and interpret the information. One method of clinical reasoning instruction that has been widely applied is the use of patient-based cases, wherein clinical instructors select specific clinical scenarios designed to allow medical students to practice clinical reasoning in a structured environment.1
Many medical schools, including our own, traditionally use a case-based learning (CBL) format to provide clinical reasoning instruction. CBL is described as a “guided inquiry approach”, wherein small groups of students focus on creative problem solving with some advance preparation, such as reading an assigned textbook chapter.2 Starting in the spring of their first year, Stanford medical students work in groups of six to ten students, with one to two faculty members guiding students through one or more clinical cases over the course of three to four hours.
Previous studies suggest that learning can be enhanced further when learners themselves are encouraged to generate content. In 2004, Ryan and Marlow described a method called “build-a-case”, wherein multiple members of the medical team, including nurses, residents, and faculty, but not including medical students, came together to create a case. The authors suggested that this exercise not only engaged learners in a more reflective dialogue but also addressed one of the greatest challenges in problem-based learning – having only a limited number of cases for learners to solve.3 Palmer and Devitt further built on this concept by engaging medical students in the task of creating multiple-choice questions. They found that students were capable of generating high-quality questions that had potential for inclusion in both formative and summative assessments. However, the students found the activity unfamiliar and, ultimately, the studied group of students did not perform better than the control group.4 Jobs et al also studied student-generated multiple-choice questions, found that the task benefited low-performing more than high-performing students, and concluded that it was not an appropriate method to include in curricula.5
The notion of engaging students in case creation has been examined previously in advanced medical students. A study conducted at Indiana University School of Medicine engaged four Master of Science in Medical Science (MSMS) students in creating cases directed towards first-year MSMS students as well as medical students through a 15-week case-writing course.6 A 2001 study, also at Indiana University, took this one step further, engaging third- and fourth-year medical students in case creation as part of a 4-week course.7 However, in both of these case-writing courses, participants engaged in case creation with the primary goal of furthering other students’ learning and not their own understanding of case material.
To our knowledge, student case creation has not been tested previously as a modality of clinical reasoning instruction in a pre-clerkship medical curriculum. More specifically, case creation has not been previously tested with first-year medical students. We conducted this study to examine three outcome measures: 1) tutorial group effectiveness, 2) student–teacher interaction, and 3) perceived clinical reasoning gains.
Methods
Participants
Ten first-year medical students and two clinical faculty members volunteered to participate in this pilot curriculum. All participants were affiliated with Stanford University School of Medicine. All participants signed a written informed consent form prior to participating in the study. Student participants were compensated with dinner during the study, as well as a $20 gift from the Stanford bookstore.
Structure and content of case creation session
Students participated in one 3-hour pilot study in May 2014. At this point in their curriculum, participants had taken part in five CBL sessions and, as such, were familiar with the structure and content of clinical teaching cases. The ten students were split into two groups of five students each, with one facilitator for each group. During the first 2 hours, students developed patient cases around the preassigned diagnosis of “dilated cardiomyopathy”, with intentionally limited faculty input. Faculty answered a few student questions during the session but otherwise remained as bystanders, allowing for a student-driven process. At the end of the 2-hour period, all students participated in a 1-hour feedback session with the faculty facilitator on case quality, discussing issues such as choice of etiology and providing clues to narrow down the differential diagnosis.
Figure 1 Structure of case creation session.
Abbreviations: SFDP-26, Stanford Faculty Development Program Clinical Teaching Instrument; TGEI, Tutorial Group Effectiveness Instrument.
Student survey feedback
The Stanford Faculty Development Program Clinical Teaching Instrument (SFDP-26)8 was used to assess the level of student–teacher interaction the students experienced during their prior traditional CBL curriculum (the “before” survey, based on recall from prior sessions in the curriculum) and their experiences during the case creation session (the “after” survey). This validated instrument includes twenty-six items in seven domains: learning climate, control of session, communication of goals, promoting understanding and retention, evaluation, feedback, and promotion of self-directed learning.
The Tutorial Group Effectiveness Instrument (TGEI)9 was used immediately after the case creation session to assess the group effectiveness as a result of this exercise and the teamwork the students experienced during the session. This validated survey consisted of twenty statements for students to agree or disagree with, in three different domains: cognitive aspects (eg, “In the tutorial, group explanations of the subject content were given in own words”), motivational aspects (eg, “The tutorial group stimulated my self-study activities”), and demotivational aspects (eg, “During the course of the tutorial, some group members contributed less to the tutorial group discussion”).
Survey responses from both questionnaires were recorded on a 5-point Likert scale (1 = strongly disagree and 5 = strongly agree). Quantitative data from the surveys were analyzed with descriptive statistics, using a paired Student’s t-test to compare pre- and post-study results of the SFDP-26. No statistical adjustment was made for multiple comparisons. In addition, we calculated overall means for students’ responses on both the SFDP-26 and the TGEI (Microsoft Excel for Mac 2011, Version 14.6.2).
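For readers unfamiliar with the analysis, the comparison can be sketched in a few lines of Python. This is an illustrative example only, not the authors' code: the ratings below are hypothetical values for a single SFDP-26 item from ten students, paired by student across the two survey administrations.

```python
# Illustrative sketch (hypothetical data): paired t-test comparing Likert
# ratings for one survey item before (traditional CBL) and after (case
# creation session), paired by student.
import numpy as np
from scipy import stats

before = np.array([4, 5, 4, 4, 5, 4, 3, 4, 5, 4])  # traditional CBL session
after = np.array([3, 4, 3, 4, 4, 3, 3, 3, 4, 3])   # case creation session

# Paired t-test: tests whether the mean within-student difference is zero
t_stat, p_value = stats.ttest_rel(before, after)

print(f"mean before = {before.mean():.2f}, mean after = {after.mean():.2f}")
print(f"paired t = {t_stat:.2f}, P = {p_value:.4f}")
```

With these hypothetical ratings, the item mean falls from 4.2 to 3.4 and the test yields a significant decrease, the same kind of item-level result reported for several SFDP-26 items in Table 1.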
Student and faculty focus groups
Student and faculty participants also took part in separate focus groups at the end of the session (questions in the Supplementary material). Student focus groups were run immediately after the case creation session and were led by students who were involved in the study design. The faculty-specific focus group was also conducted after the session by the primary author (HC), providing faculty with the opportunity to reflect on their facilitation experiences. One student was unable to attend the student focus group and was interviewed separately by the primary author (HC) with the same focus group questions. All participants verbally agreed to be audiotaped during these sessions for the purposes of this study. Audio recordings of the student and faculty focus groups were transcribed and de-identified by the primary author (HC). Three researchers trained in qualitative data analysis inductively developed the codebook over five passes and came to agreement on all code definitions. We then applied the finalized codebook, consisting of eighteen codes, to excerpts across all three transcripts. We tested for inter-rater reliability of all coders and code applications, ultimately reaching high inter-rater reliability, with κ of 0.78. Finally, the research team (HC, NG, JB, SBM) thematically analyzed the coded excerpts to identify emergent themes about student and faculty experiences and evaluate the applicability of the case creation method.10 Dedoose was used to manage the data and facilitate the coding and thematic analysis process (Dedoose, Version 5.2.1.; SocioCultural Research Consultants, LLC, Los Angeles, CA, USA).
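The chance-corrected agreement statistic used above, Cohen's kappa, can be illustrated with a short Python sketch. The codes and coder decisions below are hypothetical, chosen only to show the calculation; they are not the study's data.

```python
# Illustrative sketch (hypothetical data): inter-rater reliability of code
# applications, measured with Cohen's kappa for two coders labeling the
# same twelve transcript excerpts.
from sklearn.metrics import cohen_kappa_score

# Code assigned by each coder to each of twelve excerpts (hypothetical codes)
coder_a = ["ownership", "teamwork", "feedback", "teamwork", "ownership",
           "feedback", "teamwork", "ownership", "feedback", "teamwork",
           "ownership", "feedback"]
coder_b = ["ownership", "teamwork", "feedback", "teamwork", "ownership",
           "teamwork", "teamwork", "ownership", "feedback", "teamwork",
           "feedback", "feedback"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")
```

Here the coders agree on 10 of 12 excerpts (raw agreement 0.83), but kappa is 0.75 because it discounts the agreement expected by chance given each coder's code frequencies; this is why kappa, rather than raw agreement, is the conventional reliability statistic for qualitative coding.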
Human subjects research
This study was approved by the Stanford Institutional Review Board under the exempt review category for research on the effectiveness of instructional techniques, curricula, or classroom management methods.
Results
A total of ten students volunteered to participate in this feasibility study. All ten (100%) students completed the pre- and post-pilot study questionnaires regarding tutorial group effectiveness and quality of student–faculty interaction. Nine (90%) students participated in the focus group, while one student was interviewed separately using the same set of questions.
The comparative results of the SFDP-26 are shown in Table 1. Overall, compared to their traditional CBL sessions, students reported significant increases in the amount of negative (corrective) feedback they received in the case creation session. Students reported decreases in the extent to which facilitators listened to learners, used blackboard or other visual aids, explicitly encouraged further learning, and encouraged learners to do outside reading in their case creation session.
Table 1 Quality of student–teacher interaction in traditional CBL session versus case creation session
Notes: We used a 5-point Likert scale for each statement from the Stanford Faculty Development Program Clinical Teaching Instrument,8 with a range: 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree. Data are presented as mean and standard deviation. *These values are significant at P ≤ 0.05.
Abbreviation: CBL, case-based learning.
Table 2 illustrates students’ perceptions of tutorial group effectiveness during the pilot study on the TGEI. Overall, students reported high levels of teamwork and collaborative learning, with an average response of 4.6 (on a 5-point scale) for overall group productivity.
Table 2 Post-study survey results on tutorial group effectiveness
Notes: We used a 5-point Likert scale for each statement from the Tutorial Group Effectiveness Instrument,9 with a range: 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree. Data are presented as mean and standard deviation. Note that for the items under Factor 3, a lower score is more positive, that is, indicates a decreased level of demotivational factors.
Abbreviation: SD, standard deviation.
Finally, Table 3 describes themes that emerged during the student focus group session (Supplementary material), which explored students’ perceptions of case creation compared to their traditional CBL sessions. General themes were divided into two domains: “comparison to traditional clinical reasoning” and “session design considerations”. Both of these domains contained two to three qualitative themes. The most commonly cited positive features in the student focus group were increased ownership over case content, engagement in clinical reasoning, and understanding of clinical nuances. Students reported that they would have liked more interspersed and directed feedback from faculty members, as well as some additional guidance regarding the etiology of the assigned diagnosis. This latter request stemmed from the fact that both case creation groups spent a significant portion of their two hours discussing the most likely etiology for the assigned diagnosis of dilated cardiomyopathy, leaving limited time to focus on the remainder of the case.
Table 3 Student and faculty perceptions of case creation versus traditional case-based learning sessions
In addition to describing student focus group themes, Table 3 also illustrates themes that emerged during the faculty focus group. The most commonly cited positive features in the faculty focus group were increased levels of engagement and teamwork. Faculty members did, however, express reservations regarding these first-year students’ limited clinical experience and how it constrained their ability to create a patient case on par with a faculty-generated case. In addition, faculty members echoed the student opinion that interspersed faculty feedback would have made the session more productive.
Discussion
Student case creation is feasible and may be a creative adjunct to standard methods of clinical reasoning instruction. Students and faculty alike were notably impressed by the level of engagement and teamwork present during this activity, as reflected by findings from both the TGEI survey responses and focus groups.9 Students felt that they had developed an appreciation for the nuances required to create a patient case, which in turn may contribute to a deeper comprehension of clinical reasoning skills. These results are consistent with findings from a study by White et al suggesting that medical students are less likely to “check out” of classroom studies when they are engaged in active learning.11 In particular, that study recommended shaping a curriculum to allow medical students to progress from solitary learners to collaborative learners in a way that is engaging. By eliciting active student participation, case creation may help early preclinical instruction prepare students for a team-centered workplace.
For both students and faculty, gains appear to be part of a trade-off in the area of student–faculty interactions, as reflected by focus group responses. Both student and faculty participants in the study reported that additional opportunities to interact during the session would have been valuable. Adult learning theory suggests that clinical reasoning can be enhanced by repeated, deliberate exposure to real cases. Moreover, educational theory suggests that the participation of an instructor can augment the value of an educational experience by immediately pointing out and discussing any errors in information, judgment, and reasoning.1 Thus, the design of our pilot study may have been enhanced by embedding additional opportunities for feedback throughout the session.
In their responses to the SFDP-26, students expressed additional reservations, such as limited use of visual aids and decrease in the ability to engage in learning outside of the classroom.8 However, this study presented the case creation method to students as a one-time activity, separate from the other CBL activities in their curriculum. We propose this method as an adjunct to, rather than a replacement for, current methods. We hypothesize that with more consistent use of this case creation method, in combination with traditional CBL and similar curricular activities, the use of visual aids and outside class learning may increase due to opportunities to make connections between topics in the case creation session and topics in the main curriculum.
Faculty members acknowledged that these first-year students had limited clinical experience, which in turn restricted the extent to which they could develop a high-quality patient case. Indeed, previous studies suggest that ideal learning cases are those that are based on real patients, as opposed to patients that are invented.1 However, we suggest that an early introduction to clinical reasoning and case generation still has educational value. Moreover, we would expect that with repeated engagement in the case creation process, students will be better trained to generate exemplary cases not only for their own understanding of content but also for the use of fellow students.
Limitations
Our small sample of ten first-year medical students from a single, private university may not accurately reflect the broader pre-clerkship student population – either at our own or at other institutions. Moreover, our two participating faculty members may not represent the wide range of clinical teaching styles present in various clinical instruction sessions. Thus, our study cohort may have biased both our quantitative and qualitative findings. However, the goal of our study was to determine feasibility of this method first, and then pursue additional opportunities to continue studying this learning modality. Another limitation is that we only evaluated clinical reasoning gains through focus group responses. Script concordance testing (SCT) is a well-known method that has been used to assess clinical reasoning skills.12 It has been shown to have good internal consistency reliability, although some studies have questioned its validity.13,14 Past studies have shown that SCT is useful for evaluating a learner’s clinical reasoning progress over time.15 We did not apply SCT in our study because our primary aim was to determine feasibility and student response to case creation as a curricular activity. Indeed, an important next step will be to examine changes in students’ SCT scores over each case creation session during a longitudinal analysis of the case creation method.
Conclusion
The results of our feasibility study suggest that student-initiated case creation can be an engaging technique for clinical reasoning instruction: it may increase student ownership over content, encourage student-directed learning, and provide opportunities to explore nuances that distinguish various entities in a differential diagnosis. These gains appear to come with a trade-off, however, particularly in the area of student–faculty interaction: both students and faculty reported missing opportunities to interact during the session that might have been valuable. The case creation method may have greater benefit later in the curriculum – for example, in the second or third year of medical school rather than the first – when students have more clinical experience and knowledge to draw upon. Further research may focus on a more rigorous evaluation of this method and how it can be incorporated into clinical reasoning sessions in the pre-clerkship curriculum.
Acknowledgments
The authors gratefully acknowledge the financial support of the Stanford Medical Scholars Fellowship Program. In addition, the authors acknowledge Jai Madhok, who assisted in the design and implementation of this pilot study, as well as Benjamin Robison, who came up with the initial study idea, contributed significantly to study design, and assisted in execution of the pilot study. They also thank the medical student and faculty participants.
Disclosure
The authors declare that they have no competing interests in this work.
References
1. Kassirer JP. Teaching clinical reasoning: case-based and coached. Acad Med. 2010;85(7):1118–1124.
2. Srinivasan M, Wilkes M, Stevenson F, Nguyen T, Slavin S. Comparing problem-based learning with case-based learning: effects of a major curricular shift at two institutions. Acad Med. 2007;82(1):74–82.
3. Ryan DP, Marlow B. Build-a-case: a brand new continuing medical education technique that is peculiarly familiar. J Contin Educ Health Prof. 2004;24(2):112–118.
4. Palmer E, Devitt P. Constructing multiple choice questions as a method for learning. Ann Acad Med Singapore. 2006;35(9):604–608.
5. Jobs A, Twesten C, Göbel A, Bonnemeier H, Lehnert H, Weitz G. Question-writing as a learning tool for students—outcomes from curricular exams. BMC Med Educ. 2013;13:89.
6. Agbor-Baiyee W. Problem-based learning case writing in medical science. Paper presented at: Annual Meeting of the American Educational Research Association; April 1–5, 2002; New Orleans, LA.
7. Peavy DE. A new PBL case-writing course. Acad Med. 2001;76(2):108–109.
8. Litzelman DK, Westmoreland GR, Skeff KM, Stratos GA. Factorial validation of an educational framework using residents’ evaluations of clinician-educators. Acad Med. 1999;74(10 Suppl):S25–S27.
9. Singaram VS, Van Der Vleuten CP, Van Berkel H, Dolmans DH. Reliability and validity of a tutorial group effectiveness instrument. Med Teach. 2010;32(3):e133–e137.
10. Miles MB, Huberman AM, Saldaña J. Qualitative Data Analysis: A Methods Sourcebook. 3rd ed. Thousand Oaks, CA: SAGE Publications, Inc.; 2014.
11. White C, Bradley E, Martindale J, et al. Why are medical students ‘checking out’ of active learning in a new curriculum? Med Educ. 2014;48(3):315–324.
12. Charlin B, Roy L, Brailovsky C, Goulet F, van der Vleuten C. The script concordance test: a tool to assess the reflective clinician. Teach Learn Med. 2000;12(4):189–195.
13. Lubarsky S, Charlin B, Cook DA, Chalk C, van der Vleuten CP. Script concordance testing: a review of published validity evidence. Med Educ. 2011;45(4):329–338.
14. Lineberry M, Kreiter CD, Bordage G. Threats to validity in the use and interpretation of script concordance test scores. Med Educ. 2013;47(12):1175–1183.
15. Humbert AJ, Miech EJ. Measuring gains in the clinical reasoning of medical students: longitudinal results from a school-wide script concordance test. Acad Med. 2014;89(7):1046–1050.
Supplementary material
Table S1 Focus group questions
Abbreviation: POM, practice of medicine.