Medical students’ logbook case loads do not predict final exam scores in surgery clerkship

Received 21 December 2017

Accepted for publication 5 March 2018

Published 18 April 2018. Volume 2018:9, Pages 259–265

DOI https://doi.org/10.2147/AMEP.S160514




Jasim Alabbad,1,2 Fawaz Abdul Raheem,2 Ahmad Almusaileem,1 Sulaiman Almusaileem,1 Saba Alsaddah,2 Abdulaziz Almubarak2

1Department of Surgery, Faculty of Medicine, Kuwait University, Kuwait City, Kuwait; 2Department of Surgery, Mubarak Al-Kabeer Hospital, Jabriya, Kuwait

Purpose: To investigate the reliability of medical student logbook data in assessing student performance and predicting outcomes in an objective structured clinical examination and a multiple-choice examination during the surgery rotation. In addition, we examined the relationship between exam performance and the number of clinical tutors per student.
Materials and methods: A retrospective review was undertaken of the logbooks completed by first and third clinical year medical students at the Faculty of Medicine, Kuwait University, during their surgery rotations in the academic year 2012–2013.
Results: Logbooks of 184 students were reviewed and analyzed: 91 students in the first clinical year and 93 in the third. We did not identify any correlation between the number of clinical encounters and clinical exam or multiple-choice exam scores; however, there was an inverse relationship between the number of clinical tutors encountered during a rotation and clinical exam scores.
Conclusion: Overall, there was no correlation between the volume of self-reported clinical encounters and exam scores. Furthermore, an inverse correlation between the number of clinical tutors encountered and clinical exam scores was detected. These findings indicate a need for reevaluation of the way logbook data are entered and used as an assessment tool.

Keywords: OSCE, assessment, Kuwait, universities, rotation

Introduction

Professional development of medical students occurs during their years of clinical training. Throughout this time, medical students have opportunities to develop their clinical and diagnostic skills and to begin to formulate management plans under the guidance of physicians and clinical tutors. The rationale for this type of practical training is that, as medical students engage with patients more frequently, they will hone their skills, which should eventually translate into improved performance in standardized exams, both in multiple-choice question (MCQ) format and in practical clinical tests.1–3

Medical student logbooks are regular records of students’ observations and experiences; they are intended to allow faculty staff to monitor performance and progress during the various clinical rotations and to provide a means of quantifying and standardizing clinical encounters.2 Logbooks are also designed to assess both the quality of a rotation and the adequacy of a student’s clinical experience. The majority of medical schools require their students to document the cases they encounter. Since students generally spend time in different hospitals during a rotation, logbooks in theory ensure a uniform caseload across hospitals and adherence to the course guidelines and objectives set by faculty staff.

While medical schools place great emphasis on student logbooks, in part to meet accreditation requirements,2 the literature is inconsistent regarding the outcomes associated with their use. The intuitive notion that more frequent clinical encounters will give medical students a better grasp of clinical skills and improved diagnostic abilities has come under scrutiny in the era of contemporary medical education theories. There is growing evidence suggesting that the number of clinical encounters cannot act as a proxy for performance in written and clinical exams.4,5 Meanwhile, the objective structured clinical examination (OSCE) is increasingly used to assess student performance, and evidence of its validity is mounting.6,7 The aim of this study was to investigate the correlation between self-reported logbook caseloads and performance in objective assessment tools (MCQ and OSCE) at the Faculty of Medicine, Kuwait University.

Materials and methods

Study setting

The Faculty of Medicine at Kuwait University offers a 7-year program divided into three phases: premedical, preclinical, and clinical. The clinical phase is a 3-year program during which students rotate in surgery in the first and third clinical years. Students rotate through five different hospitals for a period of 12 weeks during their first clinical year (covering general surgery, orthopedic surgery, and urology) and for a period of 10 weeks during their third clinical year (covering general surgery, accident and emergency, and vascular surgery). At the end of each rotation, students are evaluated by written and clinical exams. Each student is provided with a logbook, which is to be completed during his or her surgical rotation.

Data collection

A retrospective review was performed at the Faculty of Medicine, Kuwait University, of the logbooks completed by first and third clinical year medical students during their surgery rotations in the academic year 2012–2013. The Health Sciences Center Ethical Committee of Kuwait University approved the study. The ethical committee did not require that students provide written consent to participate because the study carried no risk to the participants. A total of 185 students were enrolled in the first and third clinical years during the academic year 2012–2013. All students submitted a complete logbook at the end of their rotations. One student did not attend the final examination and was excluded from the analysis.

Logbooks

Each student was required to maintain a log of his or her clinical experience during the rotation. Clinical encounters were divided into long and short cases. Long cases consisted of taking a full medical history and conducting a physical examination, followed by discussion with a clinical tutor. Short cases consisted of a focused history or clinical examination supervised by a clinical tutor. Patient interactions that were not supervised by or discussed with a clinical tutor (eg, those discussed only with residents) were not logged. No minimum or maximum number of logbook entries was required, and no marks based on entries were awarded toward final grades. When a clinical encounter was logged, the supervising clinical tutor signed the logbook, with the aim of minimizing fabrication. While no marks were awarded for the logbook, failure to return a completed logbook would be deemed to indicate unsatisfactory rotation performance and would prevent the student from sitting the final exam.
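To make these tallies concrete, the following minimal sketch shows one way the per-student quantities used later in the analysis (total encounters, long-case count, and number of distinct supervising tutors) could be derived from signed entries. The record fields (student_id, tutor_name, case_type) are hypothetical illustrations, not the fields of the actual logbook form.

```python
from dataclasses import dataclass

@dataclass
class LogbookEntry:
    """One signed logbook entry (hypothetical fields, for illustration only)."""
    student_id: str
    tutor_name: str  # supervising clinical tutor who signed the entry
    case_type: str   # "long" (full history and examination) or "short" (focused)

def summarize_logbook(entries: list[LogbookEntry]) -> dict:
    """Derive the per-student quantities analyzed in the study:
    total encounters and the number of distinct supervising tutors."""
    return {
        "encounters": len(entries),
        "long_cases": sum(e.case_type == "long" for e in entries),
        "tutors": len({e.tutor_name for e in entries}),
    }

# Example: two long cases and one short case seen under two tutors.
log = [
    LogbookEntry("s001", "Dr A", "long"),
    LogbookEntry("s001", "Dr B", "long"),
    LogbookEntry("s001", "Dr A", "short"),
]
print(summarize_logbook(log))  # {'encounters': 3, 'long_cases': 2, 'tutors': 2}
```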

Assessment of student performance

A comprehensive OSCE was conducted at the end of the year to assess clinical competency; it included assessment of skills in taking patient histories, proficiency in physical examination of both real and simulated patients, communication skills, basic procedural skills, and multimedia clinical vignette-style questions. Each station was 7 minutes long, and standardized checklists were used to evaluate student performance. In addition to the OSCE, an MCQ exam accounted for 40% of the grade.
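As a point of arithmetic only: the MCQ carried 40% of the grade, but the text does not state how the remainder was apportioned. The sketch below assumes, purely for illustration, that the OSCE contributed the remaining 60%.

```python
def final_grade(osce_pct: float, mcq_pct: float, mcq_weight: float = 0.40) -> float:
    """Weighted final grade. The 60% OSCE share is an assumption;
    the article states only that the MCQ accounted for 40%."""
    return (1 - mcq_weight) * osce_pct + mcq_weight * mcq_pct

# With the cohort means reported in the Results (OSCE 76.7%, MCQ 63.0%):
print(round(final_grade(76.7, 63.0), 1))  # 71.2
```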

Outcomes

Primary outcome

Our hypothesis was that the number of cases students log does not correlate with their scores in the OSCE or MCQ exams.

Secondary outcome

Our secondary hypothesis was that interaction with a higher number of clinical tutors is positively correlated with OSCE and MCQ results. Due to the various teaching locations, each student encountered a different number of supervising clinical tutors during surgical rotations. We calculated the number of supervising clinical tutors from each logbook and correlated total numbers with OSCE and MCQ scores.

Statistical methods

Continuous variables were summarized as means and standard deviations (SDs). Categorical variables were described as frequencies and percentages. Univariate linear regression analyses were used to evaluate the associations of the number of clinical encounters and the number of clinical tutors with OSCE and MCQ scores. Statistical analyses were performed using IBM® SPSS® Statistics version 24 (IBM Corp., Armonk, NY, USA). P-values ≤0.05 were considered statistically significant.
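For readers who want to reproduce this style of analysis outside SPSS, the sketch below runs one such univariate regression in Python with scipy.stats.linregress. The data are synthetic, drawn independently to match the cohort means and SDs reported in the Results, so they carry no true association; the snippet illustrates the procedure, not the study’s data.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(seed=0)
n = 184  # number of analyzed students

# Synthetic predictor and outcome, matched to the reported means/SDs;
# generated independently, so no true association exists here.
encounters = rng.normal(loc=56.9, scale=19.2, size=n)
osce = rng.normal(loc=76.7, scale=9.2, size=n)

fit = linregress(encounters, osce)
print(f"slope={fit.slope:.3f}, r={fit.rvalue:.3f}, p={fit.pvalue:.3f}")
# As in the study, a p-value above 0.05 would be read as no significant
# association between encounter volume and OSCE score.
```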

Results

All students completed their logbooks and submitted them at the end of their rotations. A total of 184 student logbooks were analyzed. Ninety-one students (49.5%) were in the first clinical year and 93 (50.5%) were in the third clinical year. One hundred and five students (57.1%) were female, and 79 students (42.9%) were male.

Students reported a total of 10,472 clinical encounters, with a mean of 56.9 encounters per student (SD = 19.2). Long cases comprised 38.8% of total encounters. The mean number of supervising clinical tutors was 13.1 per student (SD = 4.3). Mean OSCE and MCQ scores were 76.7% (SD = 9.2%) and 63.0% (SD = 11.3%), respectively.

Univariate linear regression analyses indicated that the number of clinical encounters did not correlate with OSCE or MCQ scores (Table 1 and Figures 1 and 2). Separation of the cohorts by study year indicated similar results for junior students; however, we identified a positive correlation between the number of clinical encounters and MCQ/OSCE scores for senior students (Table 2). There was also an inverse correlation between the total number of clinical tutors per student and OSCE and MCQ scores (Table 1 and Figures 3 and 4).

Table 1 Correlations of logbook data with OSCE and MCQ scores

Notes: (a) Dependent variable: OSCE. (b) Dependent variable: MCQ.

Abbreviations: OSCE, objective structured clinical examination; MCQ, multiple-choice question examination.

Figure 1 Correlation between the number of clinical encounters and OSCE score.

Abbreviation: OSCE, objective structured clinical examination.

Figure 2 Correlation between the number of clinical encounters and MCQ score.

Abbreviation: MCQ, multiple-choice question examination.

Table 2 Correlations of logbook data with OSCE and MCQ scores according to student year of study

Notes: (a) Dependent variable: OSCE. (b) Dependent variable: MCQ.

Abbreviations: OSCE, objective structured clinical examination; MCQ, multiple-choice question examination.

Figure 3 Correlation between the number of clinical tutors and OSCE score.

Abbreviation: OSCE, objective structured clinical examination.

Figure 4 Correlation between the number of clinical tutors and MCQ score.

Abbreviation: MCQ, multiple-choice question examination.

Discussion

In this study, we explored the value of surgical student logbook data as an assessment tool. Overall, the number of clinical encounters did not correlate with clinical competence as assessed by OSCE or MCQ. These results confirm those of previous studies that failed to identify a correlation between the volume of clinical encounters logged by students and their clinical competence.1,8,9 However, when students were stratified by study year, the first clinical year data likewise supported the findings of previous investigations, whereas the third-year data revealed a positive correlation between the number of clinical encounters and exam scores. This may be because senior students tend to be selective in their choice of settings, favoring those that provide greater opportunities for learning.

A review of the medical education literature supported our findings. Huang et al1 studied the correlation between student logbooks in various specialties, including surgery, and final clerkship grades; they found that the volume of clinical experience did not translate into superior clerkship grades. Similarly, Poisson et al9 evaluated student logbooks during neurology clerkship rotations and concluded that higher numbers of encounters did not correlate with written test scores or clinical evaluations. Moreover, Martin et al8 found no association between self-reported clinical experience and OSCE performance for final-year medical students. Poisson et al9 hypothesized that larger numbers of patient encounters may in fact be detrimental to students, as they reduce the amount of time spent studying, resulting in less satisfactory outcomes. Similarly, a study conducted in England demonstrated a negative relationship between an increased number of bedside teaching sessions and student performance in the OSCE.4 In contrast, a study by Kim and Myung3 identified a weak positive correlation between the number of patient encounters and student performance, especially in physical examinations.

Nevertheless, these studies are challenged by a major confounding factor that may not have been taken into consideration: the impact of feedback. Constructive, timely feedback positively influences clinical competence.10,11 A study by Chatenay et al5 further supported this hypothesis by demonstrating that the quality of feedback, rather than the volume of clinical experience, influenced student outcome scores; immediate feedback given after students presented their assessments had a positive effect. This reinforces the concept that the volume of clinical encounters is of limited educational value without appropriate guidance and constructive feedback. While the quality of feedback was not measured directly in our study, our logbook protocol mandated that students log cases discussed with clinical tutors, who were instructed to provide feedback following these encounters.

Furthermore, in all of these studies, correlations were made with in-hospital clinical encounters. In contemporary medical education, additional avenues for learning that were not previously available should also be considered. These learning pathways may compensate for limited bedside teaching and low numbers of clinical encounters. The use of novel strategies such as simulation labs, online lectures, and instructional videos to consolidate skills, in conjunction with independent study, may better prepare students for OSCEs and eventually enhance clinical competence.4

Interestingly, further analysis of our data revealed an inverse correlation between the number of clinical tutors reported in logbooks and exam scores. We interpret these results to indicate that frequent changes of clinical tutor may negatively influence student progression, suggesting that consistency of tutors facilitates knowledge advancement.

Our aim was not to render the logbook redundant; rather, we propose that logbooks be redesigned with a more constructive purpose. While the logbook in its current form might not be an optimal assessment tool, Patil and Lee12 encourage the use of logbooks as a means of interaction between students and tutors, promoting immediate feedback on learning objectives and activities and serving as a vehicle for continuous assessment during the rotation.

This study is limited by its retrospective design and by the expected reporting bias of students. We did not account for the specific diagnoses logged and relied only on the numbers of cases logged. In addition, we did not account for cases that were not discussed with clinical tutors or those that students may have seen independently or accompanied by resident clinicians. We believe that logbooks still have value in helping to regulate the medical student clerkship and to ensure a uniform educational experience. Future research should identify the essential elements required to produce a high-yield clinical encounter that can be used to predict student performance.

Conclusion

Traditionally, logbooks have provided a means of attempting to standardize the way in which medical education is delivered and evaluated. Overall, our study found no correlation between the volume of self-reported clinical encounters and exam scores. In addition, an inverse correlation between the number of tutors per student and exam scores was identified. These findings indicate the need for reevaluation of the way logbook data are entered and used as an assessment tool.

Author contributions

All authors contributed toward data analysis, drafting and revising the paper and agree to be accountable for all aspects of the work.

Disclosure

The authors report no conflicts of interest in this work.

References

1. Huang GC, Almeida JM, Roberts DH. Reaching the limits of mandated self-reporting: clinical logbooks do not predict clerkship performance. Med Teach. 2012;34(3):e185–e188.

2. Denton GD, DeMott C, Pangaro LN, Hemmer PA. Narrative review: use of student-generated logbooks in undergraduate medical education. Teach Learn Med. 2006;18(2):153–164.

3. Kim JY, Myung SJ. Could clinical experience during clerkship enhance students’ clinical performance? BMC Med Educ. 2014;14:209.

4. Jolly BC, Jones A, Dacre JE, Elzubeir M, Kopelman P, Hitman G. Relationships between students’ clinical experiences in introductory clinical courses and their performances on an objective structured clinical examination (OSCE). Acad Med. 1996;71(8):909–916.

5. Chatenay M, Maguire T, Skakun E, Chang G, Cook D, Warnock GL. Does volume of clinical experience affect performance of clinical clerks on surgery exit examinations? Am J Surg. 1996;172(4):366–372.

6. Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examination. BMJ. 1975;1(5955):447–451.

7. Zayyan M. Objective structured clinical examination: the assessment of choice. Oman Med J. 2011;26(4):219–222.

8. Martin IG, Stark P, Jolly B. Benefiting from clinical experience: the influence of learning style and clinical experience on performance in an undergraduate objective structured clinical examination. Med Educ. 2000;34(7):530–534.

9. Poisson SN, Gelb DJ, Oh MF, Gruppen LD. Experience may not be the best teacher: patient logs do not correlate with clerkship performance. Neurology. 2009;72(8):699–704.

10. Dolmans DH, Gijselaers WH, Moust JH, de Grave WS, Wolfhagen IH, van der Vleuten CP. Trends in research on the tutor in problem-based learning: conclusions and implications for educational practice and research. Med Teach. 2002;24(2):173–180.

11. Van Hell EA, Kuks JB, Raat AN, Van Lohuizen MT, Cohen-Schotanus J. Instructiveness of feedback during clerkships: influence of supervisor, observation and student initiative. Med Teach. 2009;31(1):45–50.

12. Patil NG, Lee P. Interactive logbooks for medical students: are they useful? Med Educ. 2002;36(7):672–677.
