
Matching medical student achievement to learning objectives and outcomes: a paradigm shift for an implemented teaching module


Received 3 December 2017

Accepted for publication 7 February 2018

Published 9 April 2018, Volume 2018:9, Pages 227–233

DOI https://doi.org/10.2147/AMEP.S158784




Ihab Shafek Atta,1 Fahd Nasser AlQahtani2

1Pathology Department, Faculty of Medicine (Assuit Branch), Al-Azhar University, Cairo, Egypt; 2Radiology Department, Faculty of Medicine, Albaha University, Al-Aqiq, Saudi Arabia

Introduction: Low student achievement in a basic imaging module was the impetus for an assessment of the module.
Methods: A valid, reliable, and structured Likert scale was designed to measure the degree of student satisfaction with the domains of the module, including learning objectives (LO), teaching strategy and tools (TT), assessment tools (AT), and allotted credit hours (CH). Student dissatisfaction was then analyzed further to determine the subdomain in which module improvement should be implemented. Statistical analysis comparing the Likert scale domains was conducted.
Results: Likert scale data showed the TT domain to be the major reason for low student achievement. Statistical studies revealed 57/117 students (48.6%) were dissatisfied with TT, compared with LO 16/117 (13.6%), AT 54/117 (46.1%), and CH 12/117 (10.2%). Significant P-values were obtained for LO vs TT (P<0.0001), LO vs AT (P<0.0001), LO vs CH (P<0.03), TT vs CH (P<0.0001), and AT vs CH (P<0.0001). No significant difference was observed between TT and AT (P<0.29). Regarding TT, 41/117 (34.9%) students were dissatisfied with lectures (L) compared to hospital-based teaching (HPT) 24/117 (20%), problem-based learning (PBL) 8/117 (6.8%), self-directed learning (SDL) 3/117 (2.5%), and seminars (S) 4/117 (3.4%). Significant P-values were obtained for L vs HPT (P<0.0001), L vs PBL (P<0.0001), L vs SDL (P<0.0001), L vs S (P<0.0001), HPT vs PBL (P<0.002), HPT vs SDL (P<0.0001), and HPT vs S (P<0.0001). Regarding lecture modifications, student satisfaction was 78.3% compared to 52% before modification. A significant P-value (P<0.0001) was obtained between Likert scale domains before and after modification. Lecture modification resulted in a good student response and satisfaction.
Conclusion: The major reason for low student achievement was the teaching tools, particularly the lectures. Major modifications to lectures improved student achievement. The students and most of the teaching staff were highly satisfied with the modifications, which provided for reciprocal discussion and interaction. These results should encourage and guide other medical schools to investigate the points of weakness in their curriculum.

Keywords: teaching strategy, teaching tools, learning objectives, interactive lecture, student performance, radiology teaching, curriculum reform, curriculum evaluation, radiology lecture

Introduction

The integration-based curriculum of the Albaha School of Medicine includes an interesting module, the basic imaging module (course), which is taught in Phase II, level VII (first semester of the fourth academic year). The module is run by a committee composed of experts from radiology, pathology, and medical education. The learning objectives (LO) for the module were formulated in a standardized manner and conform to the SMART criteria (Specific, Measurable, Attainable, Relevant, and Timely).

The time allocated for module implementation is 3 weeks, for a total of three credit units. The module consists of 20 lectures (L), two problem-based learning (PBL) sessions, two self-directed learning (SDL) sessions, four hospital-based teaching (HPT) sessions of 4 hours' duration each, and two seminars (S). Student performance was assessed by several tools: a quiz, a clinical exam, an objective structured clinical examination (OSCE), and a final written exam.

Upon review, the basic imaging module was found to be well documented. In this context, documentation refers to the inclusion of all module domains: aim and goals, LO and outcomes, teaching strategy and tools (TT), assessment tools (AT), and evaluation and feedback tools. These module standards were found to be fully compatible with the basic standards of documentation.1

The module was taught to three cohorts of students during phase II, level VII (first semester of the fourth academic year). Student achievement in this module was lower than in the other courses offered at that phase and level, and the reasons for this low achievement were investigated.

A series of questions was asked: What is the problem with this module? Are the criteria for achievement appropriate and the difficulty level reasonable? Are the examinations valid and reliable? Are the educational goals clear and consistent with the number of credit hours (CH) allotted to the module? Are the methods of teaching consistent with the acquisition of knowledge and understanding by the students? How does student attendance compare with that of other courses? In general, is the educational environment commensurate with, and does it conform to, an ideal environment for implementation of the module?

Answers to these questions should identify the main reasons for student underperformance and should provide a starting point for comprehensive reform of the module. A committee was formed to investigate and answer these questions. The committee was charged with analyzing and identifying module weaknesses, and then introducing changes that would improve delivery and outcomes.2,3

It is important to note that, with regard to examinations, a comprehensive review of all examinations was performed by medical education experts and compared against the module LO; the review included item analysis and the use of difficulty and discrimination indices. The results showed that all examinations conformed to unit quality standards.

Aim of the work

The aim of this study was to investigate the strengths and weaknesses of the basic imaging module, to explain the reasons for student underachievement in this course, and to address the weaknesses identified. This study is intended as a starting point for a comprehensive periodic evaluation of the curriculum.

Materials and methods

This study was conducted after obtaining permission from the Quality and Accreditation Unit of the College Agency for Quality Affairs of Albaha School of Medicine, Albaha University, Saudi Arabia. Written approval was obtained from all participating students.

The first part of this study was conducted with a cohort of 117 students. Their achievement in the module was compared with their achievement in other courses during the same phase and level. Student achievement was defined by the student's grade (A+, A, B+, B, C+, C, D+, D, and F). Low student performance was identified when the number of students attaining high grades (≥B+) was low or zero and the number of failing students was high in comparison with other courses at the same academic level and year.

A well-constructed, valid, structured questionnaire was designed by a committee composed of members of the radiology and pathology departments in collaboration with medical education experts. Questions were formulated and revised thoroughly by the educational experts to ensure questionnaire validity. A pilot study was conducted on two separate groups: one group comprised junior staff members and the other comprised 60 level IV students. Results for both groups were similar, confirming that the questionnaire was reliable. The questionnaire was distributed to 117 students and was designed to measure the levels of acceptance and satisfaction among students regarding the structure and components of the basic imaging module.

The main components of the module consist of several domains: LO, TT, and AT. Each of these domains was further subdivided into several subdomains; for example, teaching tools is a domain, and its components L, HPT, and PBL are subdomains. The questionnaire used a five-point Likert scale,4–7 which measured the degree of student satisfaction with the domains of the module. The scale ranged from 5 to 1 (strongly satisfied to strongly dissatisfied), and students marked their level of satisfaction by circling a point on the scale. Students were also permitted to add qualitative comments.
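As an illustration only (not the authors' instrument or data), the sketch below shows how five-point responses for each domain might be recorded and summarized into satisfaction and dissatisfaction counts of the kind reported in the Results; the domain labels follow the paper, but the response values and the Python implementation are hypothetical.

```python
# Minimal sketch: summarizing five-point Likert responses per module domain.
# 5 = strongly satisfied ... 1 = strongly dissatisfied. All values are hypothetical.
from statistics import mean, stdev

responses = {
    "LO": [5, 4, 4, 3, 5, 2],  # learning objectives
    "TT": [2, 1, 2, 3, 1, 2],  # teaching strategy and tools
    "AT": [2, 3, 1, 2, 2, 4],  # assessment tools
    "CH": [4, 4, 5, 3, 4, 4],  # credit hours
}

for domain, scores in responses.items():
    satisfied = sum(1 for s in scores if s >= 4)     # ratings 4-5
    dissatisfied = sum(1 for s in scores if s <= 2)  # ratings 1-2
    print(f"{domain}: mean={mean(scores):.2f}, SD={stdev(scores):.2f}, "
          f"satisfied={satisfied}/{len(scores)}, dissatisfied={dissatisfied}/{len(scores)}")
```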

In the second part of the study, lecture modification was introduced after the first three iterations of the module. The modification was therefore implemented in the fourth and fifth iterations (a total of 60 students in the following academic years). A module committee including radiology, pathology, and educational experts analyzed the existing lectures in several ways: by examining a selection of lecture PowerPoint presentations from the teaching staff, by in-class expert attendance, and by documenting observations and comments from the student audience and the committee.

Twenty lectures were evaluated and found to be of a traditional type with no active learning. The lectures were completely teacher-centered, with little student understanding of the content, and student attendance was very low.

Based on this evaluation, the committee set criteria and guidelines for the start of a qualitative and radical shift in lectures that was to activate the remainder of the module and to provide a time plan for evaluation of the whole curriculum.

Change was initiated by reducing the number of lectures to no more than 10, with a focus on LO for basic science, principles of imaging techniques, and ethics. Second, interactive lecture tools were introduced, including discussion, questioning, division of students into small groups, periodic summaries, and succinct handouts. The purpose of these changes was to increase interaction between teachers and students, make the lectures more attractive and interesting to students, improve the transfer of knowledge through communication, and raise students' level of understanding and retention of information.

Several workshops were held for all faculty involved in teaching the module. These workshops introduced problems associated with traditional lectures and the importance of more interactive and informative lectures.

It was noted that most faculty were enthusiastic about the change and its implementation in other courses. In contrast, some faculty showed a lack of conviction and satisfaction; these were older faculty members who argued that the change did not suit students of medicine and was time-consuming, with educational objectives too numerous to be fully covered by the module.

The modifications developed in the workshops, together with the Likert scale questionnaire, were implemented with a cohort of 60 students representing the fourth and fifth iterations. The results were compared with those of the 117 students of the first three iterations.

The main statistical analysis was conducted using the independent Student’s t-test. A one-way analysis of variance for global comparisons of all domains was also conducted. SPSS version 17 (SPSS Inc., Chicago, IL, USA) was used for this study. P-values were considered significant if ≤0.05.
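For illustration, a minimal sketch of the two reported analyses (an independent Student's t-test between two domains and a one-way analysis of variance across all domains) is given below using scipy rather than SPSS; the per-student Likert scores are hypothetical placeholders, not the study data.

```python
# Illustrative sketch of the reported analyses; all scores are hypothetical.
from scipy import stats

lo_scores = [5, 4, 4, 3, 5, 4, 2, 5]  # learning objectives (hypothetical)
tt_scores = [2, 1, 2, 3, 1, 2, 2, 3]  # teaching strategy and tools (hypothetical)
at_scores = [2, 3, 1, 2, 2, 4, 3, 2]  # assessment tools (hypothetical)
ch_scores = [4, 4, 5, 3, 4, 4, 5, 3]  # credit hours (hypothetical)

# Pairwise comparison of two domains: independent Student's t-test
t_stat, p_pair = stats.ttest_ind(lo_scores, tt_scores)
print(f"LO vs TT: t={t_stat:.2f}, P={p_pair:.4f}")

# Global comparison across all four domains: one-way ANOVA
f_stat, p_global = stats.f_oneway(lo_scores, tt_scores, at_scores, ch_scores)
print(f"ANOVA across domains: F={f_stat:.2f}, P={p_global:.4f}")
```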

Results

Data analysis of the 117 students showed low student achievement in the basic imaging module relative to other courses in the same phase and level (Figure 1). Analysis of Likert scale results showed that student dissatisfaction was primarily with TT. Statistical analysis found that 57/117 (48.6%) were dissatisfied with TT, compared to LO 16/117 (13.6%), AT 54/117 (46.1%), and CH 12/117 (10.2%). Significant P-values were obtained for LO vs TT (P<0.0001), LO vs AT (P<0.0001), LO vs CH (P<0.03), TT vs CH (P<0.0001), and AT vs CH (P<0.0001). No significant difference was found between TT and AT (P<0.29) (Table 1).

Figure 1 Students’ achievement in the basic imaging module in relationship to other modules at the same phase and level.

Note: A–D and F represent grades.

Table 1 Likert scale results for domains of the basic imaging module

Note: Data presented as number (percentage) and mean ± SD of students and score.

Abbreviation: ANOVA, analysis of variance.

Regarding TT, 41/117 (34.9%) of students were dissatisfied with L, compared to HPT 24/117 (20%), PBL 8/117 (6.8%), SDL 3/117 (2.5%), and S 4/117 (3.4%). Significant P-values were obtained for L vs HPT (P<0.0001), L vs PBL (P<0.0001), L vs SDL (P<0.0001), L vs S (P<0.0001), HPT vs PBL (P<0.002), HPT vs SDL (P<0.0001), and HPT vs S (P<0.0001) (Table 2).

Table 2 Likert scale results for teaching tools

Note: Data presented as number (percentage) and mean ± SD of students and score.

Abbreviations: ANOVA, analysis of variance; HPT, hospital-based teaching; PBL, problem-based learning; SDL, self-directed learning.

Regarding lecture modification, student satisfaction after modification was 78.3% compared to 52% before modification. A significant P-value (P<0.0001) was obtained between Likert scale domains before and after modification (Table 3).

Table 3 Degree of student satisfaction before and after lecture modification

Note: Data presented as number (percentage) and mean ± SD of students and score.

In addition to the statistics in Tables 1 and 2, the "strongly satisfied" and "satisfied" items of the Likert scale, when added together, formed a satisfaction scale opposite to the dissatisfaction scale (neglecting the "neutral" item), as previously described.8 The P-values obtained from a global test for Tables 1 and 2 by independent Student's t-test were significant (P<0.0001 and P<0.009, respectively).
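A minimal sketch of this dichotomization is shown below, using hypothetical counts rather than the study data: the two upper response categories are pooled into a satisfaction scale, the two lower categories into a dissatisfaction scale, and the neutral responses are neglected.

```python
# Sketch of the dichotomization: pool ratings 4-5 (satisfied) and 1-2 (dissatisfied),
# neglecting the neutral rating (3). Counts are hypothetical, not the study data.
likert_counts = {5: 20, 4: 25, 3: 15, 2: 35, 1: 22}  # rating: number of students

satisfied = likert_counts[5] + likert_counts[4]
dissatisfied = likert_counts[2] + likert_counts[1]
considered = satisfied + dissatisfied  # neutral responses are excluded

print(f"satisfied: {satisfied}/{considered} ({100 * satisfied / considered:.1f}%), "
      f"dissatisfied: {dissatisfied}/{considered} ({100 * dissatisfied / considered:.1f}%)")
```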

Student comments included the following: the objectives were too long to be fully understood and not suitable for undergraduates; some teaching tools were time-consuming and of little benefit; the OSCE was too difficult for undergraduates; the module needs more CH to be fully delivered; and the time allocated was not sufficient for the LO to be applied.

Discussion

Many programs have been developed for the study of radiology, but most of those are suitable for residency and postgraduates and are not appropriate for undergraduates. Moreover, most of these programs are not generalizable because they are subject to educational goals and learning outcomes, which in turn depend on the vision and mission of each medical school.9

Low student achievement was the reason for the investigation of the module. The first step of the investigation was a request for student feedback. It is worth noting that at the end of each course there are required evaluations, but the results of these evaluations are rarely accurate, as the majority of students answer the posed questions without reading and understanding their content. Student reactions to these questions are influenced by a number of factors, including the time consumed by the questionnaire, the degree of difficulty of the assessment, and factors related to student–teacher relationships.

In this study, students were asked to read and answer all questionnaires thoroughly and to write comments if needed. Regarding satisfaction, a high percentage of students were strongly satisfied with the LO, while the majority were strongly dissatisfied with the TT. This suggests that many factors interfere with the delivery of knowledge and have a negative effect on student achievement, compromising the intended learning outcomes. Hence, the periodic evaluation of any module or curriculum must be rigorously put into practice and must start from the assessment of outcomes, which should correspond directly to student achievement.

One means by which to improve student learning is assessment of outcomes. Outcomes assessment enables faculty to determine what students know and can do as a result of instruction. This information can supply faculty and academic departments with the means to enhance instruction, course content, and curricular structure. Furthermore, faculty and institutions can use outcomes data to demonstrate the proficiency of graduates to prospective students, college administrators, parents, employers, accreditation organizations, and representatives.2

Another way to improve teaching is to provide individual faculty members, particularly in their first years of teaching, with ongoing formative feedback from both students and colleagues.10,11 Supportive and regular feedback from students permits midcourse adjustments in several areas, such as organization, methods of education, and the introduction of methods to enhance the learning process. It is worth noting that Marsh and Roche12 concluded that feedback gathered at the end of a course had a greater long-term impact on improved teaching than did midcourse evaluation.

In this study, the timing of outcomes assessment and student achievement was at the end of the course, as suggested by Marsh and Roche.12 We found that the major weakness was the method of teaching, either strategy or tools; hence, we selected this domain for the paradigm shift that would enhance student achievement and learning outcomes.

We found a highly significant difference among domains, with the most important being TT (Table 1). With subsequent analysis of teaching tools, we found the most important subdomain to be L (Table 2).

Lecture modification had the greatest positive impact on student satisfaction (Table 3; Figure 2). These results are consistent with those of Nyhsen et al,13 who found that basic information is required to facilitate valuable discussion and that some of this basic knowledge can be competently presented in lectures. Nyhsen et al14 found that medical students have a preference for interactive discussions rather than other teaching methods.

Figure 2 Students’ satisfaction in the teaching tool subdomains.

Abbreviations: HPT, hospital-based teaching; PBL, problem-based learning; SDL, self-directed learning.

Similar observations were made by Camozzi et al,15 who studied the effect of 10 "chalk and blackboard interactive workshops" conducted by expert pediatricians. The attendees were a diverse group of students and residents. The groups met over a 2-day period and discussed 10–15 cases. The role of the expert pediatricians was to promote reasoning, provide supporting information, and correct misstatements. Emphasis was placed on history-taking and examination in a stimulating environment. After a period of more than 3 months, all 37 attendees were asked to evaluate the workshop in comparison with a recent teaching lecture; 30/37 rated the workshop as excellent or above average. The P-value was highly significant between the rating of the workshop and that of lecture-based teaching.

Zou et al16 found that the majority of students prefer teaching by interactive dialogue in small groups, which promotes questions and explanations. In radiology teaching, Maleck et al17 reported that interactive-based teaching provides greater student enjoyment and concentration with significantly better learning outcomes. Ochoa and Wludyka18 reported that interactivity enhances student motivation and stimulates students' ability to think critically.

The significance of enhanced teaching and student performance has a major impact after student graduation. Branstetter et al19,20 showed that students acquire a better understanding of radiology when training is received in preclinical years, which allows for better understanding after graduation. Such students are more likely to request appropriate diagnostic tests when they become clinicians, improving patient care as well as the connection between radiologist and clinician. Furthermore, students may be more likely to prefer radiology as an elective, a research topic, or a vocation.9,21,22

The strengths of this study were assessment of real issues with real students and their actual achievement, implementation of improvements that had measurable effects and, finally, the use of a Likert scale that provided quantifiable results.

There are limitations to this study. Student assessment for the module is based on a quiz, final examination, OSCE, and clinical exam. We only assessed modifications that impacted the L and HPT subdomains; hence, other components of the module were not evaluated, which is considered a weak point of the study. Student performance was measured as a whole after all changes had been implemented; therefore, interim performance and outcomes were not assessed. Also, these results are limited to one specific medical school, one specific region and country, and one specific year of medical school. Therefore, the results may not generalize to other medical students or to residents.

Conclusion

A decline in student achievement was the reason for evaluation of this module. The major reason for the decline, as assessed by a Likert scale, was teaching tools, particularly the lectures. Major changes implemented made the lectures more valuable and worthwhile. The students were highly satisfied with the modifications, which included reciprocal discussion and interaction. This approach may encourage and guide other medical schools to investigate and mitigate weaknesses in their curriculum.

Disclosure

The authors report no conflicts of interest in this work.

References

1. Au W. Teaching under the new Taylorism: high-stakes testing and the standardization of the 21st century curriculum. Journal of Curriculum Studies. 2011;43:25–45.

2. Wang KH, Wang TH, Wang WL, Huang SC. Learning styles and formative assessment strategy: enhancing student achievement in Web-based learning. J Comput Assist Learn. 2006;22:207–217.

3. Banta TW, Suskie L, Walvoord BE. Three assessment tenors look back and to the future. Assess Update. 2015;27(1):3–15.

4. Boynton PM, Greenhalgh T. Selecting, designing, and developing your questionnaire. BMJ. 2004;328:1312–1315.

5. Boynton PM. Administering, analysing, and reporting your questionnaire. BMJ. 2004;328(7452):1372–1375.

6. Rattray J, Jones MC. Essential elements of questionnaire design and development. J Clin Nurs. 2005;16:234–243.

7. Derrick B, White P. Comparing two samples from an individual Likert question. Int J Mathematics and Stat. 2017;18(3):1–13.

8. Lava SA, Simonetti GD, Bianchetti AA, Ferrarini A, Bianchetti MG. Prevention of vitamin D insufficiency in Switzerland: a never-ending story. Int J Pharm. 2013;457(1):353–356.

9. Dawes TJ, Vowler SL, Allen CM, Dixon AK. Training improves medical student performance in image interpretation. Br J Radiol. 2004;77:775–776.

10. Gunderman RB, Alexander S, Jackson VP, Lane KA, Siddiqui AR, Tarver RD. The value of good medical student teaching: increasing the number of radiology residency applicants. Acad Radiol. 2000;7:960–964.

11. Nadeem N, Zafar AM, Ahmed MN. Instituting an undergraduate core clerkship in radiology: initial experiences in Pakistan. J Pak Med Assoc. 2009;59:170–173.

12. Marsh H, Roche L. Making students' evaluations of teaching effectiveness effective: the critical issues of validity, bias, and utility. Am Psychol. 1997;52(11):1187–1197.

13. Nyhsen CM, Steinberg JL, O'Connell JE. Undergraduate radiology teaching from the student's perspective. Insights Imaging. 2013;4(1):103–109.

14. Nyhsen CM, Lawson C, Higginson J. Radiology teaching for junior doctors: their expectations, preferences and suggestions for improvement. Insights Imaging. 2011;2(3):261–266.

15. Camozzi P, Faré PB, Lavagno C, et al. Italo-Swiss "chalk and blackboard interactive 2-day workshop": participants' feedback. Ital J Pediatr. 2015;41:60.

16. Zou L, King A, Soman S, et al. Medical students' preferences in radiology education: a comparison between the Socratic and didactic methods utilizing PowerPoint features in radiology education. Acad Radiol. 2011;18:253–256.

17. Maleck M, Fischer MR, Kammer B, et al. Do computers teach better? A media comparison study for case-based teaching in radiology. Radiographics. 2001;21:1025–1032.

18. Ochoa JG, Wludyka P. Randomized comparison between traditional and traditional plus interactive Web-based methods for teaching seizure disorders. Teach Learn Med. 2008;20(2):114–117.

19. Branstetter BF 4th, Faix LE, Humphrey AL, Schumann JB. Preclinical medical student training in radiology: the effect of early exposure. AJR Am J Roentgenol. 2007;188(1):W9–W14.

20. Branstetter BF 4th, Humphrey AL, Schumann JB. The long-term impact of preclinical education on medical students' opinions about radiology. Acad Radiol. 2008;15(10):1331–1339.

21. Gunderman RB, Alexander S, Jackson VP, Lane KA, Siddiqui AR, Tarver RD. The value of good medical student teaching: increasing the number of radiology residency applicants. Acad Radiol. 2000;7(11):960–964.

22. Roubidoux MA, Packer MM, Applegate KE, Aben G. Female medical students' interest in radiology careers. J Am Coll Radiol. 2009;6(4):246–253.
