
Assessment Practices of Learning Outcomes of Postgraduate Students in Biomedical and Pharmaceutical Sciences at College of Health Sciences at Addis Ababa University: Student and Faculty Perspectives

Authors: Shibeshi W, Baheretibeb Y

Received 22 March 2023

Accepted for publication 26 June 2023

Published 3 July 2023 Volume 2023:14 Pages 693–706

DOI https://doi.org/10.2147/AMEP.S412755

Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 3

Editor who approved publication: Dr Md Anwarul Azim Majumder



Workineh Shibeshi,1 Yonas Baheretibeb2

1Department of Pharmacology and Clinical Pharmacy, College of Health Sciences, Addis Ababa University, Addis Ababa, Ethiopia; 2Department of Psychiatry, College of Health Sciences, Addis Ababa University, Addis Ababa, Ethiopia

Correspondence: Workineh Shibeshi, Email [email protected]

Introduction: Higher education institutions are under increasing pressure to respond to societal needs, which has in turn led to changes in the types of knowledge, competencies, and skills required of learners. Assessment of student learning outcomes is the most powerful educational tool for guiding effective learning. In Ethiopia, studies on the assessment practices of learning outcomes of postgraduate students in biomedical and pharmaceutical sciences are scarce.
Objective: This study investigated the assessment practices of learning outcomes of postgraduate students pursuing studies in biomedical and pharmaceutical sciences at the College of Health Sciences of Addis Ababa University.
Methods: A quantitative cross-sectional study was conducted using structured questionnaires administered to postgraduate students and teaching faculty members in 13 MSc programs in biomedical and pharmaceutical sciences at the College of Health Sciences of Addis Ababa University. About 300 postgraduate students and 57 teaching faculty members were recruited through convenience sampling. The data collected included assessment methods, types of test items, and student preferences on assessment formats. Data were analyzed using quantitative approaches, descriptive statistics, and parametric tests.
Results: The study indicated that several assessment strategies and test items were practiced without a significant difference across fields of study. Regular attendance, oral questioning, quizzes, group and individual assignments, seminar presentations, mid-term tests, and final written examinations were commonly practiced assessment formats, while short essay and long essay questions were the most commonly used test items. However, students were not commonly assessed for skills and attitude. Students indicated that they most preferred short essay questions, followed by practical-based examinations, long essay questions, and oral examinations. The study identified several challenges to continuous assessment.
Conclusion: Practice of assessing students’ learning outcomes involves multiple methods focusing on assessing mainly knowledge; however, the assessment of skills appears inadequate, and several challenges appear to be hindering implementation of continuous assessment.

Keywords: assessment practice, learning outcomes, postgraduate education, health professional education

Introduction

Higher education institutions are under increasing pressure to respond to emerging societal needs.1 Assessment is known to support teaching-learning processes and guides processes aimed at learners' attainment of applied competencies. Continuous assessment of students' progress is also a mechanism for maintaining quality in higher education.2 The functions of assessment in higher education include monitoring student progress, providing feedback on learning, accountability, certification of learners' achievement, supporting learning and teaching, and informing instruction and curriculum.3–5

Healthcare educational training occurs in several changing contexts which impact the learning of students and their professional practice.6 Graduates should demonstrate a good understanding of complex issues based on sound assessment strategies. Thus, assessment reform has emerged as an academic response to the demands of the health science professions and the need to equip graduates with the necessary competencies. Effective assessment in health science education requires tasks that assess cognitive, psychomotor, and affective domains.7

The issue of educational assessment measures is a global concern.8 Students' capacity for learning and engagement appears to be enhanced by appropriate curricular designs; however, this does not always happen in practice, because it depends on the assessment methods practiced and the feedback given to students.9

The assessment of learning outcomes, especially for postgraduate students, needs to test higher-order cognitive skills.10 The ability of assessments to recognize and reward a student’s performance in these skills is essential. Tests that only reward the recall of knowledge, concepts, and routine techniques are not fit for purpose within the scope of postgraduate education. Assessments also need to be performed continuously throughout the teaching-learning process.

Modules in a competency-based curriculum are designed to forge logical links between learners' needs, aims, learning outcomes, resources, learning and teaching techniques and strategies, and assessment criteria.11 Active learning and continuous assessment are pillars of the instructional process in modular approaches to teaching;12 however, research findings in Ethiopia indicate that current assessment activities are largely summative, assess the development of lower-level cognition, and do not contribute to improvements in students' learning.13 Assessment is also one way of securing quality in higher education.2

Curricula in Ethiopian higher education institutions have shifted to a modular approach in which program modules are designed around competencies, used to determine student workload, and intended to consolidate student-centered teaching.14 The rationale for introducing the modular curriculum in the Ethiopian higher education system was that modularization combines the advantages of performance objectives, self-pacing, active learning, and continuous feedback, and identifies key competencies such as vocational and professional skills, job-specific skills, and transferable skills.10 Addis Ababa University has implemented the modular approach of teaching in graduate programs as policy since 2010, and assessment strategies are clearly described in curricula as formative (continuous assessment, weighted 60%) and summative (weighted 40%) for the assessment of learning outcomes. Training was given to the academic community before implementation of the modular curriculum, but formal studies assessing proper implementation of assessment practices are limited.

In Ethiopia, a few studies have been conducted in undergraduate programs of public universities to evaluate assessment practices, implementation challenges of modularization, and continuous assessment,12,13,15–19 and only one study has been conducted on assessment in postgraduate programs.10 However, studies on assessment practices of postgraduate student learning outcomes in the Ethiopian higher education system are limited, and no study has been published on the assessment of learning outcomes of postgraduate students in biomedical and pharmaceutical sciences in Ethiopia. Therefore, the goal of this study was to evaluate assessment practices (using both student and faculty experiences) of student learning outcomes in postgraduate programs in biomedical and pharmaceutical sciences; the specific aims of the study were: to evaluate the assessment strategies and test items practiced, to evaluate understanding and interpretation of continuous assessment, to explore the challenges to implementation of continuous assessment, and to identify students’ preferences for assessment methods and test items.

Methods

Study Setting

This study was conducted in 7 academic departments and 13 graduate programs in both the School of Pharmacy and School of Medicine of the College of Health Sciences of Addis Ababa University. In 2009, the School of Pharmacy and School of Medicine, and other health sciences programs merged to form the College of Health Sciences of Addis Ababa University. The departments that offer postgraduate courses include the Department of Pharmacology & Clinical Pharmacy, Department of Pharmaceutics & Social Pharmacy, Department of Pharmaceutical Chemistry & Pharmacognosy, Department of Anatomy, Department of Physiology, Department of Biochemistry, and Department of Microbiology and Immunology.

The Study Design and Population

A cross-sectional descriptive study was conducted among postgraduate students and teaching faculty at the College of Health Sciences of Addis Ababa University. A quantitative survey was employed to explore student and faculty perspectives regarding assessment practices. The study population comprised postgraduate students enrolled in Master of Science (MSc) programs in biomedical and pharmaceutical sciences at the College of Health Sciences of Addis Ababa University, including pharmacology, pharmacy practice, pharmaceutics, pharmacoepidemiology and social pharmacy, pharmaceutical analysis and quality assurance, pharmacognosy, regulatory sciences, health supply chain management, anatomy, biochemistry, physiology, microbiology, and parasitology.

The study population also included faculty teaching postgraduate courses in MSc programs.

Sample Size Determination and Sampling Techniques

The total number of postgraduate students in the 13 postgraduate programs at the time of this study was 300, and the total number of teaching faculty members was 57. Therefore, convenience sampling of all available students attending MSc courses and all faculty involved in teaching postgraduate courses in biomedical and pharmaceutical sciences at the College of Health Sciences of Addis Ababa University was used. Including all eligible respondents was justified to partially compensate for the non-response expected during the COVID-19 pandemic, when instruction had shifted to virtual platforms.

Data Collection and Management

Data were collected using structured questionnaires. The questionnaires were prepared based on a literature review10,20–22 and were reviewed by experts, pre-tested, and validated on a small number of respondents (n=24), after which adjustments were made to improve the clarity and appropriateness of the questionnaire items. The questionnaires were administered to postgraduate students and teaching faculty. The data collected included demographic data of students and faculty and various themes of assessment, including assessment strategies, test types, awareness of continuous assessment, challenges to implementation of continuous assessment, and preferences for assessment methods.

For items eliciting respondents' opinions, views, and attitudes about assessment through agreement with a statement, a 5-point Likert scale was used (1 = strongly disagree to 5 = strongly agree). For items addressing the frequency of assessment practices, a 6-point frequency-response Likert-type scale was used (1 = never to 6 = always).

The data were collected from a diverse range of respondents, including postgraduate students from 13 academic disciplines and teaching faculty with diverse qualifications and teaching experience, allowing triangulation of the findings.

Operational Definitions

Assessment is defined as any act of interpreting information about student performance, collected through any means, while the term evaluation is used to mean the process of arriving at judgments using assessment information.

Continuous assessment (formative assessment) is defined as a periodic and systematic method of assessing and evaluating a person’s attributes with feedback given to students.

Data Analysis

Data were cleaned, coded, entered, transformed, categorized, and analyzed using SPSS version 25. The data in this study are qualitative in nature, obtained from subjective opinions, attitudes, and perceptions derived from Likert scales, but were analyzed using quantitative methods. We combined items to generate a composite (Likert-scale) score for each participant, treated the scale as an interval scale, and thus used means and standard deviations as measures of central tendency and dispersion; parametric tests such as the independent-samples t-test and one-way analysis of variance (ANOVA) were used to test differences between means. For variables significant in the ANOVA, Fisher LSD post-hoc multiple comparisons were performed. The reliability of the questionnaire items was checked using the Cronbach’s alpha reliability coefficient; alpha ranged from 0.7 to 0.9, indicating that items were intercorrelated and had acceptable internal consistency. Data were described as frequencies, percentages, and means (standard deviations). Significance was declared at a p-value less than 0.05.
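The reliability check described above can be illustrated with a short pure-Python sketch. The data below are hypothetical, not taken from the study; the function implements the standard formula alpha = k/(k−1) × (1 − Σ item variances / variance of composite scores):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of Likert items.

    items: list of k lists, each holding one item's scores across
    all respondents (e.g., 1-5 Likert ratings). Population variances
    are used; the variance ratio is identical with sample variances
    as long as the choice is consistent.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # composite score per respondent
    item_var = sum(pvariance(col) for col in items)   # sum of per-item variances
    total_var = pvariance(totals)                     # variance of composite scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical responses from 6 students on 3 Likert items (1-5)
item1 = [4, 5, 3, 4, 2, 5]
item2 = [4, 4, 3, 5, 2, 4]
item3 = [5, 5, 2, 4, 3, 5]
alpha = cronbach_alpha([item1, item2, item3])  # approximately 0.89
```

Values in the 0.7 to 0.9 range, as reported here, are conventionally read as acceptable to good internal consistency; in the study itself this computation was performed in SPSS.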

The study data were collected using a questionnaire designed with Likert-scale ratings for opinions, frequencies, attitudes, and perceptions. The Likert scale is one of the most fundamental and frequently used psychometric tools in educational and social sciences research,23 though the analysis of its data remains subject to debate. The Likert scale was developed to measure attitudes, and the typical Likert scale is a 5- to 7-point ordinal scale used by respondents to rate the degree to which they agree or disagree with a statement.24 The literature describes two schools of thought on the analysis of Likert-scale item responses: one considers the Likert scale an ordinal scale and recommends the median and mode as measures of central tendency, the interquartile range as a measure of dispersion, and non-parametric tests; the other considers the Likert scale an interval scale and recommends the mean and standard deviation as well as parametric tests.23,25 However, evidence from both real and simulated data indicates that parametric tests tend to give the correct answer even when the assumption of normally distributed data is violated, so parametric tests are sufficiently robust to yield largely unbiased answers that are acceptably close to “the truth” when analyzing Likert-scale responses.26
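Once the composite Likert score is treated as interval data, the two-group comparisons used in this design reduce to a pooled-variance independent-samples t statistic. The following pure-Python sketch uses hypothetical composite scores, not the study data:

```python
from statistics import mean, variance

def pooled_t(a, b):
    """Independent-samples t statistic with pooled variance, the form
    used to compare composite Likert scores between two groups."""
    na, nb = len(a), len(b)
    # Pooled (df-weighted) sample variance across both groups
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    se = (sp2 * (1 / na + 1 / nb)) ** 0.5  # standard error of the mean difference
    return (mean(a) - mean(b)) / se

# Hypothetical composite scores (means of 5-point items) for two groups
male = [3.8, 4.0, 3.5, 4.2, 3.9]
female = [3.7, 4.1, 3.6, 4.0, 3.8]
t = pooled_t(male, female)
```

The statistic has na + nb − 2 degrees of freedom; in practice the p-value would come from SPSS as in this study, or from a library routine such as scipy.stats.ttest_ind, which performs the same pooled-variance computation by default.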

Results

Sociodemographic Data and Disciplines of Study

This study explored the assessment practices of learning outcomes of postgraduate students in various academic disciplines by considering the opinions and views of both students and faculty members. A total of 231 of 300 graduate students and 43 of 57 faculty members completed the questionnaire, giving an overall response rate of 76.8% (274/357). However, 20 questionnaires from graduate students were incomplete and were excluded from the analysis. The sociodemographic and enrollment data of respondents are presented in Table 1 and Table 2. The majority of student respondents were male (79.6%) and in the age group 20–30 years (55.2%).

Table 1 The Sociodemographic Data of Study Participants

Table 2 The Frequency Distribution of Respondents by Academic Disciplines

Assessment Strategies and Practices

This study indicated that the assessment strategies most commonly employed by instructors, as reported by students, were taking regular attendance, oral questioning in the classroom, quizzes, group and individual assignments, seminar presentations, mid-term tests, and final written examinations; however, observation of practical tasks and self- and peer-assessment were never practiced (Table 3). These findings were reproduced in faculty reports (Table 4). There was no statistically significant difference in assessment strategies across academic disciplines based on either student or faculty responses (p > 0.05 in both cases).

Table 3 Types of Assessment Strategies Practiced in Postgraduate Programs as Reported by Students

Table 4 The Types of Assessment Strategies Practiced in Postgraduate Students as Reported by Faculty

Investigation of the types of assessment items (test types) showed that short essay and long essay questions were predominantly used to assess students’ learning outcomes, while individual and group seminar presentations were often used; however, multiple choice questions (MCQ), true–false questions, and computational problems were rarely used. Other types of assessment, such as oral examinations, practical-based examinations, extended matching items, laboratory reports, and portfolios, were never practiced according to students' responses (Table 5). Findings from faculty respondents indicated similar trends, with oral examinations, laboratory reports, practical exams, and case-based exams likewise never practiced (Table 6). There were statistically significant differences (F=3.67, p<0.001) in assessment items across academic disciplines based on students’ opinions.

Table 5 The Types of Assessment Items Used by Instructors in Postgraduate Programs as Reported by Students

Table 6 The Types of Assessment Items Practiced Among Postgraduate Students as Reported by Faculty

The respondents were also asked how they understand and interpret continuous assessment (CA). For almost all items, students indicated agreement, with an overall item-scale mean of 3.76, indicating correct interpretation of and attitudes toward continuous assessment. However, 36.7% of students held the misconception that continuous assessment can be administered until the student achieves a passing grade (Table 7). Similarly, faculty indicated a correct understanding and interpretation of continuous assessment, with an overall item-scale mean of 3.94; as with students, about 30% of faculty also believed that CA can be administered until the student achieves a passing grade (Table 8). The study participants were also asked about the optimum number of assessment tasks per module for deciding a student's status in the assessment of learning outcomes; both students and faculty converged on four assessment tasks as the optimum.

Table 7 The Graduate Students’ Understanding and Interpretation of the Continuous Assessment

Table 8 The Understanding and Interpretation of Continuous Assessment as Reported by Teaching Faculty

In this study, current assessment strategies and test items were investigated by surveying both students and faculty. Additionally, students were invited to give their opinions on their preferred assessment items or methods. The results (Table 9) indicated that most students agreed on assessment items in the following order of preference, as indicated by item means: short essay questions, practical-based examinations, long essay questions, oral examinations, and multiple choice questions. This finding is interesting, given that about 80% and 70% of students affirmed practical-based examinations and oral examinations, respectively, as their preferences even though these formats were absent from current assessment practice. There was no significant difference across disciplines in preferred assessment types (F=1.489; p=0.13), and no statistically significant difference between genders in preferred assessment types (t=0.791, p=0.430).

Table 9 Students' Preference on Assessment Methods and Test Items

This study also enquired about the challenges to implementation of continuous assessment in postgraduate teaching. The students believed that shortage of instructional time, inadequate teaching and learning materials, large course content, lack of information communication technology facilities, lack of support from academic administrators, and lack of specific manuals and guidelines for assessment were challenges to implementation of continuous assessment (Table 10). Faculty responses indicated the same beliefs as those of student respondents (Table 11). Large course content was the challenge most strongly endorsed by both students and faculty. There was no significant difference across academic disciplines in challenges to implementation of continuous assessment according to student respondents; however, faculty responses indicated statistically significant differences (p=0.013).

Table 10 The Challenges to Implementation of Continuous Assessment in Graduate Teaching as Reported by Graduate Students

Table 11 The Challenges to Implementation of Continuous Assessment in Graduate Teaching as Reported by Faculty

Discussion

The present study was conducted with the goal of evaluating assessment practices (using both student and faculty experiences) of student learning outcomes in postgraduate programs in biomedical and pharmaceutical sciences, along with several specific aims. The findings will serve to initiate further medical education research relevant to improving learning, teaching, and assessment practices in postgraduate programs in Ethiopia and other low-resource settings. The study addressed several aspects of assessment practice, including assessment strategies, types of test items, student preferences on assessment tools, interpretation and understanding of continuous assessment, and challenges to implementation of continuous assessment.

The study of current assessment practices in the study setting indicated that various assessment formats are used by instructors to assess the learning outcomes of graduate students; the main methods are taking regular attendance, oral questioning in the classroom, quizzes, group and individual assignments, seminar presentations, mid-term tests, and final written examinations. However, practical tasks and newly introduced innovative methods such as portfolios, self- and peer-assessment, and simulations27 were not practiced in the study setting. That practical tasks were not used as an assessment tool may be related partly to the lack of advanced laboratories and consumables suitable for postgraduate teaching for the conduct of experiments or demonstrations during instruction of modules, a gap likely mirrored in assessment. Studies recommend validating that assessment methods for learning outcomes are adequate predictors of graduates' future performance. A focus should be placed on formative assessment, such as peer-assessment and self-assessment, which has been historically neglected in the medical sciences but which may be part of the answer to the increasing cohort sizes of recent years.7 Collaborative learning models are one approach to teaching in modular postgraduate education, yet peer-assessment was not practiced in the study setting.
The importance of peer-assessment is clear if we want to test the learner’s ability to collaborate, communicate, assess, and give and receive feedback, all of which are essential attributes of healthcare professionals that can be assured through peer-assessment.28 Assessment for learning by means of formative assessment in medical education can completely change the learning process of postgraduate students.29 More recently, portfolios, self- and peer-assessment, simulations, and other innovative methods have been introduced in higher educational contexts.

Similarly, investigation of the types of tests administered showed that short essay and long essay questions were predominantly used to assess students’ learning outcomes, while individual and group seminar presentations were often used; however, MCQs, true–false questions, and computational problems were rarely used. Oral examinations, practical-based examinations, extended matching items, laboratory reports, and portfolios were never practiced, which clearly indicates deficits in the assessment of skills among graduate students. Short and long essays and MCQs mainly test learners' knowledge. Studies recommend a carefully balanced combination of test items to comprehensively reflect the assessment blueprint. The assessment methods commonly used in both undergraduate and postgraduate medical education are multiple choice questions (MCQ), extended matching questions, essay questions, objective structured clinical examinations (OSCE), and oral assessment.7 The predominant use of short essay and long essay questions in the present study may reflect a faculty practice of focusing on the assessment of knowledge and understanding of subject matter while neglecting the skills component of learning. This observation is further strengthened by the absence of oral and practical examinations and laboratory reports in this study.

There was no significant difference in assessment strategies and types of test items across postgraduate disciplines of study, which may be related to the similarity of the disciplines, i.e., all are basic science fields in the health sciences that may practice nearly similar approaches to teaching-learning and assessment. The findings are somewhat comparable to those reported by Chalchisa.10 Assessment methods should match the competencies being learnt and the teaching formats being used, and multiple assessment methods are necessary to measure students’ knowledge and skills.30 Studies indicate that both knowledge and skills can be tested using current assessment methods, but attitude, comprising teamwork, professionalism, and communication skills, is more difficult to assess.7

Modular curricula were designed and implemented at Addis Ababa University with the objective of training graduates with job-specific skills and competencies. For effective transfer of learning using the modular approach, educational design needs to focus on activating existing knowledge, engaging with new information, and demonstrating competence and application in the real world.31 Studies indicate that continuous assessment is poorly implemented for various reasons.12,13,15 We assessed student and faculty attitudes toward the interpretation of continuous assessment. Our findings indicate that both students and faculty agreed with all items, indicating positive awareness of continuous assessment; however, more than 30% of both students and faculty held the misconception that continuous assessment can be administered until the student achieves a passing grade. This contradicts the main purpose of continuous assessment, which is to drive teaching-learning through the use of assessment results rather than to focus on student achievement, so the finding indicates the need for intervention at both the student and faculty levels. To maximize the benefit of the modular curriculum, its basic principles should be well understood by both students and instructors.2

The challenges to implementation of continuous assessment in postgraduate teaching were also evaluated. Both students and faculty agreed that shortage of instructional time, inadequate teaching and learning materials, large course content, lack of information communication technology facilities, lack of support from academic administrators, and lack of specific manuals and guidelines for assessment were challenges to implementation of continuous assessment. These challenges are also typically reported in other low-resource settings.32–36 Curricular designs need to ensure that educational inputs, standards and policies, and human and material resources are in place before academic programs are launched. One way of securing quality in education is by assessing students’ progress continuously and filling the gaps observed in their skills based on the assessment results.2

At Addis Ababa University, there is no guidance on the standard number of assessment tasks to be administered to graduate students, aligned with the time allotted for students' learning of a particular module. In this regard, the respondents were asked to suggest an optimum number of assessment tasks; both students and faculty members recommended four as the optimum number of assessment tasks to assess student achievement against course-level learning outcomes. This finding could inform standards that mitigate the challenges to implementing continuous assessment, given the scarcity of resources in the study setting.

In this study, students were invited to give their opinions on their preferred assessment items or methods. The results indicated that most students agreed on assessment items in the following order of preference, as indicated by the highest item means: short essay questions, practical-based examinations, long essay questions, oral examinations, and multiple choice questions. This finding is interesting in that about 80% and 70% of students affirmed practical-based examinations and oral examinations, respectively, as their preferences even though these formats were absent from current assessment practice. The investigation of students’ assessment preferences has gained increased attention because such preferences illuminate the factors that drive the learning process and its outcomes.3 Students’ preferences for assessment methods reflect their perception of the learning environment, their learning conceptions, and their approaches to learning; i.e., students’ preferred assessment requirements are strongly related to their approaches to learning.27 In this study, students' preference for short essay questions, practical-based examinations, long essay questions, and oral examinations, in that order, may reflect the approaches to postgraduate learning practiced and students' desire to achieve their learning goals. Assessments using multiple choice questions motivate students toward surface approaches to learning, while open, essay-type questions encourage them to pursue deep understanding of the subject and the achievement of long-term knowledge.37 Research on learning approaches suggests a positive shift toward deep and strategic learning in postgraduate students,38 which implies that assessments in postgraduate studies need to be designed in alignment with such shifts in learning style.
If different groups of students favor different assessment types but achieve similar learning outcomes, knowing those preferences can clearly inform faculty decisions about which types of assessment to plan.39 In this regard, aligning and balancing assessment with students’ preferences and expectations may be recommended.

Conclusion

The study indicated that several assessment strategies and test items were practiced in biomedical and pharmaceutical graduate programs at Addis Ababa University without a significant difference across fields of study, based on the opinions of both students and faculty. Short and long essay questions were not only the most prevalent forms of assessment but also among the most preferred; practical-based examinations and oral examinations were also identified as top preferences. However, assessment tools for measuring practical skills and attitudes were not adequately used, as evidenced by the absence of practical-based assessments such as observations, laboratory reports, portfolios, peer-assessment, and self-assessment. Students and faculty appear to have an adequate understanding and interpretation of continuous assessment; however, several challenges to its implementation were identified, including large course content, lack of information communication technology facilities, lack of teaching-learning materials, shortage of instructional time, and lack of administrative support, manuals, and guidelines.

Ethics Statement

This study was conducted after obtaining ethical clearance from the Centre for Health Sciences Education, College of Health Sciences, Addis Ababa University (Ref: HSE/07/2014, Feb 13, 2022). Participants were recruited after providing written informed consent. Personal identifiers were not included in the questionnaire, and the privacy and confidentiality of participant data were protected.

Acknowledgment

The authors thank the data collectors and study participants.

Disclosure

The authors report no conflicts of interest in this work.

References

1. Beets P. Towards integrated assessment in South African higher education. In: Higher Education in South Africa. Stellenbosch: Sun Press; 2017.

2. Tedla YG, Desta MT. The suitability of the modular curriculum to offer/learn skill in EFL undergraduate classes. Int J Curr Res. 2015;7(4):14686–14696.

3. Holzinger A, Lettner S, Steiner-Hofbauer V, Capan Melser M. How to assess? Perceptions and preferences of undergraduate medical students concerning traditional assessment methods. BMC Med Educ. 2020;20(1):312. doi:10.1186/s12909-020-02239-6

4. Ghaicha A. Theoretical framework for educational assessment: a synoptic review. J Educ Pract. 2016;7(24):212–231.

5. Abeywickrama P. Rethinking traditional assessment concepts in classroom-based assessment. CATESOL J. 2011;23(1):205–213.

6. Mugimu CB, Mugisha WR. Assessment of learning in health sciences education: MLT case study. J Curric Teach. 2017;6(1):21. doi:10.5430/jct.v6n1p21

7. O’Shaughnessy SM, Joyce P. Summative and formative assessment in medicine: the experience of an anaesthesia trainee. Int J Higher Educ. 2015;4(2). doi:10.5430/ijhe.v4n2p198

8. Abdullahi OE, Onasanya SA. Challenges facing the administration of educational assessment measures at the secondary school level in Nigeria. J Appl Sci. 2010;10(19):2198–2204. doi:10.3923/jas.2010.2198.2204

9. Ferris H, O’Flynn D. Assessment in medical education; what are we trying to achieve? Int J Higher Educ. 2015;4(2). doi:10.5430/ijhe.v4n2p139

10. Chalchisa D. Practices of assessing graduate students’ learning outcomes in selected Ethiopian Higher Education Institutions. J Int Cooperat Educ. 2014;16(2):157–180.

11. Sadiq S, Zamir S. Effectiveness of modular approach in teaching at university level. J Educ Pract. 2014;5(17):103–109.

12. Dejene W, Chen D. The practice of modularized curriculum in higher education institution: active learning and continuous assessment in focus. Cogent Educ. 2019;6(1):1611052. doi:10.1080/2331186X.2019.1611052

13. Weldmeskel FM. The Use of Quality Formative Assessment to Improve Student Learning in West Ethiopian Universities [dissertation]; 2015.

14. Chaka D. Practices of EFL Modular Instruction: The Case of Undergraduate Program of English Language and Literature [Ph.D. Thesis]. Addis Ababa: Addis Ababa University; 2016.

15. Seifu WG. Assessment of the implementation of continuous assessment: the case of Mettu university. Eur J Sci Math Educ. 2016;4(4):534–544.

16. Moges B. The implementations and challenges of assessment practices for students’ learning in public selected Universities, Ethiopia. Univ J Educ Res. 2018;6(12):2789–2806. doi:10.13189/ujer.2018.061213

17. Olamo TG, Mengistu YB, Dory YA. Challenges hindering the effective implementation of the harmonized modular curriculum: the case of three public universities in Ethiopia. Creat Educ. 2019;10(7):1365–1382. doi:10.4236/ce.2019.107102

18. Abera G, Kedir M, Beyabeyin M. The implementations and challenges of continuous assessment in public universities of Eastern Ethiopia. Int J Instruct. 2017;10(4):109–128. doi:10.12973/iji.2017.1047a

19. Berhe T, Embiza S. Problems and prospects of implementing continuous assessment at Adigrat University. J Educ Pract. 2015;6(4):19–26.

20. Mussawy SAJ. Assessment practices: students’ and teachers’ perceptions of classroom assessment. Master’s Capstone Projects; 2019.

21. Garratt-Reed D, Roberts LD, Heritage B. Grades, student satisfaction and retention in online and face-to-face introductory psychology units: a test of equivalency theory. Front Psychol. 2016;7. doi:10.3389/fpsyg.2016.00673

22. Sofroniou S, Premnath B, Poutos K. Capturing student satisfaction: a case study on the national student survey results to identify the needs of students in STEM related courses for a better learning experience. Educ Sci. 2020;10(12):378. doi:10.3390/educsci10120378

23. Joshi A, Kale S, Chandel S, Pal DK. Likert scale: explored and explained. Curr J Appl Sci Technol. 2015;7(4):396–403.

24. Sullivan GM, Artino AR. Analyzing and interpreting data from likert-type scales. J Grad Med Educ. 2013;5(4):541–542. doi:10.4300/JGME-5-4-18

25. Harpe SE. How to analyze Likert and other rating scale data. Curr Pharm Teach Learn. 2015;7(6):836–850. doi:10.1016/j.cptl.2015.08.001

26. Norman G. Likert scales, levels of measurement and the “laws” of statistics. Adv Health Sci Educ Theory Pract. 2010;15(5):625–632. doi:10.1007/s10459-010-9222-y

27. Struyven K, Dochy F, Janssens S. Students’ perceptions about evaluation and assessment in higher education: a review. Assess Eval High Educ. 2005;30(4):325–341. doi:10.1080/02602930500099102

28. Boud D, Cohen R, Sampson J. Peer learning and assessment. Assess Evaluat Higher Educ. 1999;24(4):413–426. doi:10.1080/0260293990240405

29. Sharma S, Sharma V, Sharma M, Awasthi B, Chaudhar S. Formative assessment in postgraduate medical education – perceptions of students and teachers. Int J App Basic Med Res. 2015;5(4):S66–S70. doi:10.4103/2229-516X.162282

30. Tabish SA. Assessment methods in medical education. Int J Health Sci. 2008;2(2):3–7.

31. Botma Y, Van Rensburg GH, Coetzee IM, Heyns T. A conceptual framework for educational design at modular level to promote transfer of learning. Innovat Educ Teach Int. 2015;52(5):499–509. doi:10.1080/14703297.2013.866051

32. Asiyai RI. Challenges of quality in higher education in Nigeria in the 21st century. Int J Educ Plan Admin. 2013;3(2):159–172.

33. Osadebe PU. Evaluation of continuous assessment practice by university lecturers. Int J Evaluat Res Educ. 2015;4(4):215–220.

34. Bichi AA, Musa A. Assessing the correlation between continuous assessment and examination scores of education courses. Am Int J Res Human Arts Soc Sci. 2015;2015:15–391.

35. Igomu AC, Solomon IAO. Imperatives of innovative assessment practices for sustainable development in Nigeria. J Econom Sustain Dev. 2015;6(11):259–268.

36. Kitula PR, Ogoti EO. Effectiveness of implementing continuous assessments in Tanzanian Universities. Int J Contemp Appl Res. 2018;5(7):1–8.

37. Entwistle N, Tait H. Approaches to learning, evaluations of teaching, and preferences for contrasting academic environments. High Educ. 1990;19(2):169–194. doi:10.1007/BF00137106

38. Samarakoon L, Fernando T, Rodrigo C, Rajapakse S. Learning styles and approaches to learning among medical undergraduates and postgraduates. BMC Med Educ. 2013;13(1):42. doi:10.1186/1472-6920-13-42

39. Aldrich RS, Trammell BA, Poli S, et al. How age, gender, and class format relate to undergraduate students’ perceptions of effective course assessments. InSight. 2018;13:18–129.

Creative Commons License © 2023 The Author(s). This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution - Non Commercial (unported, v3.0) License. By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms.