
Competency-based tool for evaluation of community-based training in undergraduate medical education in India – a Delphi approach

Authors Shewade HD, Jeyashree K, Kalaiselvi S, Palanivel C, Panigrahi KC

Received 4 October 2016

Accepted for publication 27 January 2017

Published 10 April 2017 Volume 2017:8 Pages 277–286

DOI https://doi.org/10.2147/AMEP.S123840


Hemant Deepak Shewade,1,2 Kathiresan Jeyashree,3 Selvaraj Kalaiselvi,4 Chinnakali Palanivel,5 Krishna Chandra Panigrahi2

1Department of Operational Research, International Union Against Tuberculosis and Lung Disease (The Union), South-East Asia Office, New Delhi, 2Department of Community Medicine, Indira Gandhi Medical College and Research Institute, Puducherry, 3Department of Community Medicine, Velammal Medical College Hospital and Research Institute, Madurai, 4Department of Community Medicine, Pondicherry Institute of Medical Sciences, 5Department of Preventive and Social Medicine, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India

Introduction: A community-based training (CBT) program, where teaching and training are carried out in the community outside of the teaching hospital, is a vital part of undergraduate medical education. Worldwide, there is a shift to competency-based training, and CBT is no exception. We attempted to develop a tool that uses a competency-based approach for assessment of CBT.
Methods: Based on a review of competencies, we prepared a preliminary list of major domains with items under each domain. We used the Delphi technique to arrive at a consensus on this assessment tool. The Delphi panel consisted of eight purposively selected experts from the field of community medicine. The panel rated each item for its relevance, sensitivity, specificity, and understandability on a scale of 0–4. Median ratings were calculated at the end of each round and shared with the panel. Consensus was predefined as at least 70% of the experts giving a rating of 3 or above for an item under relevance, sensitivity, and specificity. If an item failed to achieve consensus after being rated in 2 consecutive rounds, it was excluded. Anonymity of responses was maintained.
Results: The panel arrived at a consensus at the end of 3 rounds. The final version of the self-assessment tool consisted of 7 domains and 74 items. The domains (number of items) were Public health – epidemiology and research methodology (13), Public health – biostatistics (6), Public health administration at primary health center level (17), Family medicine (24), Cultural competencies (3), Community development and advocacy (2), and Generic competence (9). Each item was given a maximum score of 5 and minimum score of 1.
Conclusion: This is the first study worldwide to develop a tool for competency-based evaluation of CBT in undergraduate medical education. The competencies identified in the 74-item questionnaire may provide the base for development of authentic curricula for CBT.

Keywords: competency-based education, questionnaire design, Delphi technique, community medicine, community education, India

Introduction

A community-based training (CBT) program, in which teaching and training are carried out in the community outside the teaching hospital, is a vital part of undergraduate medical education (UGME).1–3 In India, CBT is managed by the Department of Community Medicine or Preventive and Social Medicine. CBT is offered from the first year of the Bachelor of Medicine and Bachelor of Surgery (MBBS) course with the objective of orienting students to community-based health care services. Through CBT, students are trained in all 4 core disciplines of community medicine: family medicine, epidemiology, health promotion, and health management.4

In a developing country like India, where the predominant section of the population lives in villages and suburban areas, teaching in tertiary care hospitals alone does not equip students with the skills essential for working in the community. They have to be trained to work at all levels of the health care delivery system. The Reorientation of Medical Education scheme, though not as successful as it was conceived to be, is one of the notable attempts to deliver CBT effectively.5,6

In UGME worldwide, there is a shift toward competency-based training, as also recommended by an expert group commissioned by the World Health Organization.7,8 The Medical Council of India (MCI), the apex body that regulates medical education in India, has recommended a shift toward a competency-based approach in its Vision 2015 document.9 This change in teaching approach also necessitates a change in the assessment methods used.10

The National Health Mission in India emphasizes the need for competent health care providers in rural areas. Unlike their counterparts in secondary or tertiary care, medical officers working in the primary care system do not have the opportunity to work under an experienced health care team. They have to be equipped with skills in clinical judgment and in administration of the health center. Beyond this, they have to train their team of paramedical and frontline workers in the community. It is therefore the responsibility of the medical education system to ensure that candidates have acquired these essential competencies before they graduate and venture into the community on their own.

Currently, no competency-based assessment tools are available for CBT. We attempted to develop a tool that uses a competency-based approach for the assessment of CBT. In this paper, we describe the development of a 74-item competency-based questionnaire using the Delphi technique. The psychometric properties of the 74-item questionnaire and the development of an abridged 58-item self-assessment questionnaire using exploratory factor analysis are described elsewhere.11

Methods

Study setting

In India, the medical graduation course (MBBS) spans 4 and a half years (9 semesters), followed by a year of internship. Community medicine is taught from the first year through the 7th semester, via theory sessions and CBT. There are 3 clinical postings under CBT, during which undergraduate students are posted in rural/urban health training centers for 4 weeks each. In addition to clinical postings, students undergo the Family Health and Advisory Programme, during which they follow up a family allotted to them (in the community they serve) through weekly home visits. Students appear for a final theory and clinical/practical examination in community medicine at the end of the 7th semester. They are then posted in the community as part of the Compulsory Rotatory Residential Internship (CRRI) for a period of 2 months, during which they are expected to practice all the competencies gained.

Questionnaire development – Delphi technique

Based on a review of the competencies to be acquired under a CBT program, we developed a conceptual framework using 6 core competencies adapted to the Indian context and prepared a preliminary list of major domains with items under each domain.12 While deciding on the items, we considered the student's exposure to the competency during training and the practical requirement of the competency in his/her future day-to-day practice of community medicine. We designed it as a self-rated questionnaire in which the student rates his/her own competencies. The rating scale for each item was a Likert scale ranging from "much above average" (score=5) to "much below average" (score=1).

We used the Delphi technique to develop the questionnaire and arrive at a consensus.13–16 We purposively selected a panel of experts (n=8) in community medicine. The prerequisite was a minimum of 3 years' experience in health service provision and/or teaching and training and/or research after postgraduation (MD) in community medicine. The members who constituted the expert panel are listed in Table 1. The principal investigator (HDS) facilitated the process but was not part of the panel. Email was the medium of communication with the experts. All experts knew who constituted the panel, but anonymity of responses was maintained.

Table 1 Panel of experts for consensus building (Delphi technique) to develop competency-based self-assessment questionnaire in community-based training program


Abbreviation: SN, serial number.

Each Delphi round spanned 2 weeks: during the 1st week, the panel experts gave their comments; during the 2nd week, the facilitator compiled them. In light of the comments, the questionnaire was revised and recirculated among the experts.

In round 1, we shared the preliminary draft of the questionnaire and a rating sheet. Experts rated each item (close-ended response) in part I of the rating sheet and gave suggestions (open-ended) in part II. The facilitator requested the experts to rate each item, on a scale from 0 (poor) to 4 (good), under the heads of relevance, sensitivity, specificity, and understandability (Box 1). We sought suggestions to improve the understandability, language, and wording of items, and welcomed comments on adequacy and on any additional items for inclusion. We also requested the experts to suggest alternate options for the response scale used in the questionnaire. At the end of each round, the median rating received by each item was shared with the experts, and the comments were shared anonymously.

Box 1 Operational definition of the terms used by the Delphi panel to rate each item in the questionnaire: sensitivity, specificity, relevance, and understandability.

The iterative process continued until consensus was reached. Consensus was predefined as at least 70% of the experts giving a rating of 3 or above for an item under relevance, sensitivity, and specificity (content validity). An item that did not achieve consensus was rated again by the experts in light of the compiled ratings and open-ended comments from the previous round. If an item failed to achieve consensus after being rated in 2 consecutive rounds, it was excluded. An item that achieved consensus was not put up for rating again in subsequent rounds. Where experts had given poor ratings under "understandability", we requested suggestions for improvement.
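To make the item lifecycle explicit, the following minimal sketch (ours, not part of the original protocol) expresses the decision rule applied to each item at the end of a round:

```python
def next_action(consecutive_failed_ratings, achieved_consensus):
    """Decision rule for an item at the end of a Delphi round.

    An item that achieves consensus is retained and never re-rated; an item
    that fails to achieve consensus in 2 consecutive ratings is excluded;
    any other item is re-rated in the next round alongside the compiled
    ratings and anonymized comments.
    """
    if achieved_consensus:
        return "retain; do not rate again"
    if consecutive_failed_ratings >= 2:
        return "exclude from questionnaire"
    return "re-rate in next round"
```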

Pretesting

The draft of the questionnaire at the end of the Delphi process was shared with 3 students (2 female, 1 male). The principal investigator spent 45 minutes with each student, discussing the relevance; the adequacy of concepts, language, and responses; understandability; and any other difficulties they faced in interpreting the questionnaire.

Data entry and analysis

The scores assigned to each item under the 4 criteria were entered into Microsoft Excel. Median ratings were calculated at the end of each round. The extent of consensus at the end of each round was calculated as the percentage of experts assigning a score of 3 or more under each characteristic of an item.
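For illustration, this aggregation is simple enough to reproduce. The sketch below is ours (written in Python rather than the Excel workflow actually used, with made-up ratings); it computes the median rating and extent of consensus for a single item, applying the predefined 70% threshold on relevance, sensitivity, and specificity:

```python
from statistics import median

# Ratings given by the 8 panel experts for one item, on the 0 (poor) to
# 4 (good) scale, under each of the four criteria (illustrative values only).
item_ratings = {
    "relevance":         [4, 3, 4, 3, 3, 4, 2, 3],
    "sensitivity":       [3, 3, 4, 2, 3, 3, 3, 4],
    "specificity":       [3, 4, 3, 3, 2, 3, 4, 3],
    "understandability": [4, 4, 3, 3, 4, 3, 3, 4],
}

def extent_of_consensus(ratings):
    """Percentage of experts assigning a score of 3 or more."""
    return 100 * sum(r >= 3 for r in ratings) / len(ratings)

# Median ratings, fed back to the panel at the end of each round.
medians = {criterion: median(r) for criterion, r in item_ratings.items()}

# Consensus requires >= 70% of experts to rate the item 3 or above on
# relevance, sensitivity, and specificity; poor "understandability"
# ratings triggered rewording suggestions rather than exclusion.
consensus = all(
    extent_of_consensus(item_ratings[c]) >= 70
    for c in ("relevance", "sensitivity", "specificity")
)

print(medians)
print("Item achieves consensus:", consensus)
```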

Ethics

The Institute Research Committee of Indira Gandhi Medical College and Research Institute (IGMCRI), Puducherry, approved the study. Written informed consent via email was obtained from each of the Delphi panel experts before including them in the panel.

Results

Questionnaire development

All eight experts participated in each round of the Delphi process. The preliminary draft of the questionnaire had 81 items across 6 domains, namely "Public health", "Family medicine", "Cultural competence", "Community development and advocacy", "Research and evidence-based practice", and "Generic competence".

The details of the modification of the tool in each of the Delphi rounds are presented in Table 2. At the end of round 1, 59 of the 81 items had attained consensus, and 19 new items were suggested. Thus, a total of 41 items (the remaining 22 plus the 19 new items) were presented for rating by the expert group in the second round, along with the compiled ratings and open-ended comments (blinded) from the previous round. Items were rearranged under some domains and certain items were rephrased as suggested by the experts. Two domains, namely "Public health" and "Research and evidence-based practice", were regrouped into 3 domains: "Public health – epidemiology and research methodology", "Public health – biostatistics", and "Public health administration at primary health center (PHC) level".

Table 2 Details of the modification of the tool in each of the Delphi rounds


Note: *Two domains, namely “Public health” and “Research and evidence-based practice” were regrouped to form 3 domains namely “Public health – epidemiology and research methodology”, “Public health – biostatistics”, and “Public health administration at primary health center level”.


Abbreviation: NA, not applicable.

At the end of round 2, 4 of the 22 items rated for the second time achieved consensus, leading to deletion of the remaining 18 items. Of the 19 newly added items, 10 achieved consensus; the remaining 9 were presented for re-rating in round 3. No new items were added in round 2. As suggested by the Delphi panel, the response scale was changed to a modified form of Miller's response scale. The final response scale was an adaptation of Miller's triangle for assessing competency: "don't know", "know", "know how", "show how", and "do".17 Miller's triangle, which may be applied as part of an Objective Structured Clinical Examination (OSCE) for rater assessment, was adapted for use in this self-assessment questionnaire. Each item under all domains except "Generic competence" had this response scale. The domain "Generic competence" had the following response scale: "strongly agree", "agree", "neither agree nor disagree", "disagree", and "strongly disagree".

At the end of round 3, only 1 of the 9 items achieved consensus; the remaining 8 were excluded. As in round 2, no new items were added. The Delphi process thus concluded at the end of 3 rounds.

The final questionnaire prepared after completion of the Delphi process is presented in Figure S1. All excluded items that did not achieve consensus are listed in Table S1. The questionnaire had 7 domains and 74 items. The domains (number of items) were "Public health – epidemiology and research methodology" (13), "Public health – biostatistics" (6), "Public health administration at PHC level" (17), "Family medicine" (24), "Cultural competencies" (3), "Community development and advocacy" (2), and "Generic competence" (9). Each item was scored from a minimum of 1 to a maximum of 5, with a higher score indicating a better self-rated skill. Hence, the maximum and minimum possible scores for a student were 370 and 74, respectively.
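Scoring the completed questionnaire is then a matter of summing the item scores. The sketch below is ours and rests on one assumption the paper does not spell out: that the five ordered responses on each scale map to scores 1 through 5.

```python
# Assumed mapping from ordered responses to the 1 (minimum) to 5 (maximum)
# per-item scores described above; the paper does not state it explicitly.
MILLER_SCORES = {"don't know": 1, "know": 2, "know how": 3, "show how": 4, "do": 5}
GENERIC_SCORES = {
    "strongly disagree": 1, "disagree": 2, "neither agree nor disagree": 3,
    "agree": 4, "strongly agree": 5,
}

def total_score(miller_responses, generic_responses):
    """Sum a student's 65 Miller-scale and 9 generic-competence responses.

    With 74 items each scored 1-5, the total ranges from 74 to 370.
    """
    return (sum(MILLER_SCORES[r] for r in miller_responses)
            + sum(GENERIC_SCORES[r] for r in generic_responses))

# Example: answering "do" and "strongly agree" throughout yields the
# maximum possible score of 370.
assert total_score(["do"] * 65, ["strongly agree"] * 9) == 370
```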

There were no significant changes made to the questionnaire after pretesting with the students.

Discussion

This is the first study, from India or worldwide, to develop a tool for competency-based evaluation of CBT in UGME. Drawing on an existing conceptual framework, we designed the preliminary draft of the questionnaire. Though various inventories of competency classifications are available,18–26 we based our tool on a conceptual framework specific to CBT.12 This draft evolved through 3 rounds of Delphi into the final version of 7 domains and 74 items.

Appropriateness of Delphi technique

With the increasing use of Delphi to address different research questions, variants of the classical Delphi have emerged, necessitating the terms "Delphi techniques" or the "Delphi approach".27,28 Fink et al prescribe a clear "decision trail" as one of the key goodness criteria for judging the credibility of evidence generated by Delphi.29

The Delphi technique is considered a structured way of assessing and synthesizing human judgment. Rowe et al state that Delphi can be used when the researcher is convinced that the technique will generate more accurate assessments and judgments than those provided by individuals.30 The Delphi technique has also been used in UGME worldwide.31–33 We resorted to the Delphi technique because our objective was to develop a comprehensive tool for competency-based evaluation of CBT.

By promising anonymity, the Delphi technique limits the inhibition that participants may face in informal group situations. It thus encourages experts to offer frank and candid opinions, which is termed "process gain".30 We doubt we could have achieved this with any other technique, given the issues of seniority and the interfering or inhibiting personality traits that are evident in face-to-face meetings of experts.

We recognize that Delphi does not offer a fool-proof solution to these issues, but it circumvents them to a great extent. Hence, we chose Delphi as one of the steps in the development of the tool. Delphi was preceded by the use of a conceptual framework, derived from a review of the literature in the field, to develop a preliminary draft. After Delphi, the tool was informally discussed with students to obtain their feedback. This was followed by psychometric analysis to assess the validity and reliability of the tool, the results of which are reported elsewhere.11

Recruitment of Delphi panel

We recruited a heterogeneous group of experts from across the country to contribute to the development of the tool, aiming to draw on all their knowledge and experience while also achieving consensus. Given that there is no prescribed minimum panel size, we recruited 8 experts.13

Data collection procedures

We adhered to all 4 essential prerequisites of a Delphi technique, namely anonymity of participants, iteration, controlled feedback, and statistical aggregation of the results of the rounds. Our panelists could modify their judgment based on feedback without being influenced by others in the group.13,30 Although anonymity is reported to cause a lack of accountability for one's views, we believe this is not a major drawback in our study, as the outcome has direct application and relevance to the practice of the experts themselves, who were chosen on the basis of their experience and expertise.

We also encouraged qualitative feedback, so as not to restrict the experts to rating the existing items in the tool. This feedback enabled us to reclassify certain items under the domains, rephrase certain items and domain names, add relevant items, and delete items that were redundant or beyond the scope of the tool.

Means of implementation

The competencies identified in the 74-item questionnaire may provide the base for the development of authentic curricula for CBT. In India, community medicine is taught from the 1st to the 7th semester; this period is divided into preclinical (1st–2nd semesters), para-clinical (3rd–5th semesters), and clinical (6th–7th semesters) phases. Competencies pertaining to CBT may accordingly be distributed over the preclinical, para-clinical, and clinical training periods, with clinical competencies under CBT covered in the 6th and 7th semesters. Practice of these competencies, under supervision, is expected during the CRRI period. The tool may find its best application at the end of the CRRI period, though it may also be administered at the end of the 7th semester.

Of the 74 competencies in the 74-item questionnaire, 41 (55%) pertained to "Family medicine" or "Public health administration at PHC level". This draws attention to the primary domains of focus for faculty of community medicine: family medicine and community health administration.34 It is their primary role to impart knowledge and skills in these domains to students, and faculty need to be sensitized and reoriented in this regard.35–37

Competency-based CBT is likely to face challenges in terms of curricula design, faculty training, student assessment, and systematic institutional change, all of which require sustained, long-term commitment.38

Limitations

There were some limitations. This is a self-rated questionnaire that captures a student's perception of his/her own competencies, and self-assessment does not currently figure in the UGME scenario in India. We propose to develop an instructor-/teacher-rated version of this tool that can be routinely employed in tandem with existing assessment methods.

Conclusion

This tool can be seen as a sign of entry into the realm of competency-based assessment in community-based UGME in India. It is a valuable addition to the existing assessment methods in India and can guide experts in the need-based design of curricula and teaching/training methodology.

Acknowledgments

The authors thank the Department for International Development (DFID), UK, for funding the Global Operational Research Fellowship Programme at the International Union Against Tuberculosis and Lung Disease (The Union), Paris, France, in which HDS works as an operational research fellow. The study was conducted using available resources; therefore, no separate budget was required. The authors also thank DFID, UK, and La Fondation Veuve Emile Metz-Tesch (Luxembourg) for funding this open-access publication. The funders had no role in the design or conduct of the study. The contents of this paper do not necessarily reflect the views of the participating institutions or The Union.

Disclosure

The authors report no conflicts of interest in this work.

References

1. Deutsch SL, Noble J, editors. Community-Based Teaching: A Guide to Developing Education Programs for Medical Students and Residents in the Practitioner's Office. Philadelphia, PA: ACP Press; 1997.

2. World Health Organization (WHO). Community-based education of health personnel: report of a WHO study group. WHO Technical Report Series 746. Geneva: World Health Organization; 1987.

3. Magzoub ME, Schmidt HG. A taxonomy of community-based medical education. Acad Med. 2000;75(7):699–707.

4. Kumar R. Development of community medicine sub-specialities. Indian J Community Med. 2005;30(2):43.

5. Dongre AR, Deshmukh PR, Gupta SS, Garg BS. An evaluation of ROME Camp: forgotten innovation in medical education. Educ Health (Abingdon). 2010;23(1):363.

6. Talapalliwar M. ROME – putting theory into practice: an experience from a medical college of Maharashtra. Health Agenda. 2014;2(2):74–76.

7. Long DM. Competency-based residency training: the next advance in graduate medical education. Acad Med. 2000;75(12):1178–1183.

8. WHO Regional Office for South East Asia. Improving the Teaching of Public Health at Undergraduate Level in Medical Schools – Suggested Guidelines: Report of a Review Meeting of the Expert Group, Kathmandu, Nepal, 10–12 August 2010. New Delhi: WHO Regional Office for South East Asia; 2011. Available from: http://apps.searo.who.int/PDS_DOCS/B4674.pdf. Accessed March 13, 2017.

9. Medical Council of India (MCI). Vision 2015. New Delhi: Medical Council of India; 2011.

10. Torbeck L, Wrightson AS. A method for defining competency-based promotion criteria for family medicine residents. Acad Med. 2005;80(9):832–839.

11. Shewade HD, Jeyashree K, Kalaiselvi S, Palanivel C, Panigrahi KC. Assessment of community-based training of medical undergraduates: development and validation of a competency-based questionnaire. Educ Health. 2017. In press.

12. Ladhani Z, Scherpbier AJ, Stevens FC. Competencies for undergraduate community-based education for the health professions – a systematic review. Med Teach. 2012;34(9):733–743.

13. Linstone HA, Turoff M, editors. The Delphi Method: Techniques and Applications. Reading, MA: Addison-Wesley Publishing Company; 1975.

14. Cuhls K. Delphi method. 2000. Available from: http://www.unido.org/fileadmin/import/16959_DelphiMethod.pdf. Accessed January 23, 2017.

15. Hsu CC, Sandford BA. The Delphi technique: making sense of consensus. Pract Assess Res Eval. 2007;12(10):1–8.

16. Yousuf MI. Using experts' opinions through Delphi technique. Pract Assess Res Eval. 2007;12(4):1–8.

17. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 Suppl):S63–S67.

18. Hicks PJ, Schumacher D, Guralnick S, Carraccio C, Burke AE. Domain of competence: personal and professional development. Acad Pediatr. 2014;14(2 Suppl):S80–S97.

19. Guralnick S, Ludwig S, Englander R. Domain of competence: systems-based practice. Acad Pediatr. 2014;14(2 Suppl):S70–S79.

20. Ludwig S. Domain of competence: professionalism. Acad Pediatr. 2014;14(2 Suppl):S66–S69.

21. Benson BJ. Domain of competence: interpersonal and communication skills. Acad Pediatr. 2014;14(2 Suppl):S55–S65.

22. Burke AE, Benson B, Englander R, Carraccio C, Hicks PJ. Domain of competence: practice-based learning and improvement. Acad Pediatr. 2014;14(2 Suppl):S38–S54.

23. Englander R, Cameron T, Ballard AJ, Dodge J, Bull J, Aschenbrener CA. Toward a common taxonomy of competency domains for the health professions and competencies for physicians. Acad Med. 2013;88(8):1088–1094.

24. Englander R, Carraccio C. Domain of competence: medical knowledge. Acad Pediatr. 2014;14(2 Suppl):S36–S37.

25. Schumacher DJ, Englander R, Hicks PJ, Carraccio C, Guralnick S. Domain of competence: patient care. Acad Pediatr. 2014;14(2 Suppl):S13–S35.

26. Frank JR, Danoff D. The CanMEDS initiative: implementing an outcomes-based framework of physician competencies. Med Teach. 2007;29(7):642–647.

27. Rowe G, Wright G. The Delphi technique: past, present, and future prospects – introduction to the special issue. Technol Forecast Soc Change. 2011;78(9):1487–1490.

28. Powell C. The Delphi technique: myths and realities. J Adv Nurs. 2003;41(4):376–382.

29. Fink A, Kosecoff J, Chassin M, Brook RH. Consensus methods: characteristics and guidelines for use. Am J Public Health. 1984;74(9):979–983.

30. Rowe G, Wright G, Bolger F. Delphi: a reevaluation of research and theory. Technol Forecast Soc Change. 1991;39(3):235–251.

31. Almoallim H. Determining and prioritizing competencies in the undergraduate internal medicine curriculum in Saudi Arabia. East Mediterr Health J. 2011;17(8):656–662.

32. Syme-Grant J, Stewart C, Ker J. How we developed a core curriculum in clinical skills. Med Teach. 2005;27(2):103–106.

33. Kiessling C, Dieterich A, Fabry G, et al. Communication and social competencies in medical education in German-speaking countries: the Basel consensus statement. Results of a Delphi survey. Patient Educ Couns. 2010;81(2):259–266.

34. Gouveia EA, Braga TD, Heráclio SA, Pessoa BHS. Validating competencies for an undergraduate training program in rural medicine using the Delphi technique. Rural Remote Health. 2016;16(4):3851.

35. Shewade HD, Palanivel C, Jeyashree K. Training medical undergraduates in the core disciplines of community medicine through community postings – an experience from India. Fam Med Community Health. 2016;4(3):45–50.

36. Shewade HD, Jeyashree K, Chinnakali P. Reviving community medicine in India: the need to perform our primary role. Int J Med Public Health. 2014;4(1):29–32.

37. Dath D, Iobst W. The importance of faculty development in the transition to competency-based medical education. Med Teach. 2010;32(8):683–686.

38. Harris P, Snell L, Talbot M, Harden RM. Competency-based medical education: implications for undergraduate programs. Med Teach. 2010;32(8):646–650.

Supplementary materials

Figure S1 The 74-item competency-based self-assessment questionnaire for assessing community-based training of undergraduate medical students


Abbreviations: AFB, acid fast bacillus; IFA, iron folic acid; L, laboratory; NCD, non-communicable disease; P, presumptive; PHC, primary health center; RNTCP, Revised National Tuberculosis Control Program; NVBDCP, National Vector Borne Diseases Control Program; RTI, reproductive tract infection; S, syndromic; SN, serial number; STI, sexually transmitted infection; WHO, World Health Organization.

Table S1 Item(s) that did not attain consensus after 2 rounds of Delphi, and hence, were excluded from the questionnaire (n=26)


Abbreviations: IPPI, intermittent pulse polio immunization; OPD, outpatient department; PHC, primary health center; RKS, Rogi Kalyan Samiti; SN, serial number; WHO, World Health Organization.
