
Pilot Testing a Series of Value-Based Care Training Courses

Authors Kovach JV , Obanua F, Hutchins HM

Received 26 January 2022

Accepted for publication 4 April 2022

Published 11 April 2022 Volume 2022:13 Pages 319—322

DOI https://doi.org/10.2147/AMEP.S360027

Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 2

Editor who approved publication: Dr Md Anwarul Azim Majumder



Jamison V Kovach,1 Faith Obanua,1 Holly M Hutchins2

1Project Management Program, University of Houston, Houston, TX, USA; 2College of Education, University of North Texas, Denton, TX, USA

Correspondence: Jamison V Kovach, University of Houston, 4730 Martin Luther King Blvd., Room 300, Houston, TX, 77204, USA, Tel +1 713 743 1704, Fax +1 713 743 4032, Email [email protected]

Purpose: The US healthcare system currently emphasizes volume of services over value. To facilitate changing to a value-based care model, one managed care organization developed a series of online courses designed to teach clinicians value-based care principles and practices. A pilot test was conducted to obtain feedback regarding course content and design, so the courses could be revised prior to their launch.
Patients and Methods: A representative cross section of the courses’ target audience (n = 50) was recruited to participate in the pilot test, and data were collected through an online survey. Descriptive statistics were calculated for responses to closed-ended survey questions, and affinity analysis was performed on responses to open-ended survey questions. Issues identified were then categorized as urgent/not urgent with respect to course revision.
Results: Nearly a quarter (24%) of respondents indicated that the course contained incorrect or misleading information. Other responses noted inconsistencies in course content, eg, misspelled or mispronounced words, slides that were hard to read, etc.
Conclusion: This study demonstrates how a pilot test was used as part of a formative assessment to improve course content and design. During a pilot test, attention should be paid to making it easy for participants to provide feedback.

Keywords: medical education, online survey, trial, feedback

Introduction

The prevalent payment model for healthcare in the US emphasizes volume of services over value. As a result, the US spends more on healthcare than any other country ($3.8 trillion in 2019), but its outcomes rank near the bottom among developed nations.1,2 Managed care organizations (MCOs) work to control the cost of healthcare services while maintaining high-quality patient care. To achieve the goal of “better care and better outcomes at lower cost,” continued efforts are needed to move healthcare delivery systems, such as MCOs, to fully embrace value-based care models that focus on “providing the right care at the right time at reasonable costs.”3 However, accomplishing this requires that clinicians make fundamental changes to their daily practices, which is challenging at best without the necessary supports in place.4

To support clinicians in using value-based care principles and practices, one MCO partnered with the University of Houston to develop a series of online continuing medical education courses. These courses were designed to teach clinicians about the value-based care model. The series consisted of six stand-alone courses and a capstone project that utilized presentations, videos, readings, quizzes, and summative assignments to train clinicians to provide holistic, collaborative care that improves health outcomes while reducing costs. Each course contained 2–3 modules, and each module was designed to be completed in approximately 2–3 hours. After passing all quizzes and peer-reviewed summative assignments in a course, participants received a certificate. To ensure efficacy, the series of courses was pilot tested with a representative cross section of the courses’ target audience.5 This study demonstrates how this pilot test was conducted to improve the content and design of this course series prior to its launch.

Materials and Methods

Pilot testing is a valuable part of the instructional design process.5 To pilot test the series of value-based care courses, a survey was conducted to obtain feedback that course designers could use to revise the course before it was launched. The responses from this survey were analyzed to identify common themes and ideas regarding aspects of the course content that needed revision. This study was approved by the Institutional Review Board at the University of Houston where this research was performed.

To obtain meaningful feedback from a representative cross section of the courses’ target audience, an online survey was developed to solicit input regarding the content and design of each course. This survey consisted of the following questions:

  1. Is the course content ordered in a logical way?
  2. Did you encounter any incorrect information or any information you would consider misleading in the course?
  3. Did you encounter any information you would consider irrelevant or inappropriate in the course?

If respondents answered “yes” to any question, they were asked additional questions to identify the module in the course they had in mind and the reason for their response. For example, a “yes” response to question 2 led to the additional questions “Specify the module(s) where you think the course content is incorrect or misleading.” and “Please describe why you consider the information in this module(s) to be incorrect or misleading.”
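The branching logic described above can be expressed as a simple skip-logic rule. The following is a hypothetical sketch, not the survey platform actually used; the function name and paraphrased question text are illustrative only.

```python
# Hypothetical sketch of the survey's skip logic: a "yes" answer to a
# closed-ended question triggers two follow-up questions asking which
# module the respondent had in mind and why. Question wording is
# paraphrased from the article, not taken from the actual survey.
def follow_ups(question_no: int, answer: str) -> list[str]:
    if answer.strip().lower() != "yes":
        return []  # "no" answers skip straight to the next question
    return [
        f"Specify the module(s) you have in mind for question {question_no}.",
        f"Please describe the reason for your response to question {question_no}.",
    ]
```

Keeping follow-up questions conditional in this way shortens the survey for respondents who report no issues, which the Discussion notes made the survey easier to complete.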

Next, a representative cross section of the courses’ target audience (n = 50) was recruited to participate in the pilot test through a solicitation email. The selection criteria included those with knowledge and expertise relevant to the course content, such as healthcare students, faculty, and other experts (focused on value-based care programs, population health, quality, etc.), as well as clinicians (ie, physicians, nurses, pharmacists, etc.) and accessibility/user interface designers. To pilot test this series of courses, participants were given one week to review the content of each course, such that one course was reviewed each week for six weeks. Additionally, the link to the online survey was sent via email to participants on the first day of the review period for an individual course. A reminder to complete the survey was sent on the last day of the review period, and a final reminder from management, recognizing participants’ time and valuable expertise, was sent the week following the review period. Responses to the online survey were provided by 39 pilot study participants (a 78% response rate), and nearly half (47%) of these respondents were clinicians.

Descriptive statistics were calculated for responses to closed-ended survey questions, and affinity analysis (organizing like ideas into groups) was performed on responses to open-ended survey questions. The aim of the latter analysis was to identify opportunities for improvement with respect to the course content and design. That is, the course designers and other representatives from the partner university, as well as those from the MCO managing the pilot study, compiled the responses obtained from the online survey and categorized groups of ideas as urgent/not urgent. Then, the designers revised the corresponding modules in a course. Finally, representatives from the MCO managing the pilot study verified that the revisions were made and that they addressed the points raised in survey responses. Hence, before the courses were launched, changes were made to reorder content and to correct course content that was misleading, irrelevant, or inappropriate. Given the nature of the data collected through the online survey, a more detailed content analysis or triangulation with data from other sources could not be performed.
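The two analysis steps above can be illustrated with a minimal sketch: tallying closed-ended "yes"/"no" answers into percentages, and grouping open-ended comments into affinity themes. This is not the authors' actual analysis code; the theme labels, sample answers, and comments below are hypothetical.

```python
# Illustrative sketch of the analysis: descriptive statistics on
# closed-ended responses, plus affinity grouping of open-ended comments.
# All data shown here are made up for demonstration purposes.
from collections import Counter, defaultdict

# Closed-ended responses: percentage of "yes" answers per question
closed_responses = {
    "Q1 content not logically ordered": ["no", "no", "no", "yes"],
    "Q2 incorrect/misleading content": ["yes", "no", "no", "yes"],
}
pct_yes = {
    question: 100 * Counter(answers)["yes"] / len(answers)
    for question, answers in closed_responses.items()
}

# Affinity analysis: organize like ideas (open-ended comments) into groups
open_responses = [
    ("minor inconsistency", "misspelled word in module 2"),
    ("substantial issue", "quiz did not score open-ended answers correctly"),
    ("minor inconsistency", "slide in module 1 is hard to read"),
]
themes = defaultdict(list)
for theme, comment in open_responses:
    themes[theme].append(comment)

# Each theme group could then be triaged as urgent/not urgent for revision
```

In practice the grouping was done by people, not software; the code only mirrors the logical structure of the analysis.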

Results

From the online survey, 45 unique responses were collected. For question 1, only a few (4%) respondents indicated that the course content was not ordered in a logical manner. For question 2, nearly a quarter (24%) of respondents indicated that the course contained incorrect or misleading information. For question 3, some (12%) respondents indicated the course contained irrelevant or inappropriate information. An example of a response to question 2 stated,

Preventative screening – The recommendations for colorectal screening recently changed. Maybe just place a link to this information in the lesson instead of the actual recommendations, because these will always be changing.

Another question 2 response stated,

The example with the patient in the US and the patient in India may have been a bit misleading. In the past year that I have worked in this [value-based care] model, I have noticed that things do not go this smoothly as far as care coordination. However, I think it’s a good comparison as far as the ideal which we are trying to achieve.

Other responses to survey questions generally fell into two common themes regarding course content: minor inconsistencies and more substantial issues. Minor inconsistencies included items such as misspelled or mispronounced words, slides that were hard to read, and a reference to a reading when a video was provided. More substantial issues included areas where more data and/or references should be provided, including references in the module’s “resource area” with the digital object identifier (DOI) link to the article; pictures that should be updated to more accurately reflect the roles and care environments in which the courses’ target audience works (eg, since the audience is not frontline nurses, pictures of nurses should show them wearing name badges and stethoscopes, not scrubs, and performing active, not passive, activities); and features of quizzes that did not work as intended (eg, responses to open-ended questions were not automatically scored correctly).

Discussion

Continuing medical education is an important part of the strategy to strengthen the healthcare workforce,6 and it is one of the many supports needed to help clinicians adopt new practices such as value-based care.7 While pilot testing is often used in medical education and beyond to measure improved learning outcomes through pre- and post-tests,7–11 pilot testing is also a valuable part of the instructional design process because it provides feedback about the efficacy of instruction to those who can revise the course before it is launched.5 In this case, the logic built into the online survey that determined when to administer open-ended questions helped to streamline the survey and made it easy for pilot study participants to complete. Using both closed-ended and open-ended survey questions made it possible to identify when there were issues with the course content and/or design, as well as where those issues were located and what they were.

Using the feedback collected through the pilot test, the organization was able to revise both minor inconsistencies and more substantial issues with the course content that had been identified as incorrect, misleading, irrelevant, or inappropriate. As recommended for online learning in the health sciences, this put learners at the center of the learning process.12 For example, small inconsistencies in the course content were corrected (eg, misspelled and mispronounced words, wrong use of acronyms, and use of photos inconsistent with content). Additionally, where possible, misrepresentation of information and/or references were removed from the course content and improvements were made to the functionality of quizzes embedded within the courses to ensure they worked as intended.

Conclusion

This study demonstrates how a pilot test was used as part of a formative assessment to improve course content and design. It included data collected from a small sample of pilot test participants and analysis was limited by the nature of the data collected through the online survey. Because larger sample sizes and/or the ability to conduct more advanced analysis may have led to different results, the findings of this study are likely not generalizable to all continuing medical education courses, especially given that this research focused on a series of value-based care courses. However, since the launch of the online value-based care course series in late 2020, more than 8000 participants have enrolled in the series, and the MCO attributes this success to the work done to pilot test this course series prior to its launch.

Participants’ feedback collected through an optional post-course survey has generally been positive. One question posed in this post-course survey, aimed at reassessing the needs of learners,7 was “What was NOT covered in the course that you wish had been included?” From approximately 70 unique responses, course participants’ suggestions included adding more information about the development, implementation, measurement, and integration of value-based care models; interviews with providers already implementing the value-based care approach; and the risks involved with value-based care. Representatives from the MCO are currently using this feedback to further enhance the courses in this series. An important next step in this line of research would be to examine whether completing this series of courses changes clinicians’ behaviors and/or improves patient health outcomes, as has generally been found to be the case for continuing medical education courses.13

Ethics

The study reported on in this manuscript was approved by the Institutional Review Board, known as the Committee for the Protection of Human Subjects, at the University of Houston where this research was performed (IRB ID: STUDY00003353). All work was carried out within the ethical standards set forth in the Helsinki and Geneva Declarations, and all participants provided informed consent.

Disclosure

The authors report no conflicts of interest in this work.

References

1. CMS. National health expenditure data; 2021. Available from: tinyurl.com/cm5jfk4. Accessed September 3, 2021.

2. OECD. Health status - key indicators; 2021. Available from: tinyurl.com/yxwvd2rc. Accessed September 3, 2021.

3. Abraham MR, McGann P. Contribution of the transforming clinical practice initiative in advancing the movement to value-based care. Ann Fam Med. 2019;17(Supp 1):S6–S8. doi:10.1370/afm.2425

4. Gwynne M, Agha Z. The physician perspective on reducing healthcare costs. Generations. 2019;43(4):24–29.

5. White BS, Branch RM. Systematic pilot testing as a step in the instructional design process of corporate training and development. Perform Improv Q. 2001;14(3):75–94. doi:10.1111/j.1937-8327.2001.tb00219.x

6. IOM. Redesigning Continuing Education in the Health Professions. Washington, D.C.: The National Academies Press; 2010.

7. Cullen MW, Geske JB, Anavekar NS, et al. Reinvigorating continuing medical education: meeting the challenges of the digital age. Mayo Clin Proc. 2019;94(12):2501–2509. doi:10.1016/j.mayocp.2019.07.004

8. Aronson L, Niehaus B, Lindow J, Robertson PA, O’Sullivan PS. Development and pilot testing of a reflective learning guide for medical education. Med Teach. 2011;33(10):e515–e521. doi:10.3109/0142159X.2011.599894

9. Dauer LT, Kelvin JF, Horan CL, St Germain J. Evaluating the effectiveness of a radiation safety training intervention for oncology nurses: a pretest–intervention–posttest study. BMC Med Educ. 2006;6(1):1–10.

10. Ono N, Kiuchi T, Ishikawa H. Development and pilot testing of a novel education method for training medical interpreters. Patient Educ Couns. 2013;93(3):604–611. doi:10.1016/j.pec.2013.09.003

11. Smith T, Williams L, Lyons M, Lewis S. Pilot testing a multiprofessional learning module: lessons learned. Focus Health Prof Educ. 2005;6(3):21–23.

12. Regmi K, Jones L. A systematic review of the factors–enablers and barriers–affecting e-learning in health sciences education. BMC Med Educ. 2020;20(1):1–18. doi:10.1186/s12909-020-02007-6

13. Cervero RM, Gaines JK. The impact of CME on physician performance and patient health outcomes: an updated synthesis of systematic reviews. J Contin Educ Health Prof. 2015;35(2):131–138. doi:10.1002/chp.21290
