
Challenges conducting comparative effectiveness research: the Clinical and Health Outcomes Initiative in Comparative Effectiveness (CHOICE) experience


Received 13 December 2013

Accepted for publication 17 January 2014

Published 3 May 2014, Volume 2014:4, Pages 1–12

DOI https://doi.org/10.2147/CER.S59136




Janna L Friedly,1,4 Zoya Bauer,2,4 Bryan A Comstock,3,4 Emily DiMango,5 Assiamira Ferrara,6 Susan S Huang,7 Elliot Israel,8 Jeffrey G Jarvik,2,4 Andrew A Nierenberg,9 Michael K Ong,10 David F Penson,11 Rebecca Smith-Bindman,12 Arthur E Stillman,13 William M Vollmer,6 Stephen M Warren,14 Chunliu Zhan,15 David Chu-Wen Hsia,15 Anne Trontell15

1Department of Rehabilitation Medicine, 2Department of Radiology, 3Department of Biostatistics, 4Comparative Effectiveness, Cost and Outcomes Research Center, University of Washington, Seattle, WA, 5Columbia University Medical Center, New York, NY, 6Kaiser Foundation Research Institute, Oakland, 7Division of Infectious Diseases and Health Policy Research Institute, University of California Irvine School of Medicine, Irvine, CA, 8Harvard Medical School, Pulmonary and Critical Care, Allergy and Immunology, Brigham and Women's Hospital, 9Massachusetts General Hospital, Boston, MA, 10Division of General Internal Medicine & Health Services Research, Department of Medicine, David Geffen School of Medicine at UCLA, Los Angeles, CA, 11Vanderbilt University and Tennessee Valley VAHCC, Nashville, TN, 12Departments of Radiology and Biomedical Imaging, Health Policy, Epidemiology and Biostatistics, University of California, San Francisco, San Francisco, CA, 13Emory University, Atlanta, GA, 14Department of Plastic Surgery, Division of Clinical and Translational Research, NYU Langone Medical Center, New York, NY, 15Agency for Healthcare Research and Quality, Rockville, MD, USA

Abstract: The Clinical and Health Outcomes Initiative in Comparative Effectiveness (CHOICE) program, which includes 12 ongoing comparative effectiveness research (CER) trials funded by the Agency for Healthcare Research and Quality under the American Recovery and Reinvestment Act of 2009, has had firsthand experience in dealing with the unique challenges of conducting CER since the trials started in the fall of 2010. This paper will explore the collective experience of the CHOICE program and discuss common challenges and successes the CHOICE investigators have experienced conducting CER in the United States. The specific aims of this paper are to describe the common features of the CHOICE award studies (observational studies and trials), to summarize the strategies undertaken to address the challenges in conducting comparative effectiveness pragmatic trials and observational studies from the patient, physician, and administrative perspectives, and to provide recommendations for improving the efficiency and feasibility of conducting prospective CER studies in the future.

Keywords: comparative effectiveness research, underserved patients, pragmatic clinical trials


Background

In recent years, the United States (US) has made significant investment in comparative effectiveness research (CER) with the goal of providing rigorous evidence on the relative effectiveness of alternative methods of preventing, diagnosing, treating, and managing medical conditions or improving the delivery of care.1–3 In an effort to discover what works best in real-world practice and not just for carefully selected patients in clinical trials done under tightly controlled conditions, CER often relies upon retrospective observational studies that examine existing insurance claims, medical records, and clinical registries. While large, readily available, and relatively inexpensive to process, these nonrandomized data sources may be confounded or biased in their findings, despite significant progress in analytical methodologies to minimize potential unknown influences on outcomes.

Investigating real-world comparative effectiveness would benefit from prospective studies conducted with intermediate levels of experimental control and randomization. Such studies generally enroll patients with diverse medical and demographic characteristics who are seen in routine clinical practice and are not specifically selected for participation, and are therefore called practical or pragmatic clinical trials.4,5 Although not as strictly controlled as traditional randomized clinical trials, pragmatic CER trials can provide strong evidence about the comparative effectiveness of one intervention versus another in real-world settings and for targeted populations. Pragmatic trials, however, pose unique logistical challenges along with distinctive benefits, given the diversity of patients and clinical practices being studied. For example, pragmatic trials are less tightly controlled than traditional randomized clinical trials and require particular attention to potential confounders, but the inclusiveness they afford allows for a rich understanding of adoptability and expected impact under routine conditions, and thus can promote widespread application of effective interventions.6

The Institute of Medicine (IOM) has long recognized that health care costs are substantially higher in the US than in other advanced countries without a corresponding improvement in quality of care.3 Given that both the underutilization of effective treatments and the overutilization of unnecessary and costly treatments contribute to these soaring costs and lower quality of care, the IOM has strongly recommended the implementation of “learning health care systems” to address these issues.7 The rapid-learning health care system is an important strategy described by the IOM to integrate pragmatic research findings into clinical practice quickly and to develop research that is driven by clinical practice needs. It leverages advances in health-data infrastructure, such as electronic medical record (EMR) systems, together with pragmatic CER trials to promote improvements in large health care systems.

Notwithstanding their relevance and appeal, clinical trials for CER face many challenges. First, because CER trials often compare new interventions to existing interventions that already have at least some demonstrated effectiveness, large numbers of patients are often necessary to uncover differences between interventions, and such trials therefore require substantial financial resources. CER trials usually compare treatments that are already available, and often widely used clinically, despite either limited effectiveness data or limited data demonstrating superiority over newer or other existing treatments that may be less expensive or less invasive. Given this, patients (and treating physicians) are often reluctant to participate in trials in which patients may be randomized to an alternative treatment with which they are unfamiliar. Second, private pharmaceutical and device manufacturers have limited, if any, incentives to invest in this type of research. More often, private industry is interested in rigorous, tightly controlled trials that are less generalizable and are designed to demonstrate maximal treatment effects. Despite the potential conflicts of interest associated with industry-sponsored research, it remains an important source of research funding. As a result, CER trials must turn to funding from governmental or health care organizations interested in optimizing the delivery and quality of health care for their covered populations, and this funding is often quite limited. Third, given that these trials seek to answer real-world clinical questions about optimal treatments, they by definition include a broad range of populations, often disadvantaged, vulnerable, or medically complex. Underserved or underrepresented groups include women, children, minorities, the elderly or disabled, rural or inner-city residents, the chronically ill and those nearing the end of life, those with low income, and those without insurance or adequate insurance. Underserved and disadvantaged populations are particularly important to target in CER, given that they suffer from disparities in health care quality and outcomes and are rarely represented in conventional clinical trials.8

Pragmatic studies in CER face many challenges in including underserved or disadvantaged populations, in addition to logistical challenges with study conduct.9 Lack of insurance coverage or difficulty in paying copayments or deductibles can be a significant barrier to participation and retention in CER trials.10 Other vulnerabilities of the underserved, whether due to lack of scientific study, advanced age, complicated health concerns, financial difficulties, or difficulty adhering to treatment regimens because of poor health, education, and/or literacy, make them likely to respond differently to treatments; these are the very factors that lead to their exclusion from conventional clinical trials. Therefore, adequate participation by underserved populations is important for understanding the heterogeneity of treatment effects, both in general and specifically for underserved populations.

The Clinical and Health Outcomes Initiative in Comparative Effectiveness (CHOICE) awards are 3-year projects that include both comparative effectiveness randomized trials as well as observational cohort studies or registries. The CHOICE program, which includes 12 ongoing CER trials funded by the Agency for Healthcare Research and Quality (AHRQ) under the American Recovery and Reinvestment Act (ARRA) of 2009, has had firsthand experience in dealing with these unique challenges since the trials started in the fall of 2010. This paper explores the collective experience of the CHOICE program and discusses common challenges and successes the CHOICE investigators have experienced conducting CER in the US. The specific aims of this paper are to describe the common features of the CHOICE award studies (observational studies and trials), to summarize the strategies undertaken to address the challenges in conducting comparative effectiveness pragmatic trials and observational studies from the patient, physician, and administrative perspectives, and to provide recommendations for improving the efficiency and feasibility of conducting prospective CER studies in the future.

Materials and methods

A CHOICE investigator (JF), in collaboration with key AHRQ staff overseeing the CHOICE awards, performed a programmatic evaluation of the 12 ongoing CHOICE projects to identify common experiences in conducting CER. During an annual AHRQ meeting of the CHOICE awardees, the CHOICE principal investigators (PIs) and the program directors for each project discussed common issues encountered in conducting this type of research. Several common themes emerged at this meeting, including recruitment, retention, logistical, and institutional review board (IRB)/regulatory challenges. Using the information gathered at this meeting, we developed two online questionnaires – one for the CHOICE PIs and one for the program directors – to elicit qualitative and quantitative data on the challenges and successes of conducting the proposed research, particularly the common challenges identified at the meeting. The questionnaires were developed using an iterative process with feedback from each of the CHOICE PIs as well as AHRQ staff to ensure content relevance and validity. All 12 CHOICE PIs and their respective program directors completed the online surveys during a 2-month period in the spring of 2013. Data were collected using REDCap (Research Electronic Data Capture), an online data-collection system hosted by the Data Coordinating Center for the BOLD (Back Pain Outcomes Using Longitudinal Data) project (University of Washington, Seattle, WA).11 From these data, we identified a number of common barriers to conducting the research, as well as strategies employed to overcome them, which are discussed here.
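For context on the data-collection mechanics, the sketch below shows how responses stored in a REDCap project can be exported programmatically through REDCap's standard API. The instance URL, API token, and survey field name are placeholders, and the snippet illustrates the general mechanism only; the paper does not describe the evaluation's actual export workflow.

```python
# Minimal sketch of a REDCap record export via the standard REDCap API.
# The URL, token, and field name below are placeholders, not values from
# the CHOICE evaluation.
from collections import Counter

import requests

REDCAP_URL = "https://redcap.example.edu/api/"  # hypothetical instance
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"        # project-specific token

payload = {
    "token": API_TOKEN,
    "content": "record",  # export stored records (here, survey responses)
    "format": "json",     # csv and xml are also supported
    "type": "flat",       # one row per record
}

response = requests.post(REDCAP_URL, data=payload)
response.raise_for_status()
records = response.json()  # list of dicts, one per respondent

# Tally a hypothetical categorical question about recruitment barriers.
counts = Counter(r.get("primary_recruitment_barrier", "") for r in records)
print(counts.most_common())
```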

Results

Each of the 12 CHOICE PIs and program directors completed the questionnaires (100% response rate). Ten of the CHOICE projects consisted of randomized trials, one was an observational cohort study, and one included both a randomized trial and an observational study. These projects are summarized in Table 1. All of the CHOICE projects had either partially (n=7) or completely (n=5) met the original goals of the initially proposed research at the time of this evaluation. However, half reported having to modify the project aims, study design, and recruitment goals in order to complete the projects within the allotted 3-year time frame of the grant. At the time of this evaluation, only four of the CHOICE PIs indicated that they had secured funding to continue or extend the CHOICE project beyond the 3-year funding period. Only two of the eight investigators who had not yet obtained grant funding to continue these projects stated that they did not plan to apply for further grant funding.

Table 1 Clinical and Health Outcomes Initiative in Comparative Effectiveness (CHOICE) comparative effectiveness research trials at a glance
Abbreviations: MRSA, methicillin-resistant Staphylococcus aureus; ED, emergency department; CT, computed tomography; SPECT MPI, single-photon emission computed tomography myocardial perfusion imaging; vs, versus.

Each of the CHOICE projects included underserved, underrepresented, or disadvantaged populations, ranging from women, ethnic minorities, older adults, and children to people with multiple medical and mental health comorbidities. Nine of the 11 CER trials specifically targeted inclusion of women, and all met or exceeded their targets. Only one study specifically targeted inclusion of children under the age of 17 years, and this study nearly met its recruitment goal for this population (enrolled 44%, targeted 50%). Five studies specifically targeted inclusion of older adults, and all met their recruitment goals for this population. Nine of the trials targeted inclusion of Hispanic and African-American patients, and only three did not meet their initial recruitment goals for these patients. CHOICE studies also targeted inclusion of patients of low-income status (n=4), uninsured patients (n=5), and medically complex patients (n=7).

The majority (n=9) of the studies involved extra medical encounters, procedures, or time outside usual care for patients to participate in the trial. Five of the trials required office visits specifically for the research study, five required a diagnostic test, five required at least one invasive procedure (all of these involved blood tests during the study, and one also required a pulmonary function test), and five required at least one noninvasive procedure. Eight of the studies required significant time commitments from individual patients to complete study questionnaires, the intervention itself, and a wide range of objective outcome measurements.

PIs and program managers identified a number of major recruitment and retention barriers to completing the proposed studies, as well as logistical challenges related to study conduct. The PATIENT (Promoting Adherence to Improve Effectiveness of Cardiovascular Disease Therapies) trial, although a randomized controlled trial, did not require patient recruitment or informed consent, as it was a trial of automated medication-refill reminders without any additional requirements for patients to adhere to study procedures or protocol. For this reason, PATIENT did not experience many of the challenges with recruitment and retention discussed in this manuscript.

Recruitment barriers

Half of the PIs cited overestimation of the number of eligible patients at the chosen recruiting sites as a moderate-to-major barrier to achieving recruitment goals, and 42% cited the complexity of the prescreening process for identifying eligible patients (Table 2). Despite attempts to be as inclusive as possible in these CER trials, investigators still found it challenging to identify patients who met the inclusion criteria. Research program managers indicated the following as major barriers: patients not wanting to participate in research (seven of 12), patients not wanting to be randomized (six of 12), and inability to provide enough financial incentive to patients (four of 12). For example, patients without insurance, or who could not afford the copays or travel expenses associated with study-related visits, were often unable to participate. This was particularly true at sites where IRBs did not allow these costs to be covered through research funds, due to ethical concerns about the potential for coercion. One major concern noted by the investigators was that the inability to recruit under- or uninsured patients created potential recruitment bias that skewed the population being studied.

Table 2 Most frequently cited barriers and strategies employed for recruitment
Abbreviation: EMR, electronic medical record.

Each of the CHOICE projects also identified additional regulatory constraints that limited their ability to recruit disadvantaged populations. One study noted that its institutional IRB raised substantial concerns that an uninsured patient might incur additional expenses if randomized to a procedure requiring additional clinical visits or care that the patient could not afford. In this case, the concern was that if a patient were randomized to receive a diagnostic test considered the current standard of care but potentially less effective than others, that patient might have to undergo a subsequent follow-up test to appropriately diagnose their condition, and thereby incur additional expense not covered by the study.

Another important barrier identified by the CHOICE program managers was physician unwillingness to allow their patients to participate in the studies. Four projects indicated that treating physicians did not want their patients randomized to alternative treatments available within the current standard of care for a variety of reasons, including preference for one of the commonly used treatments despite a lack of definitive scientific evidence, perceived burden or risk to patients from participating, and potential financial conflicts of interest if the intervention being studied reimburses less favorably and provider revenue decreases. PIs and program managers also identified reluctance among recruiting physicians to participate if the study required them to deviate from their usual clinical practice or if participation could decrease clinical productivity. In addition, many of the physicians practiced in nonacademic settings, where there was little professional incentive to participate in recruitment for the studies, particularly if clinical productivity was affected. Academic centers typically showed greater willingness to participate, given the contribution to science and the potential for improved health care; however, these clinical practices may be less generalizable to the community. The projects that did not meet their goals for recruiting underserved, underrepresented, or disadvantaged patients indicated that the added time and travel burden of the study procedures was a barrier to participation. These barriers led to changes in reimbursement protocols for travel expenses to better accommodate those patients.

In addition, several of the projects were challenged by rapid changes in health care that necessitated modifying the original protocol to keep the studies clinically relevant and feasible. For example, the LESS (Lumbar Epidural Steroid Injections for Spinal Stenosis) trial faced sudden drops in recruitment due to an unanticipated national outbreak of meningitis associated with the treatment being studied (epidural steroid injections), whereas some participants in the Bipolar CHOICE trial had decreased incentive to participate when the treatment medication became generic and therefore affordable without participation in the study.

Effective recruitment strategies

Physician advocacy and active participation in recruitment were cited by 75% of the respondents as an important strategy to improve recruitment; likewise, 67% indicated the importance of having enthusiastic and proactive research coordinators. Many of the investigators selected recruiting sites based on expected physician support for the study. Active participation of the site PIs in operational calls and meetings also predicted site success in recruitment.

The CHOICE PIs and program directors reported that the reluctance of some treating physicians to participate required education about the importance of the research, flexibility in the protocol to accommodate physicians’ needs, and, at times, recruitment from alternative clinical sites to meet recruitment goals. Five of the CHOICE studies increased the number of recruiting sites in order to meet recruitment goals. Most of these added 25%–50% more sites, but one study (LESS) more than doubled its number of recruiting sites (from 6 to 16).

Given the 3-year funding period for these ARRA grants, most investigators found that they quickly needed to adjust their budgets and devote additional resources toward opening new recruiting sites and hiring additional staff to enhance recruitment strategies. High staff turnover and the lack of existing research infrastructure at recruiting sites made it difficult to quickly ramp up recruitment when sites encountered slower-than-expected enrollment. Five of the investigators employed a per-subject reimbursement scheme so that more resources could be devoted to sites that met recruitment goals rather than to less productive sites. Although this reimbursement strategy allowed the investigators to reallocate resources to the high-performing sites, it made recruitment at the smaller sites even more challenging by limiting the resources available to address site-specific recruitment obstacles.

We identified a number of different strategies employed by the CHOICE studies to encourage participation of disadvantaged patients in the trials. Most of the studies were designed to recruit eligible patients within an integrated health system, so the patients by definition had health insurance coverage (n=5), or relied on insurance to cover the costs of the study treatments, which were part of usual medical care (n=5). Three of the five studies specifically targeting inclusion of uninsured patients were able to enroll them by paying for the study medication through the research budget or through arrangements with clinical sites to provide charity care for these patients. Several studies noted that uninsured and/or low-income patients welcomed participation in the trial because it offered a significant advantage in terms of receiving medical care or related services that they otherwise could not obtain. All of the research studies identified specific nonmonetary benefits of trial participation that affected recruitment of disadvantaged patients. Four of the studies cited access to care that would not otherwise have been available (eg, ongoing psychiatry visits for bipolar disorder, home inspections for allergens, earlier access to procedures or treatments, and access to dietary and postpartum lifestyle counseling).

Nine of the 11 clinical trials provided patients with some form of financial reimbursement for participation, although the amount and type varied tremendously depending on the nature of the study. Five studies provided patients with a set amount of money (either check or gift card) for participation, and four reimbursed specific expenses associated with participation (including travel and other incidental costs). Two studies offered no financial incentives, as these trials were minimal risk or involved interventions at the health system level rather than the patient level. Such minimal-risk, system-level interventions are increasingly common in pragmatic CER trials, and it will be important to consider developing standards for patient reimbursement in these types of trials.

In addition, several studies were comparative effectiveness trials of treatments potentially safer than usual clinical care, so participation gave patients a chance to receive a lower-risk treatment. For example, in one study there was a 66% chance of being randomized to receive an ultrasound rather than the usual practice of a computed tomography scan with its radiation exposure. Only one study specifically tailored its payment plan to cover copays, coinsurance, or medication costs depending on insurance availability and income status.
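To make the allocation ratio concrete, the following is a minimal sketch of a 2:1 permuted-block randomization scheme that gives each patient a 2-in-3 (66%) chance of assignment to ultrasound; it is purely illustrative and is not the randomization procedure of any CHOICE trial.

```python
# Illustrative 2:1 permuted-block randomization (not any CHOICE trial's
# actual procedure): each block of three holds two "ultrasound" slots and
# one "ct" slot, so every patient faces a 2-in-3 chance of ultrasound.
import random


def two_to_one_assignments(n_patients, seed=None):
    """Return n_patients assignments in shuffled blocks of three."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_patients:
        block = ["ultrasound", "ultrasound", "ct"]
        rng.shuffle(block)  # permute within the block
        assignments.extend(block)
    return assignments[:n_patients]


print(two_to_one_assignments(12, seed=42))  # roughly two-thirds "ultrasound"
```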

Retention barriers

Despite the use of EMRs in most of the studies, loss of contact with patients due to inaccurate contact information was cited as one of the most important barriers to long-term follow-up (Table 3). In addition, each of these studies targeted AHRQ priority populations, including a wide variety of vulnerable and understudied groups. The studies were specifically designed to answer important research questions related to the elderly and to patients with mental illness, cancer, or significant cardiopulmonary disease. Including these vulnerable patients required creativity, tenacity, and flexibility on the part of the research teams to maintain adequate follow-up rates. Common challenges across the studies included maintaining contact with patients who moved frequently, had unstable social situations, had limited financial resources resulting in periodic discontinuation of phone service, or had significant comorbid disease that made complex follow-up infeasible or a low priority. Each of the studies cited challenges with long-term follow-up, particularly when the study required contact with patients outside their usual medical appointments or care.

Table 3 Most frequently cited barriers and strategies employed for retention
Abbreviation: EMR, electronic medical record.

One unanticipated challenge noted by several of the studies was loss of insurance coverage during the trial, which forced patients to disenroll either because they could no longer receive care at the clinic or facility or because they were unable to pay for study treatments typically covered by insurance.

Effective strategies to increase retention

The CHOICE researchers developed a number of effective strategies to overcome the common retention challenges they faced. Nearly all of the researchers found that delivering a clear message to the patients during the screening/recruitment process regarding the importance of the research and clinical questions being addressed was the single most effective strategy employed. Other effective strategies employed included offering flexibility in terms of methods of contact for the subjects (ie, mail, phone/cell phone, Internet, email) as well as flexibility in research assistant availability (including evening and weekend hours and/or using direct data entry through the use of tablet computers). Another important strategy was to use home visits, which offset lack of funds or access to transportation. Consistently, the CHOICE awardees identified that developing the relationship between the research assistant and the study patient was an important factor that influenced the retention of patients in the studies. Although half of the studies found that monetary incentives for the patients were either somewhat or very effective, the other half felt that this was not an effective strategy to encourage follow-up, particularly if the study required numerous in-person visits, carried extra burden to the patients, or required longer-term follow-up. Investigators either were unable to provide enough incentive due to IRB concerns that higher reimbursement could be considered coercive, were unable to vary the amount of reimbursement based on patient resources (ie, insurance coverage), or did not have adequate funds in the budget to allow for increases in payments. Studies that included flexible methods for paying for study treatments for those with financial barriers were better able to retain patients with unstable health care coverage.

Other effective strategies for retention included coordinating follow-up with routine medical care already being received, conducting careful surveillance of the EMR to identify changes in residence or contact information, and having participants identify family members, friends, or other contacts who could help locate them if they could not be reached directly. Data-collection processes that could be linked to the EMR and integrated with medical care were cited by several investigators as more effective methods for retaining subjects and collecting long-term outcomes. For example, in the GEM (Diabetes Prevention Strategies in Women with Gestational Diabetes Mellitus) study, the primary outcome (postpartum weight) was collected directly through the EMR for 97% of the women in the study. However, collecting patient-reported outcomes (PROs) through current EMR systems was not feasible for the majority of patients in the CHOICE studies.

Logistical and data-analysis challenges

Given the nature of these large, pragmatic trials, most of the investigators noted substantial challenges due to site variations in clinical practice (Table 4). These variations made it difficult to standardize recruitment strategies and the interventions without prohibitively impacting usual clinical care. They also made it more difficult to determine the effectiveness of treatments, and raised some concern among the CHOICE investigators about the generalizability of results to broader populations. For example, in the LESS trial, clinical sites varied in injection technique, including the choice of steroid medication and the dose and volume of injectate, which theoretically could lead to differences in outcomes. In the PATIENT trial, one component of the intervention involved sending email reminders to patients who had not filled their medication prescriptions in response to an automated reminder call; each site varied in the formatting and content of the emails and in its internal processes for delivering the intervention. Balancing the need for protocol flexibility to accommodate these significant site variations in clinical practice against the need for meaningful assessment of the chosen outcomes required careful consideration in each of the CHOICE studies. These adaptations were necessary to ensure organizational buy-in, and they also mimicked the natural variation that would be expected in real-world implementation of the intervention. Such flexibility is an essential characteristic of pragmatic trials that allows interventions to be carried out and sustained in real-world settings.

Table 4 Most frequently cited logistical challenges and effective strategies
Abbreviations: PRO, patient-reported outcome; EMR, electronic medical record.

Most of the CHOICE studies used PROs to measure a variety of outcomes. Collecting PROs required substantial research coordinator time and resources that were only available within the confines of research unless PRO collection was integrated into routine clinical care through EMR data capture. Another challenge faced by the CHOICE investigators was obtaining outside medical records when participating clinics did not have EMRs or when patients sought care outside the health care system in which the study was being conducted. Investigators often found that obtaining complete medical record data was prohibitively challenging or required significant staff time and money.

At the time of the program evaluation, none of the projects had completed data analysis, and four had not yet begun to analyze their study data. Five projects anticipated major challenges in accounting for site differences in study protocol or predictors of outcomes, while four reported logistical challenges with merging EMR data from different health systems. These four projects required extensive validation of methods for merging data sets and for capturing critical data elements from the EMRs, despite the theoretical availability of compatible data.
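As a rough illustration of what such merging and validation can involve, the sketch below harmonizes patient-level extracts from two hypothetical EMR systems onto a common data dictionary and checks the capture of a critical data element; the file names, field names, and unit conventions are assumptions for illustration, not the CHOICE projects' actual pipelines.

```python
# Hypothetical sketch: merging patient-level extracts from two EMR systems
# after mapping each site's local field names onto a common data dictionary.
# File names, columns, and units are illustrative assumptions.
import pandas as pd

# Site A already reports weight in kilograms.
site_a = pd.read_csv("site_a_extract.csv").rename(
    columns={"pt_id": "patient_id", "enc_dt": "encounter_date", "wt_kg": "weight_kg"}
)

# Site B uses a different identifier, date field, and unit (pounds).
site_b = pd.read_csv("site_b_extract.csv").rename(
    columns={"mrn": "patient_id", "visit_date": "encounter_date"}
)
site_b["weight_kg"] = site_b["weight_lb"] * 0.453592  # harmonize units

common = ["patient_id", "encounter_date", "weight_kg"]
merged = pd.concat(
    [site_a[common].assign(site="A"), site_b[common].assign(site="B")],
    ignore_index=True,
)
merged["encounter_date"] = pd.to_datetime(merged["encounter_date"])

# Validate capture of critical data elements before any analysis.
assert merged["patient_id"].notna().all(), "missing patient identifiers"
print(f"weight captured for {merged['weight_kg'].notna().mean():.0%} of encounters")
```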

Institutional review board challenges and recommendations

Each of the CHOICE projects involved multiple recruiting sites as well as data-coordinating centers, usually necessitating the involvement of multiple IRBs (Table 5). The number of IRB approvals required per project ranged from one to 55, with a total of 125 initial IRB applications and over 266 modifications and status reports submitted across the 12 CHOICE awards. Investigators reported that obtaining initial IRB approval typically took 2–4 months once applications were submitted, and that it took 6–7 months on average from the start of the study period until the first patient was recruited. This was due in part to the IRB approval process, including the timing of review committee meetings (often held only once a month or every 2 months).

Table 5 Most frequently cited institutional review board (IRB) challenges and effective strategies
Abbreviation: CITI, Collaborative Institutional Training Initiative.

Although several projects utilized centralized IRBs or had some sites cede review to the data-coordinating center’s IRB, obtaining approvals from multiple, independent IRBs remained a challenge. Half of the projects required separate IRB approvals at the data-coordinating center and at each of the recruiting sites; only one used a fully centralized IRB (ie, all clinical sites ceding the approval process to a central IRB), while the remainder used cooperative agreements or a mix of centralized and individual IRBs. Investigators frequently cited inconsistencies, and occasionally outright conflicts, between IRBs as major barriers that required substantial investments of time and money to resolve. In addition, many of the projects experienced delays in submitting IRB materials because of other prerequisite reviews, such as clinical research committees (departmental review) and radiation-safety and financial assessment committees. These reviews also varied greatly across institutions, making standardization of IRB and study materials across sites challenging.

Other barriers included the burden of obtaining Collaborative Institutional Training Initiative (CITI) training for all ancillary staff and providers involved with the research project, particularly in nonacademic clinical settings with no incentive for participation. For example, several investigators reported that clinical staff assisting with procedures or tests on study patients were required to complete CITI training outside their work hours and without compensation for their time, which often resulted in long delays or in staff declining to participate in the research study.

Discussion

The collective and simultaneous experience of the CHOICE awards has provided a unique opportunity for clinical researchers to collaborate on their shared challenges and successes in conducting large, complex CER projects under a compressed time frame in largely understudied and vulnerable populations. These 3-year projects encompass clinically relevant pragmatic research on important conditions affecting these populations. Given the growing interest in the concept of the learning health care system, it is imperative that we improve the ability to rapidly conduct minimal-risk, pragmatic, and iterative CER trials within the context of large and multiple health care systems.

This analysis of the CHOICE projects complements prior work documenting similar challenges in developing infrastructure to support the conduct of CER, including privacy concerns and the diversity of CER studies, which require informatics platforms that are highly customized yet standardized across studies.12,13 However, the experience of the CHOICE awardees also highlights issues specifically related to the inclusion of underserved, understudied, and medically complex populations in large-scale, rapid CER studies.

One of the stated goals of the investment of ARRA funds in CER was to build an infrastructure for conducting ongoing research, with the particular goal of understanding long-term outcomes in these priority populations. Although all of the CHOICE investigators will have partially or completely met their initial project goals, only a third of the projects have secured grant funding to continue the research beyond the initial 3-year project period. The enormous investment in quickly establishing these large research infrastructures (and the effort expended to overcome the significant hurdles to building them) has not yet led to sustained conduct of CER, for several important reasons. Most importantly, the costs of conducting these large CER projects were substantial, with a large portion of the research funding applied toward overcoming the barriers associated with the lack of coordinated IRBs, abstracting data from disparate EMR systems, merging these data to provide meaningful information on larger populations, and hiring personnel to conduct research in community clinics and health systems without existing research infrastructure. Sustaining these research projects requires substantial resources at each of the recruiting sites as well as at the data-coordinating centers. Based on the collective experience of the CHOICE projects, we present a number of recommendations for conducting future CER (Table 6).

Table 6 Summary of recommendations for reducing logistical barriers to comparative effectiveness research
Abbreviations: IRB, institutional review board; EMR, electronic medical record; PRO, patient-reported outcome.

Health information-technology recommendations

Given the push to integrate research into clinical care through learning health systems, integrating PRO data collection into EMR systems and improving the availability of platforms for merging disparate EMR data sets are imperative. Additionally, the costs of sustaining long-term CER studies could be substantially reduced by automating the collection of PROs within routine clinical care. Although models for routinely collecting PRO data at points of contact do exist, most health care organizations are not yet able to incorporate PROs into clinical care in a sustainable way outside the context of research. This limits the sustainability of CER trials and observational studies without significant ongoing resources. In response to the need for mechanisms to share EMR data for pragmatic CER trials, the National Institutes of Health Collaboratory has developed a distributed research network (https://www.nihcollaboratory.org). This network provides a mechanism for sharing EMR data with collaborators while protecting health information and reducing the regulatory barriers to sharing data. Such a system can leverage existing EMRs to enable more efficient and effective CER, and should be further developed and implemented broadly.

IRB and regulatory recommendations

Each of the investigators cited increased support for centralized IRBs, consistent standards between IRBs, and common application forms as the most significant improvements that could be made to enable large, pragmatic comparative effectiveness trials like these. In addition, recognizing the time and personnel burdens associated with obtaining IRB approvals is essential when developing realistic project timelines and budgets for multicenter trials of this nature. Given that many pragmatic comparative effectiveness trials carry minimal additional risk because they compare commonly used treatments, diagnostic tests, or prevention strategies rather than testing new, experimental, or invasive procedures, developing processes for centralizing IRBs or allowing cooperative agreements for such multicenter trials would greatly reduce their cost without jeopardizing participant safety. One example of such a program is IRBshare (http://www.irbshare.org), which provides a centralized web portal with shared IRB review documents and review processes for multicenter studies.

Health Insurance Portability and Accountability Act (HIPAA) requirements are designed to protect people’s privacy with respect to the release of personal health care information. However, the interpretation and implementation of HIPAA regulations can pose significant challenges to conducting minimal-risk studies, particularly when multiple health systems are involved. In addition, stringent requirements to obtain written consent for minimal-risk studies can pose significant barriers to recruitment. Waivers of informed consent for minimal-risk research and improved methods for data sharing that maintain protection of health care information are two additional strategies that would significantly reduce barriers to conducting CER.

National improvements in the use of centralized and standardized IRB processes would significantly reduce the costs and time needed to conduct CER. Finally, developing mechanisms that allow clinicians to participate in clinical research without negatively affecting productivity, through creative incentive restructuring, could reduce barriers to conducting community-based clinical research.

Recommendations for improving participation of underserved populations

A number of options exist to increase the participation of uninsured, underinsured, low-income, and other disadvantaged patients in CER studies.14 Based on the CHOICE experience, we have identified a number of methods of patient reimbursement to enhance participation in these CER trials. One objective of this program evaluation is to make academic and funding institutions aware of the potential for payments to patients to influence the participation of key subgroups of interest to the health care system. Careful consideration must be given to study design, including the potential monetary and nonmonetary burdens, as well as benefits, to disadvantaged populations, prior to initiating CER trials. Policy solutions, such as modifying insurance-payment rules or waivers, could also be enacted to allow low-income patients to participate in CER trials.

Acknowledgments

The authors thank Raveena D Singh, MA (Division of Infectious Diseases and Health Policy Research Institute, University of California Irvine School of Medicine), Amy Waterbury, MPH (Center for Health Research - Kaiser Permanente Northwest), and Susanne Engel (UCLA Health System).

Disclosure

This study was funded by the Agency for Healthcare Research and Quality (AHRQ) in part through award number 1R01HS019222-01. No benefits in any form have been or will be received from a commercial party related directly or indirectly to the subject of this article. The authors are solely responsible for the contents of this paper; no statement herein should be construed as an official position of the AHRQ or the US Department of Health and Human Services.


References

1. Iglehart JK. Prioritizing comparative-effectiveness research – IOM recommendations. N Engl J Med. 2009;361(4):325–328.

2. Gray BH. With the inclusion of $1.1 billion for comparative effectiveness research in the 2009 fiscal stimulus bill in the United States, the experience of other countries with such research is of substantial interest in this country. Milbank Q. 2009;87(2):335–338.

3. Sox HC, Greenfield S. Comparative effectiveness research: a report from the Institute of Medicine. Ann Intern Med. 2009;151(3):203–205.

4. Luce BR, Kramer JM, Goodman SN, et al. Rethinking randomized clinical trials for comparative effectiveness research: the need for transformational change. Ann Intern Med. 2009;151(3):206–210.

5. Tunis S, Stryer DB, Clancy CM. Practical clinical trials: increasing the value of clinical research for decision making in clinical and health policy. JAMA. 2003;290(12):1624–1632.

6. Chalkidou K, Tunis S, Whicher D, Fowler R, Zwarenstein M. The role for pragmatic randomized controlled trials (pRCTs) in comparative effectiveness research. Clin Trials. 2012;9(4):436–446.

7. Greene SM, Reid RJ, Larson EB. Implementing the learning health system: from concept to action. Ann Intern Med. 2012;157(3):207–210.

8. Agency for Healthcare Research and Quality. AHRQ healthcare quality and disparities reports. Available from: http://www.ahrq.gov/research/findings/nhqrdr/index.html. Accessed January 31, 2014.

9. Lindenberg CS, Solorzano RM, Vilaro FM, Westbrook LO. Challenges and strategies for conducting intervention research with culturally diverse populations. J Transcult Nurs. 2001;12(2):132–139.

10. UyBico SJ, Pavel S, Gross CP. Recruiting vulnerable populations into research: a systematic review of recruitment interventions. J Gen Intern Med. 2007;22(6):852–863.

11. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap) – a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–381.

12. Holve E, Segal C, Hamilton Lopez M. Opportunities and challenges for comparative effectiveness research (CER) with electronic clinical data: a perspective from the EDM Forum. Med Care. 2012;50 Suppl:S11–S18.

13. Hirsch BR, Giffin RB, Esmail LC, Tunis SR, Abernethy AP, Murphy SB. Informatics in action: lessons learned in comparative effectiveness research. Cancer J. 2011;17(4):235–238.

14. Pyatak EA, Blanche EI, Garber SL, et al. Conducting intervention research among underserved populations: lessons learned and recommendations for researchers. Arch Phys Med Rehabil. 2013;94(6):1190–1198.

© 2014 The Author(s). This work is published and licensed by Dove Medical Press Limited under the Creative Commons Attribution – Non Commercial (unported, v3.0) License.