
Protocol-writing support conferences for investigator-initiated clinical trials

Authors Goto M, Muragaki Y, Aruga A

Received 7 October 2015

Accepted for publication 10 February 2016

Published 12 April 2016, Volume 2016:8, Pages 7–12

DOI https://doi.org/10.2147/OAJCT.S97792




Masaya Goto,1 Yoshihiro Muragaki,2 Atsushi Aruga1

1Cooperative Major in Advanced Biomedical Sciences, Joint Graduate School of Tokyo Women's Medical University and Waseda University, 2Intelligent Clinical Research and Innovation Center, Tokyo Women's Medical University, Tokyo, Japan

Abstract: In investigator-initiated clinical trials, protocols written with inappropriate methods can introduce bias. However, insufficient data are available on which items are important or difficult to discuss during protocol development. We recorded protocol-writing support conferences to determine which items methodologists and investigators discussed. With the consent of all applicants to the writing support conferences of our Intelligent Clinical Research and Innovation Center, we recorded all the discussions, transcribed and characterized them, and sorted the items iteratively. In 1 year, we held 18 conferences: nine early protocol conferences and nine rejected protocol conferences, the latter for protocols the institutional review board had rejected with a request for consultation. The most discussed item was outcomes, accounting for ~20% of the total discussion time; in three trials, the main problem was multiple primary outcomes. The second most discussed item was the control. Early protocol conferences spent more time on items outside the preliminary proposal than rejected ones (P<0.001). This study identified items (especially outcomes and control) that are important when investigators write protocols. Early protocol-writing conferences helped investigators find questionable items.

Keywords: investigator-initiated clinical trials, support, protocol-writing, conferences, recording

Introduction

Protocols are essential for ensuring high-quality medical research.1 However, many protocols have problems such as incompleteness, ambiguity, and contradictions.2 Protocols with inappropriate methods can introduce bias. In particular, investigator-initiated clinical trial protocols often have insufficient descriptions.3 Typically, inappropriate descriptions are introduced during the writing process. With early protocol-writing support, such bias could easily be avoided and protocols could be improved.4,5

Research methodology problems often relate to training and the scientific environment. There are several approaches to improving protocols. Some institutes, groups, or support centers in medical schools offer support in the form of guidelines,6 protocol formats, conferences, educational programs, or web systems.7 In medical schools, students take research design classes, though these appear inadequate. Some hospitals and programs, such as those following the Accreditation Council for Graduate Medical Education requirements,8 offer research training for internal medicine residents; such training with a structured curriculum improves residents' research outlook.9 In Japan, all medical schools have research centers developed to assist clinical trials.10 Our Intelligent Clinical Research and Innovation Center (iCLIC) provides clinical research support, including protocol-writing support conferences. In these conferences, specialists provide not only information on questionable items but also more effective approaches to avoid common pitfalls.

It is important to identify the writing problems faced by investigators and the support needed in such situations. To provide more effective support, we conducted an exploratory investigation by recording and transcribing the discussions of protocol problems between methodologists and applicants. Moreover, we examined the length of these discussions.

Materials and methods

We defined the following technical words according to Directive 2001/20/EC:1

  1. Investigator: An individual responsible for the conduct of a clinical trial at a clinical institution
  2. Protocol: A document that describes the objectives, design, methodology, statistical considerations, and organization of a clinical trial

We obtained the approval of the institutional review board (IRB) of Tokyo Women's Medical University, Tokyo, Japan. This study was conducted at the Tokyo Women's Medical University iCLIC. We recruited investigators who applied for protocol-writing support conferences from April 2013 to March 2014.

Investigators applied for a protocol-writing support conference by mail. There are two routes to applicant support (Figure 1). First, applicants apply for a conference on their own initiative; we call these early protocol-writing support conferences. Second, the IRB asks applicants to apply for a conference after rejecting a protocol for inappropriate writing; we call these rejected protocol-writing support conferences. In addition, the IRB may provide comments for improvement.

Figure 1 Protocol-writing flow at Tokyo Women’s Medical University Hospital, Tokyo, Japan.
Abbreviation: IRB, institutional review board.

The application contains the protocol and basic information (such as deadlines or difficulties faced while writing the protocol), which we call preliminary application items. Before each conference, we explained to the investigator that we would record the conference and requested a signed letter of consent. We placed an IC (digital voice) recorder on the table and discussed the problems in person.

We transcribed the voice recordings and tagged each discussion using the grounded theory approach,11 with reference to the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) 2013 statement12 and related research.13,14 For the recording procedure, we referred to other research on recorded interviews.15–17 In sorting, a researcher marked key phrases that indicated items. After sorting, we identified some items not listed in SPIRIT 2013 and developed new items for them (Table 1). We manually sorted, edited, and questioned the items, and coding disagreements were discussed until consensus was reached. Many design items were intertwined (eg, objectives often affect outcomes), so we grouped them as design issues and then sorted them by functional classification. We excluded greetings and short explanations unrelated to the main discussion, and we timed each discussion in 10-second increments. We asked investigators about IRB approval 3–6 months after the conferences, and we also checked trial registrations and IRB websites for approval.
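
As an illustration of the timing and sorting step, the following minimal sketch (in Python, not the tooling actually used in this study) shows how tagged discussion segments could be aggregated into per-item totals; the item tags and durations are hypothetical examples, with durations already rounded to 10-second increments as described above.

  from collections import defaultdict

  # Hypothetical coded segments: (item tag, duration in seconds, rounded to 10 s).
  # The tags stand in for the sorting items derived from SPIRIT 2013 (Table 1).
  segments = [
      ("outcomes", 260),
      ("control", 180),
      ("outcomes", 120),
      ("sample size", 90),
  ]

  def total_time_per_item(coded_segments):
      """Sum discussion time (in seconds) for each tagged item."""
      totals = defaultdict(int)
      for item, seconds in coded_segments:
          totals[item] += seconds
      return dict(totals)

  # Rank items by total discussion time, longest first.
  totals = total_time_per_item(segments)
  for item, seconds in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
      print(f"{item}: {seconds / 60:.1f} min")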

Table 1 Items for sorting the discussions
Abbreviation: IRB, institutional review board.

In 1 year, we received 18 applications for protocol-writing support, and all 18 applicants agreed to participate in this study. Their main characteristics are shown in Table 2. The protocol-writing support members were two dedicated staff members at iCLIC; the other methodologists were medical doctors or professors at Tokyo Women's Medical University Hospital. Not all support members attended every conference because conference times were set according to the applicants', not the support members', schedules. The number of applicants at each conference was typically one or two. At one conference, the applicant could not attend because he was abroad; after the conference, we sent him the minutes.

Table 2 Clinical trial characteristics
Abbreviation: RCT, randomized controlled trial.

Results

The total conference duration was 814 minutes, averaging ~45 minutes per conference. Items were discussed for 7.8 minutes on average (standard deviation [SD] = 0.7). Items discussed for over 10 minutes are shown in Figure 2; the remaining items, each discussed for under 10 minutes, totaled 212 minutes.
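
As a simple arithmetic check, the per-conference average follows directly from the totals reported above:

  total_minutes = 814   # total conference duration reported above
  conferences = 18      # number of conferences held in the year
  print(f"{total_minutes / conferences:.1f} minutes per conference")  # ~45.2, consistent with "~45 minutes"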

Figure 2 Item times over 10 minutes.

The most discussed item was outcomes, accounting for ~180 minutes, or ~20% of the total time (Figure 3). Three of the 12 trials in which outcomes were discussed had multiple primary outcomes. The second most discussed item was the control, specifically whether the control interventions were appropriate. There were no discussions about protocol versions, study settings, auditing, protocol amendments, confidentiality, data access, ancillary and post-trial care, or informed consent materials. Some items were discussed only briefly (for less than 10 minutes in total), including the title, roles and responsibilities, trial registration, blinding, biological specimens, patient timeline, and recruitment. Rejected protocol conferences included no discussion of other potential trials, whereas early protocol conferences often spent considerable time on them.

Figure 3 Item times related to design.

We compared the time spent on preliminary application items with the time spent on items newly raised during the conferences. Rejected protocol-writing support conferences had more consultation time on preliminary application items (Figure 4; Pearson's χ2 = 805; P<0.001).
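
In outline, such a comparison corresponds to a Pearson's chi-square test on a 2×2 table of discussion time (in 10-second units) split into preliminary application items versus other items for early and rejected conferences. The following sketch uses hypothetical counts, not the study data, purely to illustrate the calculation:

  # Hypothetical 2x2 table of 10-second time units (NOT the study data):
  # rows = conference type (early, rejected);
  # columns = (preliminary application items, other items).
  observed = [
      [600, 1400],   # early protocol conferences
      [1500, 500],   # rejected protocol conferences
  ]

  row_totals = [sum(row) for row in observed]
  col_totals = [sum(col) for col in zip(*observed)]
  grand_total = sum(row_totals)

  # Pearson's chi-square: sum over cells of (observed - expected)^2 / expected.
  chi2 = 0.0
  for i, row in enumerate(observed):
      for j, obs in enumerate(row):
          expected = row_totals[i] * col_totals[j] / grand_total
          chi2 += (obs - expected) ** 2 / expected

  print(f"Pearson's chi-square = {chi2:.1f} (df = 1 for a 2x2 table)")

The P-value is then read from the chi-square distribution with one degree of freedom.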

Figure 4 Proposal item timing compared to consultation timing.

For eight of the nine rejected protocols, we had documents disclosing why the IRB rejected them. All of these documents cited design items as a major problem, and five cited outcomes. After 3–6 months, half of the protocols had received approval (Table 3).

Table 3 Progression after conferences
Abbreviation: IRB, institutional review board.

Discussion

We timed and categorized the items discussed in protocol-writing support conferences. We assumed that longer discussions generally reflect more important items. The longest discussion time was devoted to outcomes (~20% of the total time), indicating that outcome items were much more important than the others. In addition, the eight available documents for the nine IRB-rejected protocols cited design items as a major problem, and five cited outcomes. These results support the notion that outcome items are very important. The core of the outcome discussions was determining the primary outcome. Three trials had multiple primary end points, consistent with findings that multiple primary outcomes occur in as many as 38% of trials.18 This suggests that we cannot check protocol items only for whether they are described; we must also assess their appropriateness, and more detailed checklists are necessary to avoid protocol problems. One previous study checked only for missing items;3 our study suggests that we should consider not only missing items but also their details. In addition, we must consider outcome reporting bias: 40%–62% of studies had at least one primary outcome that was changed, introduced, or omitted.19 With appropriate outcome discussions, such changes might be prevented.

The second longest discussed item was the control. The rate of controlled trials was 61%, higher than in another study (44%).3 Many factors require consideration when selecting a control, including the background, objectives, and design.20 More often than not, we discussed whether the control intervention represented standard or comparable care. Of course, the investigators were specialists in their fields and wrote their protocols with common sense. However, readers unfamiliar with the area may wonder whether the control is reasonable, because opinions on current standard care can be complicated and divided. Investigators also have difficulty choosing the best treatments when the available interventions involve trade-offs.21 Although an average of 7.8 items was discussed per trial, eight specific list items were never discussed. Three of them (protocol versions, protocol amendments, and data access) were easy to describe without discussion. These findings show that university administrators and accrediting bodies need to know which items they should attend to. Additional research on how to improve investigator training in this regard would also be helpful.

Only two of the nine early protocols had been approved. However, it is difficult to determine the difference these conferences make because of the lack of a comparison group and the small sample size.

Compared with early protocol conferences, rejected ones often required more consultation time on preliminary proposal items. Only the early conferences included discussion of other potential research.

There are other differences between early and rejected conferences. The early ones often lacked sufficient protocol checks before the support conference. The rejected ones had been checked by the IRB, which noted questionable items that typically corresponded to preliminary application items. In a sense, IRB comments may substitute for early protocol checks. There are no studies on protocol checking by clerks, conferences, or IRBs, and it is difficult to understand design problems from published papers alone.22 To evaluate the effectiveness of checking, we need data on how protocols change during the writing process.

Meetings are useful for complex requirements.23 When we discuss support methods, we should consider these differences between stages. For example, guidelines,3 protocol formats, conferences, educational programs, and web systems6 would work effectively, especially at early stages. Conferences and IRB review seem better suited to rejected protocols, because their problems were difficult enough that the IRB could not accept minor protocol changes. However, we do not know when it is best to offer protocol-writing support. As our human resources are limited, we should take effectiveness into account. In addition, IRB checking is very expensive,24 and IRB reexamination and delays to clinical trial schedules would be costly. We believe that the earlier protocol support is offered, the more effective the investigator's writing will be. During early protocol writing, investigators can easily change the entire schedule and sometimes even stop the study in advance. As a result, cost and time could be saved compared with an IRB rejection.

Limitations

Our study had several limitations: a small sample size at a single site, a non-randomized design, and the sorting process. First, we had only 18 conference trials at one university. Only one of them was a randomized controlled trial, a smaller proportion than in other studies.3,25 As a result, the time spent on some items that randomized controlled trials require (eg, allocation, blinding, and auditing) was quite short. In addition, a single site might bias the results, because IRBs show extreme variability in their initial responses to standard protocols.26 We need more trials covering all types of planned trials, regardless of timing (eg, all early protocols) and location (eg, university, general hospital, research institute). Second, participants were not randomized; hence, we could not simply compare early and rejected protocols. In particular, early protocol-writing conferences are voluntary for investigators. Randomized studies are needed to validate the effectiveness of early protocol-writing conferences. Third, there may be some bias in the sorting of the discussions, although we characterized and defined the items in advance for reproducibility. Despite these limitations, this study effectively highlights the current difficulties in writing protocols.

Conclusion

This study identified some important items (especially outcomes and control) in investigators' protocol writing. Early protocol-writing conferences help investigators find questionable items.

Acknowledgments

The authors would like to thank the iCLIC members for taking part in this study. In particular, Naomi Kobayakawa helped to organize the conferences. We received funds from the Cooperative Major in Advanced Biomedical Sciences, Joint Graduate School of Tokyo Women's Medical University and Waseda University, and faculty members from these institutions contributed valuable discussion.

Disclosure

The authors report no conflicts of interest in this work.


References

1.

European Parliament and Council of the European Union. Directive 2001/20/EC of the European Parliament and of the Council of 4 April 2001 on the approximation of the laws, regulations and administrative provisions of the member states relating to the implementation of good clinical practice in the conduct of clinical trials on medicinal products for human use. Official Journal of the European Communities. 2001; 34–44. Available from: http://ec.europa.eu/health/files/eudralex/vol-1/dir_2001_20/dir_2001_20_en.pdf. Accessed February 2, 2016.

2.

Musen MA, Rohn JA, Fagan LM, et al. Knowledge engineering for a clinical trial advice system: uncovering errors in protocol specification. Bull Cancer. 1986;74(3):291–296.

3.

Goto M, Yoshihiro A, Tetsuya U, et al. The quality evaluation of investigator-initiated clinical trial protocols in the University of Tokyo Hospital. Jpn Pharmacol Ther. 2014;42(s2):s135–s147.

4.

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–89.

5.

Yordanov Y, Dechartres A, Porcher R, et al. Avoidable waste of research related to inadequate methods in clinical trials. BMJ. 2015;350:h809.

6.

Chan AW, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 statement: defining standard protocol items for clinical trials. Ann Intern Med. 2013;158(3):200–207.

7.

Weng CH, Gennari JH, McDonald DW. A collaborative clinical trial protocol writing system. Stud Health Technol Inform. 2004; 107(Pt 2):1481–1486.

8.

Accreditation Council for Graduate Medical Education (ACGME). ACGME program requirements for graduate medical education in internal medicine. Available from: https://www.acgme.org/acgmeweb/Portals/0/PFAssets/2013-PR-FAQ-PIF/140_internal_medicine_07012013.pdf. Accessed February 2, 2016.

9.

Kanna B, Deng C, Erickson SN, et al. The research rotation: competency- based structured and novel approach to research training of internal medicine residents. BMC Med Educ. 2006;6(1):52.

10.

Goto M, Aruga A. Disclosure of information about support for investigator-initiated clinical trials in Japan: an analysis of official medical school websites in 2014. J Tokyo Women’s Med Coll. 2015;85(3):87–92.

11.

Heath H, Cowley S. Developing a grounded theory approach: a comparison of Glaser and Strauss. Int J Nurs Stud. 2004;41(2):141–150.

12.

Chan AW, Tetzlaff JM, Gøtzsche PC, et al. SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials. BMJ. 2013;346:e7586.

13.

Ioannidis JPA, Greenland S, Hlatky MA, et al. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–175.

14.

Tetzlaff JM, Chan AW, Kitchen J, et al. Guidelines for randomized clinical trial protocol content: a systematic review. Syst Rev. 2012;1:43.

15.

Al-Yateem N. The effect of interview recording on quality of data obtained: a methodological reflection. Nurse Res. 2012;19(4):31–35.

16.

DiCicco-Bloom B, Crabtree BF. The qualitative research interview. Med Educ. 2006;40(4):314–321.

17.

Lee WS, Hwang JY, Lim JE, et al. The effect of videotaping students’ interviews with patients for interview skill education. Korean J Fam Med. 2013;34(2):90–97.

18.

Chan AW, Hróbjartsson A, Haahr MT, et al. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291(20):2457–2465.

19.

Dwan K, Gamble C, Williamson PR, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias—an updated review. PLoS One. 2013;8(7):e66844.

20.

Van Luijn JCF, Van Loenen AC, Gribnau FWJ, et al. Choice of comparator in active control trials of new drugs. Ann Pharmacother. 2008;42:1605–1612.

21.

Dawson L, Zarin DA, Emanuel EJ, et al. Considering usual medical care in clinical trial design. PLoS Med. 2009;6(9):e1000111.

22.

Johansen HK, Gøtzsche PC. Problems in the design and reporting of trials of antifungal agents encountered during meta-analysis. JAMA. 1999;282:1752–1759.

23.

Berro M, Burnett BK, Fromell GJ, et al. Support for investigator-initiated clinical research involving investigational drugs or devices: the Clinical and Translational Science Award experience. Acad Med. 2011;86(2):217–223.

24.

Byrne MM, Speckman J, Getz K, et al. Variability in the costs of institutional review board oversight. Acad Med. 2006;81(8):708–712.

25.

Califf RM, Zarin DA, Kramer JM, et al. Characteristics of clinical trials registered in ClinicalTrials.gov, 2007–2010. JAMA. 2012; 307(17):1838–1847.

26.

Stair TO, Reed CR, Radeos MS, et al. Variation in institutional review board responses to a standard protocol for a multicenter clinical trial. Acad Emerg Med. 2001;8(6):636–641.
