Using peer observers to assess the quality of cancer multidisciplinary team meetings: a qualitative proof of concept study
Jenny Harris,1 James SA Green,2,3 Nick Sevdalis,4 Cath Taylor1
1Florence Nightingale School of Nursing and Midwifery, King's College London, London, UK; 2Department of Urology, Whipps Cross University Hospital, London, UK; 3Department of Health and Social Care, London South Bank University, London, UK; 4Department of Surgery and Cancer, Imperial College London, London, UK
Background: Multidisciplinary team (MDT) working is well established as the foundation for providing cancer services in the UK and elsewhere. A core activity is the weekly meeting (or case conference/tumor board) where treatment recommendations for individual patients are agreed. Evidence suggests that the quality of team working varies across cancer teams, and this may impact negatively on the decision-making process, and ultimately on patient care. Feedback from expert observers may improve team performance, but can be resource-intensive to implement. This proof of concept study sought to: develop a structured observational assessment tool for use by peers (managers or clinicians from the local workforce) and explore its usability; assess the feasibility of the principle of observational assessment by peers; and explore the views of MDT members and observers about the utility of feedback from observational assessment.
Methods: For tool development, the content was informed by national clinical consensus recommendations for best practice in cancer MDTs and developed in collaboration with an expert steering group. It consisted of ten subdomains of team working observable in MDT meetings that were rated on a 10-point scale (very poor to very good). For observational assessment, a total of 19 peer observers used the tool (assessing performance in 20 cancer teams from four hospitals). For evaluation, telephone interviews with 64 team members and all peer observers were analyzed thematically.
Results: The tool was easy to use and areas for refinement were identified. Peer observers were identified and most indicated that undertaking observation was feasible. MDT members generally reported that observational assessment and feedback was useful, with the potential to facilitate improvements in team working.
Conclusion: This study suggests that observation and feedback by peers may provide a feasible and acceptable approach to enhance MDT performance. Further tool refinement and validation is required.
Keywords: cancer, multidisciplinary team, team working, observational assessment
Introduction
Cancer multidisciplinary teams (MDTs) are well established in many countries as being central to the management and delivery of care.1 At the core of the formalized structure in the UK (and elsewhere) is the weekly MDT meeting (MDM), sometimes known as a multidisciplinary tumor board or case conference in other countries.2 This brings together cancer health professionals (which may include surgical, medical, and diagnostic personnel, and nursing specialists) to discuss options for the management of individual cancer patients. In best practice, the MDM immediately precedes the outpatient consultation with the patients whose cases were reviewed.
Although there is accumulating evidence of the benefit of MDT working,1,3,4 including an association with improved survival and reduced variation in survival,5–8 there is also evidence of variability in operational performance and the decision-making processes of MDMs.9–11 A key role of specialist cancer nurses in MDMs is to facilitate inclusion of the patient’s holistic needs and preferences for treatment in decision-making; however, the evidence suggests that decisions in cancer MDMs are often driven by physicians on the basis of biomedical rather than patient-centered information, and that nurses have little involvement in case discussions,9,12,13 despite being well placed to add holistic information that impacts greatly on a patient’s treatment choice. Poor quality discussions in MDMs, in particular failure to consider all relevant information, may lead to recommendations that are not implemented in practice14–16 and/or delays in treatment.17,18
MDMs are an expensive resource.19 In terms of staff time alone, it is estimated that they cost the UK National Health Service around £100 million a year for preparation and attendance.1 In an attempt to improve standards and reduce variations in performance, all cancer MDTs in the UK undergo a mandatory “peer review” program involving self-assessment and, in some cases, additional review by a panel of independent health professionals and service users, to assess their compliance with national tumor-specific guidelines.11 However, this formalized process does not involve assessment of team working within the MDM (eg, leadership/chairing, decision-making processes, patient-centeredness, administration, and organization of meetings), but concentrates on standards such as patient pathways, timelines, access, and other auditable elements.
Observational case studies of cancer MDMs have provided insights into the processes involved in MDT decision-making,12,20 but such methods tend to be time consuming and require methodological expertise.21 Structured observational assessment and feedback have proved a useful technique to help drive improvements in the way health care teams work together, for example, to improve patient safety during surgical procedures22 and in anesthesia.23 This approach allows for team work and communication to be measured in real life or simulated scenarios, rather than based on self-assessment by MDT members. Observation can help to elucidate areas where performance could be improved, that MDT members may not have been aware of themselves.24
Two observational tools have been developed specifically to assess cancer MDM performance, ie, the Metric for the Observation of Decision-making,24 which enables assessment of discussion content and MDT member contribution, and the Observational Assessment Rating Scale,25 which also includes assessments of the quality of team working, patient-centered clinical decision-making, infrastructure, and organization of the meeting. The incorporation of assessment and feedback by peers into routine clinical practice may be beneficial and a cost-effective intervention for encouraging health professional development,26 but the need for more formalized mechanisms (eg, standardized processes, measurable quality standards) to facilitate this has been recognized.27
The expense of MDMs, coupled with their influence on clinical outcomes, highlights the need to ensure that they are functioning optimally. A first step towards this is having appropriate methods of assessment that are feasible to implement, acceptable to MDT members, and have the potential to facilitate improvements in MDM functioning. To our knowledge, no research has investigated: the feasibility of integrating observational assessment into routine clinical practice in cancer MDTs; whether the current workforce has the skills to provide useful feedback without being extensively trained and the capacity to undertake such assessments; or if MDT members will find assessment and feedback from peers acceptable and useful. Therefore, this proof of concept study aimed to: develop a structured peer observational assessment tool and explore its usability; explore whether the principle of observational assessment by peers is feasible; and explore the views of MDT members and observers about the utility of feedback from observational assessment.
Materials and methods
Design and organizational context
A qualitative proof of concept study was designed to explore observational assessment of cancer MDT meetings by peers (Figure 1). Proof of concept (or principle) studies allow the feasibility of a concept to be investigated at an early, less costly stage in the research process, before a prototype is developed in later stages.
Figure 1 Study design.
A convenience sample of 20 volunteer MDTs within four National Health Service hospital trusts that provide secondary health services (hereafter referred to as hospitals) was recruited (Table 1). Two of the hospital sites were in the Midlands, one was in the south of England, and one was in London. The hospitals served local populations ranging from approximately 750,000 to 2.5 million people. Two were university/teaching hospitals. The 20 MDTs represented ten different tumor types (head and neck, urology, colorectal, lung, gynecological, skin, breast, hepatobiliary, sarcoma, and upper gastrointestinal) and included three specialist MDTs (ie, MDTs that accept referrals from other hospitals). Five of the MDTs used videoconferencing to allow members from multiple sites to participate in the MDM.
Development of the observational tool
A framework for assessment of team performance in cancer MDMs is provided by a document entitled “The Characteristics of an Effective MDT”.28 Underpinned by a survey completed by over 2,000 cancer MDT members in the UK, this document contains nearly 100 recommendations for effective cancer team working organized under 17 domains, many of which relate to the core MDT meeting function of cancer MDT working. We have previously used this document as a framework to underpin a self-assessment questionnaire, ie, the Team Evaluation and Assessment Measure,29 for MDT members to self-assess their performance, and it provided the foundation for the design of the observational tool used in this study.
The appropriateness of each of the 17 subdomains of team working28 for observational assessment was determined by obtaining views from a national MDT development steering group (consisting of clinical and academic cancer leaders, formed to guide the study). In total, ten subdomains were identified as containing elements that would be observable in cancer MDMs. These were: attendance at MDT meetings; leadership of the MDT and chairing in MDT meetings; team working and culture; personal development and training (eg, whether members use the meeting as an opportunity to share learning and best practice); physical environment of the meeting venue; technology and equipment available for use in MDT meetings; organization and administration during meetings; post-meeting coordination of services (eg, the clarity of “next steps” in the meeting discussion); patient-centered care; and clinical decision-making processes (Figure 1). These areas were translated into the tool content by the authors. In the resulting tool, observers rated the performance of the MDT on each domain using a 10-point scale (1, very poor; 10, very good; see Figure 2). Brief descriptions of a “very good” rating for each of the ten domains were provided, based on the recommendations in “The Characteristics of an Effective MDT”.28 Observers were also asked to provide comments to support their ratings, a free-text description of their overall impression of the meeting, and a record of the frequency of different types of distraction (eg, cell phone calls, other conversations) occurring during the meeting.
Figure 2 Abbreviated version of the prototype Observational Tool ©.
Cancer service managers were asked to identify clinicians (defined here as senior physicians, nurses, or allied health professionals) or managers within the hospital who were familiar with cancer MDT working and would be able to provide constructive feedback to the MDTs. These nominated peer observers were provided with the prototype observational tool, including integral instructions for its use, and a copy of “The Characteristics of an Effective MDT”.28
Meetings were either observed in vivo or videorecorded and viewed by observers at a later date (to allow observer anonymity). MDT members were asked to maintain patient anonymity by allocating ID numbers to patients. Assessments were completed using the observational tool and sent to MDT members approximately 5 weeks after the observed meeting had taken place.
Brief, semistructured telephone interviews, using a topic guide informed by the study aims, were held with the lead for each MDT (n=20), up to three other MDT members (purposively selected to represent a range of disciplines, n=44), and all peer observers (n=19). Interviews were undertaken by two experienced mixed methods researchers (JH and KB) with 5–10 years’ experience in conducting research interviews.
Interviews were analyzed using thematic content analysis.30 An initial coding scheme was developed (by CT and JH) by reading through the transcripts and independently identifying the main themes. Any differences were then discussed and a final coding scheme agreed, consisting of themes and subthemes (where applicable). Interviews were coded by two experienced researchers trained in use of the coding scheme (LH and RC). Thirty percent of their coding was randomly checked (by JH). The purpose of this form of content analysis is to produce a numerically based descriptive summary; therefore, the frequency of main themes is presented to assist the reader to interpret findings. Equal weight was given to deviant/disconfirming responses and they are also included in the results tables and summaries.
As part of the assessment of viability, peer observers recorded the time taken to complete their observational assessments (including preparation, observation, and completing their assessment). Analysis was undertaken using IBM® SPSS® Statistics for Windows version 19 software (IBM Corp, Armonk, NY, USA). The protocol for this study was reviewed by the UK National Research Ethics Service and approved as a service development project.
Results
Usability of the observational tool
All observers reported that the tool was easy to use: “It was easy to use, the instructions were clear” (manager, observer). Some provided ideas about further improvement of the tool, including reducing the rating scale and/or providing more detailed descriptive anchors for points on the rating scale to help ensure reliability across different time periods and observers, and to help the MDT interpret the observer’s assessment (53%, 10/19): “Giving a score was sometimes difficult, you need a benchmark when rating and a scale of 0–10 is too wide, I think a 4 or 5 point scale, with a written description of each anchor and guidance, would be good” (noncancer doctor, observer).
Viability of observational assessment by peers
Characteristics and experience of peer observers
The 19 peer observers included seven managers, eight clinicians (four of whom also held managerial roles), and four MDT administrators (described as “cancer pathway coordinators”). The clinicians included six cancer consultant physicians, one noncancer consultant physician, and one senior allied health professional. Peer observers described their prior experience of MDMs as having either participated in MDMs as a member (63%, 12/19), or having worked with and/or observed MDMs as part of their job (37%, 7/19). None of the peer observers had previous experience of using an observational assessment tool.
When asked for views regarding the importance of the professional group of peer observers, some responded that knowledge or experience of MDT working and the ability to provide objective feedback were more important than professional group per se (58%, 11/19), while others felt that a professional background was important, stating that a clinical viewpoint was necessary in order to judge all elements of MDT working appropriately, such as clinical decision-making (37%, 7/19, three of whom were clinicians). All MDT administrators who undertook observations reported that peer observers needed to be more senior than themselves because they felt the MDT would be more likely to make changes if the feedback came from a more senior observer: “Although I was happy and felt able to do it, I think the observer should really be someone at a higher level (than me) to ensure that the MDT listens to what is said and makes the changes needed” (cancer pathway coordinator, observer).
Observers’ views about feasibility of observational assessment
Two thirds (12/19) of the peer observers stated that it was feasible for them to undertake the assessments alongside their usual job; the remainder said the main challenge was finding time due to other workload pressures. However, despite these challenges, two of these observers stated they would be happy to observe again, and a third observer suggested there was a learning curve that would make it easier next time: “This was the first time I had done anything like this, so there was probably more preparation, and it would be easier if I did it again” (cancer consultant doctor, peer observer). Only two observers (11%) said they would like to observe a second meeting to provide further validation of their assessments, but neither did so, citing insufficient time as the main reason. The mean time taken to complete observational assessments (observing the MDM and writing feedback), by professional group, was: 117 minutes for clinicians (range 90–135 minutes, including 89 minutes observing the MDM); 151 minutes for managers (range 70–270 minutes, including 72 minutes observing the MDM); and 234 minutes for administrators (range 180–300 minutes, including 80 minutes observing the MDM).
Views about utility of feedback from observational assessment
Views of MDT members
In total, 64 interviews were conducted with MDT members (32 consultant physicians, 14 nurses, 15 administrative staff, and three allied health professionals). Of these, 25 had been observed in vivo, and 39 via a videorecorded meeting. Three quarters described the observational assessment and feedback they received as a useful process that provided valuable and representative feedback (73%, 47/64, Table 2), with the potential to influence practice. As described by one surgeon: “It gives an opportunity to see where improvements are needed and where we are doing well. It was useful …we’ve already made some changes.”
Table 2 Views of MDT members and peer observers about the utility of observational assessment
A minority commented on the limitations of observation, including whether the process of being observed alters “normal” behavior (ie, the observer effect) and whether it was possible to make valid assessments of the team on the basis of one observed meeting (20%, 13/64, Table 2). Seven MDT members expressed the view that observational assessment was not useful for these reasons. There were no apparent differences in MDT members’ perceptions of the usefulness of peer assessment according to the professional background of the observer.
Views of observers
All observers felt that providing their observational feedback was useful to the MDT(s) they observed (and to themselves), and felt it could inform improvements in MDT performance (100%, 19/19). Two thirds of the peer observers were confident that their feedback was representative of the usual performance in the MDM they observed (68%, 13/19); a minority were more tentative, instead describing limitations such as the possibility of an observer effect and their own concerns about potential problems caused by providing negative feedback to MDTs within one’s own workplace, although no such problems actually occurred during the study (Table 2).
Discussion
This study provides proof of concept that observational assessment and feedback by peers of the quality of cancer MDMs could be feasible and provide utility in relation to the appraisal of MDM performance. Further, although changes to team working went beyond the scope of this proof of concept study, most MDT members reported that peer assessment and feedback could facilitate improvements in MDM functioning.
Most cancer service managers were able to identify and recruit clinicians or managers to act as peer observers. However, at one hospital, the observers (all experienced at MDT working but in an administrative role) viewed themselves as insufficiently senior for their assessment to have an impact on the team. The importance of seniority in observational assessment of health care MDTs has been highlighted in a recent study that used an expert consensus Delphi approach to determine guidelines for team assessment (observation) in perioperative care. The expert guideline is that if observers are not experts in observational techniques (such as psychologists or specialists in human factors), they should be senior peers in order to have the necessary skill set.31
Observers generally felt confident in their ability to form a fair assessment from observing just one meeting, although some MDT members and observers raised concerns about this. The possibility that there could have been a Hawthorne effect (ie, the presence of an observer or camera altering usual behavior) was raised by some MDT members and observers. This is a common cause of concern in observational studies,32 although previous research with cancer MDTs has found that MDT members report that meetings are still “typical” despite the presence of an observer and/or video camera, with MDT members soon forgetting about their presence.25 Indeed, most MDT members we interviewed stated that observers’ comments were representative of their general performance. Further, if such assessments become part of routine practice, it is likely that any impact on usual behavior would be minimized as MDT members become accustomed to observational assessment.33
The observational assessment tool developed for this study was based on “The Characteristics of an Effective MDT”.28 Although developed within the context of the UK health service, it is likely that the ten domains identified as important for optimal functioning of MDMs (see Figure 2) would be equally relevant and important in cancer MDTs outside the UK. As highlighted by some observers, the ease of use of the tool could be improved if the size of the scale was reduced and if it incorporated behaviorally anchored descriptors for the extremes and midpoint of the rating scale; such an approach is common for performance measures34 and has been used by other observational assessment tools.24 These changes should make the scale more user-friendly for peer observers, while not adversely affecting scale reliability;35,36 further research is needed to develop these and then to evaluate the validity and reliability of the tool.
Although structured observational assessment tools for use in cancer MDMs have been developed, they cannot be used easily without expert training and have a significant learning curve.24,25 This study suggests that peers with no formal training in observational techniques (but provided with short written guidance) can complete structured observational assessments using our study tool. A Cochrane review of 49 randomized trials suggests that providing feedback can highlight areas for improvement and lead to improvements in health care professional practice.37 Similarly, the present study suggests that cancer MDT members felt that feedback would be useful to facilitate change. Previous work suggests that MDTs value feedback about their effectiveness based on self-assessment questionnaires,29 and future work is needed to determine the impact of feedback on objective changes to MDT working (and ultimately in relation to improvements in patient care).
This study has some limitations. It was a small proof of concept study and its aims were exploratory; further, it needs to be replicated with a broader range of organizations, MDTs, and peer observers. Only subjective views about the usefulness of observational assessment and feedback to facilitate change were assessed, and not whether feedback led to actual change, because time limitations meant we could only enquire about immediate change (not longer-term impact). The reliability and validity of scoring by assessors also need further evaluation. The hospitals and MDTs were a convenience sample, which may have biased the findings, for example, by including MDTs more accepting of assessment than other MDTs. Further, as with all qualitative studies, our findings reflect the opinions of individuals within a specific context; generalizability is therefore not the aim. However, similar themes were evident in the perceptions of MDT members and observers across all hospitals, regardless of tumor type, professional group, or geographic location, and we specifically sought and presented any deviant/disconfirming findings.
Conclusion
Observational assessment and feedback by peers could provide a useful approach to MDT development. This proof of concept study suggests that observational assessment by peers could be a feasible and acceptable approach that may enhance MDT performance. With further validation, this could provide a useful means by which to help improve MDM functioning and share good practice.
Acknowledgments
We would like to thank the team members and observers who participated in this research, and the trust personnel who facilitated their involvement; other affiliate members of Green Cross Medical Ltd who have supported this work; and the National Cancer Action Team MDT Development steering group and subcommittee members for their input and comments. This work was supported in part by the National Cancer Action Team. NS is affiliated with the Imperial Patient Safety Translational Research Centre (http://www.cpssq.org), which is funded by the National Institute for Health Research. We would also like to thank Dr Katrina Brown for research assistance, and Dr Louise Hull and Miss Ruth Collins for assisting with the content analysis.
Disclosure
JSAG has received funding from the National Cancer Action Team for the development of a team training/feedback system for cancer MDTs through Green Cross Medical Ltd. NS has been a paid advisor to Green Cross Medical Ltd. CT and JH were funded for this work through a subcontract between King’s College London and Green Cross Medical Ltd.
References
1. Taylor C, Munro AJ, Glynne-Jones R, et al. Multidisciplinary team working in cancer: what is the evidence? BMJ. 2010;340:c951.
2. Jalil R, Akhter W, Lamb BW, et al. Validation of team performance assessment for multidisciplinary tumor boards. J Urol. March 11, 2014. [Epub ahead of print.]
3. Taylor C, Shewbridge A, Harris J, Green JS. Benefits of multidisciplinary team work in the management of breast cancer. Breast Cancer. 2013;5:79–85.
4. Rohan E, Bausch J. Climbing Everest: oncology work as an expedition in caring. J Psychosoc Oncol. 2009;27:84–118.
5. Kersten C, Cvancarova M, Mjaland S. Does in house availability of multidisciplinary teams increase survival in upper gastrointestinal cancer? World J Gastrointest Oncol. 2013;5:60–67.
6. Kesson EM, Allardice GM, George WD, Burns HJ, Morrison DS. Effects of multidisciplinary team working on breast cancer survival: retrospective, comparative, interventional cohort study of 13 722 women. BMJ. 2012;344:e2718.
7. Gomella LG, Jianqing L, Hoffman-Censits J, et al. Enhancing prostate cancer care through the multidisciplinary clinic approach: a 15-year experience. J Oncol Pract. 2010;6:e5–e10.
8. Eaker S, Dickam P, Hellstrom V, Zack MM, Ahlqren J, Holmberg L. Regional differences in breast cancer survival despite common guidelines. Cancer Epidemiol Biomarkers Prev. 2005;14:2914–2918.
9. Haward R, Amir Z, Borrill C, et al. Breast cancer teams: the impact of constitution, new cancer workload, and methods of operation on their effectiveness. Br J Cancer. 2003;89:15–22.
10. Lamb BW, Sevdalis N, Arora S, Pinto A, Vincent C, Green JS. Teamwork and team decision-making at multi-disciplinary cancer conferences: barriers, facilitators, and opportunities for improvement. World J Surg. 2011;35:1970–1976.
11. National Cancer Action Team. National Peer Review Programme. Report 2011/2012: An overview of the findings from the 2011/2012 National Cancer Peer Review of cancer service in England. London, UK: National Cancer Action Team; 2012. Available from: http://www.cquins.nhs.uk/?menu=info. Accessed March 28, 2014.
12. Lanceley A, Savage J, Menon U, Jacobs I. Influences on multidisciplinary team decision-making. Int J Gynecol Cancer. 2008;18:215–222.
13. Rowland S, Callen J. A qualitative analysis of communication between members of a hospital-based multidisciplinary lung cancer team. Eur J Cancer Care. 2013;22:22–31.
14. Blazeby JM, Wilson L, Metcalfe C, Nicklin J, English R, Donovan JL. Analysis of clinical decision-making in multi-disciplinary cancer teams. Ann Oncol. 2006;17:457–460.
15. English R, Metcalfe C, Day J, Rayter Z, Blazeby JM; Breast Cancer Multi-Disciplinary Team. A prospective analysis of implementation of multi-disciplinary team decisions in breast cancer. Breast J. 2012;18:456–463.
16. Lamb B, Brown KF, Nagpal K, Vincent C, Green JS, Sevdalis N. Quality of care management decisions by multidisciplinary cancer teams: a systematic review. Ann Surg Oncol. 2011;18:2116–2125.
17. Goolam-Hoosen T, Metcalfe C, Cameron A, Rocos B, Falk S, Blazeby JM. Waiting times for cancer treatment: the impact of multidisciplinary team meetings. Behav Inf Technol. 2004;30:467–471.
18. Leo F, Venissac N, Poudenx M, Otto J, Mouroux J; Groupe d’Oncologie Thoracique Azuréen. Multidisciplinary management of lung cancer: how to test its efficacy? J Thorac Oncol. 2007;2:69–72.
19. De Ieso PB, Coward JI, Letsa I, et al. A study of the decision outcomes and financial costs of multidisciplinary team meetings (MDMs) in oncology. Br J Cancer. 2013;109:2295–2300.
20. Kidger J, Murdoch J, Donovan JL, Blazeby JM. Clinical decision-making in a multidisciplinary gynaecological cancer team: a qualitative study. BJOG. 2003;116:511–517.
21. Lamb B, Sevdalis N, Mostafid H, Vincent C, Green JS. Quality improvement in multidisciplinary cancer teams: an investigation of teamwork and clinical decision-making and cross-validation of assessments. Ann Surg Oncol. 2011;18:3535–3543.
22. Udre S, Koutantji M, Sevdalis N, et al. Multidisciplinary crisis simulations: the way forward for training surgical teams. World J Surg. 2007;31:1843–1853.
23. Fletcher G, Flin R, McGeorge P. Anaesthetists’ non-technical skills (ANTS): evaluation of a behavioural marker system. Br J Anaesth. 2003;90:580–588.
24. Lamb B, Wong HW, Vincent C, Green JS, Sevdalis N. Teamwork and team performance in multidisciplinary cancer teams: development and evaluation of an observational tool. BMJ Qual Saf. 2011;10:849–856.
25. Taylor C, Atkins L, Richardson M, Tarrant R, Ramirez AJ. Measuring the quality of MDT working: an observational approach. BMC Cancer. 2012;12:202.
26. Davys D, Jones V. Peer observation: a tool for continuing professional development. Int J Ther Rehabil. 2007;14:489–493.
27. Hogston R. Evaluating the quality of nursing care through peer review and reflection; the findings of a qualitative study. Int J Nurs Stud. 1995;32:162–161.
28. National Cancer Action Team. Characteristics of an Effective MDT. London, UK: National Cancer Action Team; 2010. Available from: http://webarchive.nationalarchives.gov.uk/20130513211237/http://www.ncat.nhs.uk/. Accessed March 28, 2014.
29. Taylor C, Brown K, Lamb B, Harris J, Sevdalis N, Green JS. Developing and testing TEAM (Team Evaluation and Assessment Measure), a self-assessment tool to improve cancer multidisciplinary teamwork. Ann Surg Oncol. 2012;19:4019–4027.
30. Neuendorf KA. The Content Analysis Guidebook. Thousand Oaks, CA, USA: Sage Publications Inc.; 2002.
31. Hull LE, Arora S, Symons NR, et al. Training faculty in non-technical skills assessment: national guidelines on program requirements. Ann Surg. 2013;258:370–375.
32. Fitzpatrick R, Boulton M. Qualitative methods for assessing health care. Qual Health Care. 1994;3:107–113.
33. Caldwell K, Atwal A. Non-participant observation: using video tapes to collect data in nursing research. Nurse Res. 2005;13:42–54.
34. Bradburn NM, Sudman S, Wansink B. Asking Questions. The Definitive Guide to Questionnaire Design for Market Research, Political Polls and Social and Health Questionnaires. San Francisco, CA, USA: Jossey-Bass; 2004.
35. Lissitz RW, Green SB. The effect of the number of scale points on reliability: a Monte Carlo approach. J Appl Psychol. 1975;60:10–13.
36. Norman G. Likert scales, levels of measurement and the ‘laws’ of statistics. Adv Health Sci Educ Theory Pract. 2010;15:625–632.
37. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2012;6:CD000259.