Advances in Medical Education and Practice, Volume 9

Does the perception of severity of medical error differ between varying levels of clinical seniority?

Authors: Khan I, Arsanious M

Received 15 July 2017

Accepted for publication 5 December 2017

Published 15 June 2018 Volume 2018:9 Pages 443–452

DOI https://doi.org/10.2147/AMEP.S146474

Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 3

Editor who approved publication: Dr Md Anwarul Azim Majumder



Iqbal Khan,1 Meret Arsanious2

1Northampton General Hospital NHS Trust, Northampton, UK; 2Epsom and St Helier University Hospitals NHS Trust, London, UK

Background and purpose: The Francis Report called for a more “open culture” to empower health care staff to report medical errors. However, there are differing opinions amongst doctors as to what constitutes a medical error, and no previous study has investigated whether the perception of medical errors varies with clinical seniority.
Methods: A prospective study comprising medical students (s), junior doctors (jd), and consultants (c) from one Deanery was conducted, in which participants anonymously rated their perceptions of error in eight hypothetical scenarios on a numerical scale (1–10). Scenarios were reviewed for face validity and pilot tested before implementation. A statistician prospectively determined the number of participants required to ensure the study was sufficiently powered. Scenario ratings were analyzed using non-parametric statistical tests, and free-text answers were analyzed by immersion and crystallization.
Results: Two hundred thirteen participants were recruited, with near equal distribution in gender (51%:49%, F:M) and clinical seniority (36%:34%:30%, s:jd:c, respectively). Significant differences were shown in three of the eight scenarios between students and consultants, and in one of those three between junior doctors and students. Qualitative analysis found various factors that contributed to participants’ decisions regarding error severity. Students and junior doctors commented on potential consequences in greater detail, but consultants showed greater awareness of the latent factors contributing to error.
Conclusion: Heterogeneity in answers was seen within each of the cohorts. The most influential factors were scenario outcome and potential consequences. Latent factors, such as error circumstances and participants’ empathy, also contributed to responses. There were significant differences in the scores between medical students and consultants in some scenarios, which may be related to clinical experience. The heterogeneity of answers suggests there is scope for improvement in medical error education.

Keywords: medical error, harm, medical students, junior doctors, consultant

 

A Letter to the Editor has been received and published for this article.

Introduction

In the decade since the American Institute of Medicine’s report To Err is Human,1 there has been increased awareness of the high prevalence of medical error within secondary care. In the UK, one study showed that 10.8% of patients experienced an in-hospital adverse event, about half of which were deemed preventable,2 costing £1 billion per annum in lost bed days, not including the wider costs of lost working time and disability benefits, nor the subsequent human cost of pain and psychological trauma.3 A report by Robert Francis QC,4 investigating the poor standards at Mid-Staffordshire NHS Trust, highlighted how unsafe practices continue to exist within the NHS. He called for nationwide reform to “protect patients from unacceptable risks of harm” and stated the need for “a consistent organizational approach to embedding an open and learning reporting culture”.

Traditionally, error analysis has focused on individuals as the main instigators of accidents, when in fact they tend to be the inheritors of system defects.5 The focus now is on how organizational factors can be modified to shape and influence the behavior of individuals.1,5 In addition, there has been a drive to move away from the traditional “name, blame, and shame” of staff, which only encourages them to hide their mistakes (thus preventing recognition, analysis, and correction of underlying causes),5 and a greater drive toward encouraging “openness” in incident reporting, as called for by Robert Francis QC.4 This can generate the information needed to create “high-reliability organizations”5 – organizations which use safety intelligence generated from frontline staff to guide adaptive and constructive changes without waiting for accidents to occur.5,8

Levels of patient safety and medical error knowledge vary greatly amongst different levels of junior doctors,6 and various studies have shown snapshots of physicians’ attitudes toward error,9–12 but very few studies have investigated the factors that contribute to a physician’s perception of whether an incident is an error or not.9,13,14 Also, there have been no comparative studies on how perceptions of “error” vary with clinical seniority and what factors may contribute to these perceptions.

Our study aims to determine whether medical students, junior doctors, and consultants differ in their perception of the severity of a clinical error and what factors influence their decision. This may help tailor medical education at various levels to promote safe working practices, thus improving patient safety.

Methods

Participants were recruited by convenience sampling and were stratified according to their level of clinical experience (consultants, junior doctors in their foundation years, and medical students). Data were collected over a 10-week period until the sample size required to adequately power the study, calculated prospectively by a statistician before data collection, was reached. An introductory email linked to an anonymous online survey (via SurveyMonkey) was sent to final year and penultimate year medical students from two local universities and to junior doctors from two local foundation program schools. The consultants were recruited from one district general hospital using the hospital consultant directory.

A validated survey instrument was not found in the literature, so a survey was developed using examples from real cases published in the National Patient Safety Agency “rapid response” notifications15 and from the authors’ own experiences (Table 1). A scale of 1–10 was used to rate the severity of the medical error that occurred in each of the eight scenarios: 1 indicated no medical error and 10 indicated the most serious medical error possible. A follow-on question was added to each scenario asking participants to justify their score. The term “error” was purposefully not defined, as it was the authors’ intention to assess agreement about its use as a free-standing word.

Table 1 Scenarios used in the questionnaire to assess differences in perceptions of severity of medical error

Trust management approval for the study was obtained through the Northampton General Hospital NHS Trust Research and Development Sub-Committee. They also confirmed that Ethics Committee approval was not necessary for this study. Consent was implied if participants completed the questionnaire.

The scores from each of the scenarios were combined to produce a summary score for each of the three groups and were analyzed using non-parametric tests in SPSS (IBM Corporation, version 19). The scenarios were created to include a variety of specialties, a variety of individuals responsible (multiple individuals versus one individual), and varying degrees of patient harm (according to the National Patient Safety Agency rating scale for patient harm)16 and according to the findings of a previous study.13 Before issuing the questionnaires, a pilot study was conducted for face validity (to assess content, clarity, and relevance). Ten participants consisting of final year students, junior doctors, and consultants were recruited, and feedback from their responses was used to modify the questionnaire before the final version was issued.

Emails were sent to participants with a repeat email sent 3 weeks later to increase the response rates. The data were analyzed by immersion and crystallization17 to produce emerging insights from the qualitative data that supported each of the scores. One researcher read and re-read the data, organized phrases and segments of the written text, until themes emerged, and then re-analyzed the data for new themes. The second author also participated in analyzing any ambiguous statements and a series of discussions were undertaken in order to build a model and synthesize ideas. This was followed by a return to the medical literature and the data sources to look for both corroborating findings and alternative interpretations.17

Results

In total, 213 participants were recruited (Table 2). Of the 480 students who were emailed (319 Oxford students, 161 Leicester students), 76 participated (15.8% response rate). A total of 914 junior doctors from the Leicestershire, Northamptonshire, and Rutland and Trent foundation schools were invited to participate, and 73 participated (8.2% response rate); of the 191 consultants emailed at Northampton General Hospital, 63 participated (33.0% response rate).

Table 2 Gender and age distribution across groups

Gender distribution was near equal in all but the consultant cohort, where only 30.2% of the participants were female.

Analysis between cohorts

Scores for the scenarios were non-normally distributed; therefore, non-parametric Kruskal–Wallis tests were conducted to determine whether there were differences in the ratings of the scenarios between the cohorts. All tests were performed using α=0.05 (Table 3).

Table 3 Test statistics for Kruskal–Wallis tests (by professional group)

Notes: Bold text shows statistical significance (p<0.05). Scenarios are shown in Table 1.

Scenarios 1, 7, and 8 showed statistical significance. Post hoc Mann–Whitney tests showed that there was a significant difference in scores between medical students and consultants for each of these scenarios (with significant difference shown between junior doctors and medical students in scenario 1 only) (Table 4).

Table 4 Test statistics for Mann–Whitney tests

Notes: Bold text shows statistical significance (p<0.05). Scenarios are shown in Table 1.
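The authors ran this omnibus-then-post-hoc workflow in SPSS. As an illustration only, the same analysis can be sketched with SciPy’s non-parametric tests; the severity scores below are entirely hypothetical, not the study data.

```python
# Illustrative sketch only: hypothetical severity scores (1-10) for one
# scenario, analyzed with the non-parametric tests described above
# (the study itself used SPSS, not SciPy).
from scipy.stats import kruskal, mannwhitneyu

students    = [7, 8, 9, 6, 8, 9, 7, 8]   # hypothetical ratings
juniors     = [6, 7, 8, 5, 7, 6, 8, 7]   # hypothetical ratings
consultants = [5, 6, 5, 7, 6, 4, 6, 5]   # hypothetical ratings

# Omnibus Kruskal-Wallis test across the three cohorts (alpha = 0.05)
h_stat, p_omnibus = kruskal(students, juniors, consultants)

# Post hoc pairwise Mann-Whitney U tests, as used in the study
pairwise = {}
for name, group in [("students vs junior doctors", juniors),
                    ("students vs consultants", consultants)]:
    u_stat, p_value = mannwhitneyu(students, group, alternative="two-sided")
    pairwise[name] = p_value

print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_omnibus:.4f}")
for comparison, p in pairwise.items():
    print(f"{comparison}: p = {p:.4f}")
```

In practice, the post hoc pairwise tests are only interpreted where the omnibus test reaches significance, and a multiplicity correction (eg, Bonferroni) would typically be applied to the post hoc α.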

The median of the combined scores (from the three cohorts) was calculated and ranked. These were compared to the outcomes of the scenarios (Table 5).

Table 5 Scenarios ranked according to median severity score with explanation of outcome

Note: Scenarios are shown in Table 1.

If scenario 6 is omitted from the ranking (as bedrail use in patients with dementia proved a contentious issue amongst participants), the rankings suggest that outcome is used as a guide to judge the severity of error: errors with known harmful outcomes requiring intervention to correct them are ranked highest, scenarios where the error does not reach the patient (therefore avoiding harm) are judged to be of lower severity, and errors whose subsequent consequences are not fully known (such as scenario 8) are ranked lowest.

Qualitative analysis of scenarios

The free-text answers were analyzed qualitatively to validate the abovementioned findings and to investigate other factors that contribute to severity scores. Five main themes emerged as factors which influenced participants’ decision-making regarding the severity of the error.

Scenario outcome

Participants from all three cohorts (medical students [MS], junior doctors [JD], and consultants) showed that the outcome of a scenario contributed to their decision regarding error severity. It was often a direct determinant of the severity score, with higher scores given to scenarios perceived to have more harmful consequences for the patient.

There were, however, differences in perception of outcome, even within the same scenario. “Clinical knowledge, participants’ interpretation of what constitutes harm, and participants’ perceptions of subsequent consequences” were sub-factors which contributed to perception of whether harm had occurred.

An example of variation in “clinical knowledge” can be observed in scenario 1 where a nasogastric feed was started without adequate checks of the position of the nasogastric tube. Some consultants and junior doctors stated that harm had occurred with further risk of deterioration, whereas other participants said that harm had been averted: “...Even though there has been no immediate deterioration, this child is still at very high risk of a serious pneumonia” (JD rated 7), compared to: “The naso-gastric tube should have been checked... however the error was noticed quickly, the feed was stopped and the issue resolved... No real harm came to the child” (JD rated 4).

There were also differences in “perception of what constitutes harm”. In scenario 8, the delay in histology results was perceived by some to be an inconvenience, whilst others perceived this as a source of psychological harm and anxiety for the patient while waiting for the results. “Inconvenient for patient and waste of money, but no harm done” (JD rated 3), compared to: “This is unlikely to cause physical harm to a patient, but is unprofessional and likely to cause mental harm...” (JD rated 3)

“Perception of subsequent consequences to the error” also contributed to the variation in severity scores. In scenario 4 where three times the volume of blood was administered to the child, some participants perceived this to result in very little clinical significance, whilst others envisaged that this might cause pulmonary edema, precipitate a sickle cell crisis, or cause cardiac failure.

“[T]oo much blood is still a better outcome than giving none” (JD rated 4), whilst another stated: “polycythaemia could have serious consequences, stroke, retinal problems etc...” (JD rated 7). This contrast was also seen in the other two cohorts.

Other less commonly stated outcome-related factors were “timeliness of error detection” (prompt action was associated with lower scores) and “error reversibility” (participants who considered a situation difficult to reverse scored higher than those who stated it was easily reversible). However, some participants did not base their score on outcome.

Potential consequences

Medical students and junior doctors gave more detailed comments regarding potential consequences than consultants. Emotional/psychological consequences were stated mainly by medical students, for example, commenting on the adverse effect the error in scenario 5 would have on mother–baby bonding. Consultants, in contrast, rarely mentioned emotional/psychological consequences, but some highlighted legal consequences such as “risk of a law-suit” and “breach of the two-week rule” in scenario 3 (alluding to a policy in which patients with a suspected malignancy need to be seen by a hospital specialist within 2 weeks of referral by their general practitioner).18 These were not mentioned by the other two cohorts.

Identification of potential consequences, their “perceived severity”, and their “potential fatality” contributed to severity scores. In scenarios where the outcome was unknown, such as scenario 8, many participants identified potential consequences (such as a delay in treatment of a potential malignancy and a potential increase in treatment costs), which were associated with higher severity scores compared to those who stated that there would be no further consequences. In scenarios where the error had already harmed the patient, some participants considered alternative outcomes that could have occurred. For example, in scenario 3, where the elderly patient’s renal function returned to baseline following acute-on-chronic renal failure, many participants stated that the error “could” have resulted in worse consequences: “The patient was lucky not to be sent into worsening chronic renal failure on recovery” (MS rated 9), whilst others who implied that no further consequences would occur gave lower severity scores: “poor practice but no long term harm done” (consultant rated 5).

There were variances in “perceived severity of potential consequences”. For example, in scenario 7 where the patient could have potentially received the wrong medication, many participants stated there would have been severe adverse consequences had the error occurred: “Could have led to serious complications for patient. Checking the date of birth etc should be part of normal drug round” (consultant rated 10), whilst a minority stated that it would have been unlikely to be detrimental: “It is unlikely that either patient would have come to much harm, unless they happened to have a severe allergy to any of the drugs” (consultant rated 2). It is difficult to determine how consideration of potential consequences affected the scoring of severity as often this factor was mentioned amongst others, and there were a wide range of scores for the same potential consequence stated.

Often, there were conflicts in opinion between members of the same cohort regarding “potential fatality”. For example, in scenario 5 where a retained swab caused a perineal abscess: “Disaster. Potentially life threatening. Should know better” (MS rated 9), whilst another medical student stated: “again not potentially fatal but can cause long term problems” (MS rated 6). Potential fatality, however, was not always associated with high scores; in scenario 4, two consultants gave the same score but with contrasting views: “Potential serious harm. Unlikely to be fatal” (consultant rated 6) and “potentially fatal error” (consultant rated 6).

Latent factors

Identification of latent factors (underlying factors that contribute to an error being made) by participants also contributed to their perceptions of error severity. “Failure to follow established guidelines or protocols, lack of supervision of junior staff, expected level of knowledge, and system failures or checks” were underlying factors identified by participants and generally increased severity scores.

In scenarios 1, 3, 4, 5, and 7, “failure to follow established guidelines or protocols” was mostly identified by consultants. Participants often stated this factor as their reason for giving high severity scores. For example, in scenario 5, “avoidable error if had followed guidelines” (consultant rated 7) and “this should be avoided if correct protocols are followed by the surgeon and the theatre staff” (JD rated 9).

Consultants also commented on “lack of supervision” as a factor in some scenarios, and this was the basis for high scores amongst the consultant cohort (ranging between 7 and 9). In scenario 1, criticism of the senior nurse was made for not having supervised the student nurse during the procedure, whilst some medical students praised the senior nurse for her quick intervention. A lack of supervision was also commented on to a lesser extent in scenarios 2, 3, and 5.

“Expected level of knowledge” of an individual also affected participants’ responses. In scenario 2, some participants made excuses for the Foundation Year 1 (FY1) such as “new doctor” (MS rated 6) and “Juniors will make mistakes” (consultant rated 5). But many more participants, particularly medical students, expected an FY1 to have more knowledge, and this was associated with high severity scoring (rated either the median score or greater): “I do not know the correct dose, but as an FY1 that should be part of your knowledge and if not then you should check before prescribing as the potential outcome if not picked up may be disastrous” (MS rated 6) and “An FY1 should know these basic prescribing principles” (consultant rated 7).

In some scenarios, references were made to Reason’s Swiss cheese model1 (which portrays errors as “system failures”). Scenario 4 is one example, where the majority of such statements came from the junior doctors but were also made, to a lesser extent, by the other two cohorts: “Thinking about the Swiss cheese model, there was a clear failure for the error to be picked up at numerous levels, despite them being in place...” (JD rated 9). “System failure” statements were made in other scenarios too, particularly by consultants, but this factor was not a main determinant of participants’ severity scoring, as seen by the wide ranges in scores and multiple reasons within the same answer. Additionally, participants were also able to identify when “system checks” had successfully prevented an error from reaching the patient. For example, in scenario 2: “There are several “layers” of protection following a prescription error such as this (senior doctor review, pharmacist review, administering nurse review and hourly blood sugar monitoring)” (consultant rated 5). Some participants noted, however, that such checking systems are “Good... but not something to rely on” for error prevention.

Error circumstances

Participants’ perceptions of whether a situation was to be considered a “medical error” or a “never event” contributed strongly to severity scores. Responses stating that an error had occurred were associated with higher scores (majority ranging between 6 and 8) than responses stating that no error had occurred (majority ranging between 1 and 3). For example, in scenario 7, some individuals based their severity score on the fact that an error had occurred, regardless of the outcome: “A drug error was made even if it was picked up by the patient it’s still an error” (JD rated 7), compared to: “I wouldn’t say an error occurred” (JD rated 2). Likewise, participants who identified a scenario as a “never event” rated the severity higher than the median: “this is a ‘never event’, which is why I scored it 9 instead of a lower score” (MS rated 9). This occurred in scenarios 1, 2, and 5.

“Perception of ease in performing or avoiding the error, error frequency, and number of individuals involved” also affected participants’ judgment of error severity but were less frequently stated factors compared to those aforementioned.

Participants who perceived an error as “easily made” rated the scenario lower than those who perceived the error as “easily avoidable”. For example, in scenario 7: “On a busy ward round drug charts can easily get moved around” (JD rated 2), compared to: “...A breach of ward procedure and easily avoidable by following the protocol” (consultant rated 9). Likewise, “frequency of an error” was a contributing factor, such as in scenario 8 where one consultant only responded with: “these things happen” (consultant rated 1).

There were differences in opinion regarding the “number of individuals at fault” in scenarios, but this was not reflected in a trend in the severity scoring. For example, in scenario 4: “I would consider this negligent and lazy by the prescribing Dr” (MS rated 8), compared to: “Potential serious consequences. Different mistakes made by people at different levels” (JD rated 8). In some situations, a minority of the consultants identified other individuals not mentioned in the scenario as responsible for the error, such as the endoscopist in scenario 3 who suggested the procedure should take place (this was not seen amongst the other two cohorts).

Participant empathy

“Participant empathy toward the health care workers” was associated with lower scores compared to critical participants. In scenario 3, many junior doctors related to the FY1 in the scenario: “Despite this being a significant error, I feel for the FY1, as this is something many junior doctors will do” (JD rated 7). In contrast, medical students were mainly critical: “It is the prescribing doctor’s responsibility to make sure that the drug that he/she prescribes is safe for the patient” (MS rated 9).

“Empathy toward a patient’s vulnerability”, however, was associated with higher severity scores. In scenario 1, some medical students and junior doctors stated the reason for their score was because: “Firstly this is a child, and things are just worse when they happen to children....” (JD rated 9), and likewise in scenario 5 another junior doctor stated: “...it’s hard enough being a new mum and then having to stay in hospital again for a mistake that shouldn’t have happened is even worse” (JD rated 8).

In some situations, however, multiple reasons stated by one participant for any given scenario make it difficult to attribute their score to any one particular factor. One such example is mentioned for illustration of this point. In scenario 5, one participant stated: “A never event. The outcome of this scenario could have been very different, for example, fistula formation. In addition, the mother’s morbidity could have affected her bonding with her newborn baby leading to post-natal depression or attachment issues between mother and child later in life” (JD rated 9).

Discussion

In our study, outcome and potential consequences of errors were the two main influences on severity scores across all three cohorts. This supports existing evidence showing that physicians are more likely to deem a situation erroneous when harm has occurred than when the outcome was unknown.13 Other hospital staff view errors that are detected and corrected as “non-events” or a natural part of the workflow, rather than actual errors,19 and patients also adopt a “no harm, no foul” viewpoint.20,21 But in our study, there was a wide range of scores, even from those participants who based their score solely on outcome. One contributing factor may be the large variation in clinicians’ definitions of “medical error”,13 which was also seen in our study, where “no error” and “major error” were often used to describe the same scenario by different participants across the cohorts. Another contributing factor is differences in the interpretation of harm. Clinicians vary in their perception of what constitutes harm and what should be considered mere inconvenience,14 as shown in our study in scenario 8, where there was discrepancy regarding whether the delayed histology results were considered harmful or not.

Elsewhere, clinical experts have been shown to exhibit different patterns of reasoning from novices or intermediates and to organize their knowledge differently.22–26 Their judgment is increasingly based on their previous experiences,23,25,26 and they have a wider view of the situation, a better grasp of the nature of particular clinical situations, including opportunities and constraints, and a more long-term focus.27 This was evident in scenarios 2 and 7, where some consultants and junior doctors stated that, given the circumstances of the scenarios, even if the error had not been detected, the “likelihood” of severe harm to the patient would have been low, which was not mentioned by the medical student cohort. In addition, the consultants considered the legal implications of errors, which were not mentioned by the other two cohorts. However, there were also scenarios where wide variation was shown in the interpretation of the clinical information even amongst the consultants, such as in scenario 4, where three times the appropriate volume of blood was transfused into an infant. This was considered catastrophic by some consultants and hardly deleterious by others – a pattern also seen in the other two cohorts.

Rare occurrences have been shown to be more likely to be considered as an error compared to common ones, suggesting that physicians’ perceptions can become desensitized to medical errors if they occur on a regular basis.13 In scenarios 7 and 8, there was a lower median score for the consultant cohort in comparison to the medical students and junior doctors, suggesting a possible desensitization to such errors because of their frequency, as illustrated by the examples given in the ”Results” section. Another potential reason is that their clinical experience has given consultants a greater understanding of the “likelihood” of resultant harm should the error occur. In scenario 7, for example, some participants (the majority of whom were consultants) stated that even if the patient had received the wrong medication, the likelihood of major harm from a one-off dose would have been low. This may explain why consultants also rated scenario 1 higher than the other cohorts, perceiving the likelihood and severity of further harm to be great despite the nasogastric feed being quickly stopped.

Senior doctors have been shown to have a better understanding than junior doctors of how organizational factors influence patient safety,6 which was also reflected in the responses of many of the consultants in our study, who highlighted lack of supervision and lack of adherence to guidelines as factors underlying many of the scenarios. It is possible that their years of clinical experience have given them a greater appreciation of the factors that contribute to errors, making them better able to identify such latent factors.

Our study showed that “system errors”, such as in scenario 8, are less likely to be termed a “medical error” than errors where an individual is seen as responsible, such as in scenario 1, as has also been shown in other physician responses.13 Such complex “system errors” tend to be viewed by physicians as “practice variances” and “suboptimal outcomes” rather than “error”.14 The same authors found that physicians are more likely to consider a situation erroneous when one individual is involved than when an error occurs as a result of a team failing;9,13 however, our study showed no difference in scores between participants who held one person accountable and those who held multiple individuals accountable. Knowledge of patient safety theory was evident amongst the junior doctors, who often referred to the Swiss cheese model (particularly in scenario 4); this has been shown elsewhere and possibly reflects the relatively recent introduction of patient safety education into the undergraduate curriculum.6

However, patient safety education is only a recent introduction and has not been universally applied.28 It is unsurprising, therefore, that studies have shown that both undergraduate and postgraduate doctors have limited knowledge of the subject.6,28–30 However, clinicians’ perception of clinical error, or “error wisdom”, has been shown to be augmented through simple training in patient safety concepts,6 adding to the argument for more widespread patient safety education. Though we did not ask participants about any previous education they had received regarding patient safety, our study showed that some medical students had a better grasp of patient safety concepts than others, and this was also reflected in the other two cohorts. Universal mandatory patient safety programs would therefore enhance existing knowledge; starting in the undergraduate curriculum and progressing into postgraduate education would reinforce patient safety themes longitudinally30 and may help change the patient safety culture within health care professions in order to bring about real and effective change.5

Limitations

Study strengths included the novel approach used to gain insight into participants’ perceptions of erroneous scenarios and the factors contributing to their perception of error severity. However, our study was based on participants from a single region, limiting the extent to which our findings can be generalized to other populations. In addition, there was a high non-response rate (particularly amongst junior doctors), and non-responders may have demonstrated alternative factors influencing their severity scores, further limiting the generalizability of the results. Those who participated did so voluntarily, which can create selection bias; the administration of a survey itself may have influenced attitudes; and there may have been an element of satisficing bias. Additionally, as it was an online survey, participants were not given an opportunity to clarify any misunderstandings in the wording of questions, and there was no psychometric testing of the scenarios.

The wide range of scores within each scenario, and the fact that the same reasons (eg, “potentially fatal”) translated into widely differing scores, may suggest that the scale range was too wide or that the scale was difficult to apply consistently (eg, what exactly distinguishes an 8 out of 10 from a 5 out of 10). A Likert scale with a narrower range may be of benefit in subsequent studies. The free-text boxes, however, provided useful insight into the reasoning behind the severity scores. As with all qualitative research, the worldview of the researchers inevitably shapes the analysis.31 To counter this and to increase reliability, a second author was involved in discussing emerging themes and re-analyzing the data, which adds rigor to our findings.

Conclusion

Heterogeneity in answers was seen within each cohort, reflecting individual differences in which factors were considered to contribute to error severity and the weighting given to each. The most influential factors were the scenario outcome and its potential consequences; participants’ interpretations of how these would affect the patient differed, with more harmful and more potentially life-threatening scenarios receiving higher severity scores. Latent factors, such as the circumstances of the error and the participant’s empathy, also contributed, though to a lesser extent. There were significant differences between the scores of medical students and consultants in some scenarios, which may reflect the greater clinical experience consultants can draw on in judging the likelihood of harm. The heterogeneity of answers across all three cohorts suggests a need for mandatory patient safety programs implemented at all levels of clinical seniority in medical practice.

Acknowledgment

The authors would like to thank Ms Lyn Holmes (Cripps Medical Education Centre) and Ms Vicky Garrod (Clinical Skills Unit) at Northampton General Hospital for their support with this project.

Disclosure

The authors report no conflicts of interest in this work.

References

1. Institute of Medicine (US) Committee on Quality of Health Care in America; Kohn LT, Corrigan JM, Donaldson MS, eds. To Err is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.

2. Vincent C, Neale G, Woloshynowych M. Adverse events in British hospitals: preliminary retrospective record review. BMJ. 2001;322:517–519.

3. Vincent C. Risk, safety, and the dark side of quality. BMJ. 1997;314:1775–1776.

4. Francis R. Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry. Vol 1. Analysis of evidence and lessons learned (part 1). Available from: http://www.midstaffspublicinquiry.com/sites/default/files/report/Volume%201.pdf. Accessed June 27, 2013.

5. Leape LL, Woods DD, Hatlie MJ, Kizer KW, Schroeder SA, Lundberg GD. Promoting patient safety by preventing medical error. JAMA. 1998;280:1444–1447.

6. Durani P, Dias J, Singh HP, Taub N. Junior doctors and patient safety: evaluating knowledge, attitudes and perception of safety climate. BMJ Qual Saf. 2013;22:65–71.

7. Mayo AM, Duncan D. Nurse perceptions of medication errors: what we need to know for patient safety. J Nurs Care Qual. 2004;19:209–217.

8. Ferlie EB, Shortell SM. Improving the quality of health care in the United Kingdom and the United States: a framework for change. Milbank Q. 2001;79:281–315.

9. Muller D, Ornstein K. Perceptions of and attitudes towards medical errors among medical trainees. Med Educ. 2007;41:645–652.

10. Kerfoot PB, Conlin PR, Travison T, McMahon GT. Patient safety knowledge and its determinants in medical trainees. J Gen Intern Med. 2007;22:1150–1154.

11. Osborne J, Blais K, Hayes JS. Nurses’ perceptions: when is it a medication error? J Nurs Adm. 1999;29:33–38.

12. Garbutt J, Brownstein DR, Klein EJ, Waterman A, Krauss MJ, Marcuse EK. Reporting and disclosing medical errors: pediatricians’ attitudes and behaviors. Arch Pediatr Adolesc Med. 2007;161:179–185.

13. Elder NC, Pallerla H, Regan S. What do family physicians consider an error? A comparison of definitions and physician perception. BMC Fam Pract. 2006;7:73.

14. Elder NC, Meulen MV, Cassedy A. The identification of medical errors by family physicians during outpatient visits. Ann Fam Med. 2004;2:125–129.

15. National Patient Safety Agency. Alerts. Available from: http://www.nrls.npsa.nhs.uk/alerts/. Accessed April 30, 2013.

16. National Patient Safety Agency. Medical Error. What to Do if Things Go Wrong: A Guide for Junior Doctors. London: National Patient Safety Agency; 2010.

17. Crabtree B, Miller W, eds. Doing Qualitative Research. 2nd ed. London: Sage; 1999.

18. Jones R, Rubin G, Hungin P. Is the two week rule for cancer referrals working? BMJ. 2001;322:1555–1556.

19. Tamuz M, Thomas EJ, Franchois KE. Defining and classifying medical error: lessons for patient safety reporting systems. Qual Saf Health Care. 2004;13:13–20.

20. Espin S, Levinson W, Regehr G, Baker R, Lingard L. Error or “act of God”? A study of patients’ and operating room team members’ perceptions of error definition, reporting, and disclosure. Surgery. 2006;139:6–14.

21. National Patient Safety Foundation. Public Opinion of Patient Safety Issues. Chicago, IL: National Patient Safety Foundation; 1997.

22. Charlin B, Boshuizen HP, Custers EJ, Feltovich PJ. Scripts and clinical reasoning. Med Educ. 2007;41:1178–1184.

23. Elstein AS, Shulman LS, Sprafka SA. Medical Problem Solving: An Analysis of Clinical Reasoning. Cambridge, MA: Harvard University Press; 1979.

24. Norman G, Young M, Brooks L. Non-analytical models of clinical reasoning: the role of experience. Med Educ. 2007;41:1140–1145.

25. Patel VL, Groen CJ, Patel YC. Cognitive aspects of clinical performance during patient workup: the role of medical expertise. Adv Health Sci Educ Theory Pract. 1997;2:95–114.

26. Schmidt HG, Rikers RM. How expertise develops in medicine: knowledge encapsulation and illness script formation. Med Educ. 2007;41:1133–1139.

27. Nilsson MS, Pilhammar E. Professional approaches in clinical judgements among senior and junior doctors: implications for medical education. BMC Med Educ. 2009;9:25.

28. Nie Y, Li L, Duan Y, et al. Patient safety education for undergraduate medical students: a systematic review. BMC Med Educ. 2011;11:30.

29. Madigosky WS, Headrick LA, Nelson K, Cox KR, Anderson T. Changing and sustaining medical students’ knowledge, skills, and attitudes about patient safety and medical fallibility. Acad Med. 2006;81:94–101.

30. Patey R, Flin R, Cuthbertson BH, et al. Patient safety: helping medical students understand error in healthcare. Qual Saf Health Care. 2007;16:256–259.

31. Phillips S, Clarke C. More than an education: the hidden curriculum, professional attitudes and career choice. Med Educ. 2012;46:887–893.

© 2018 The Author(s). This work is published and licensed by Dove Medical Press Limited under the Creative Commons Attribution – Non Commercial (unported, v3.0) License.