
Interrater reliability of a national acute myocardial infarction register

Authors Govatsmark R, Sneeggen S, Karlsaune H, Slørdahl S, Bønaa KH

Received 6 February 2016

Accepted for publication 9 May 2016

Published 17 August 2016. Volume 2016:8, Pages 305–312

DOI https://doi.org/10.2147/CLEP.S105933

Review by Single anonymous peer review

Editor who approved publication: Professor Henrik Sørensen



Ragna Elise Støre Govatsmark,1,2 Sylvi Sneeggen,2 Hanne Karlsaune,2 Stig Arild Slørdahl,2 Kaare Harald Bønaa,1–3

1Department of Public Health and General Practice, Norwegian University of Science and Technology, 2Department of Medical Quality Registries, 3Clinic for Heart Disease, St. Olav’s University Hospital, Trondheim, Norway

Background: Disease-specific registers may be used for measuring and improving healthcare and patient outcomes, and for disease surveillance and research, provided they contain valid and reliable data. The aim of this study was to assess the interrater reliability of all variables in a national myocardial infarction register.
Methods: We randomly selected 280 patients who had been enrolled in the Norwegian Myocardial Infarction Register from 14 hospitals during the year 2013. Experienced audit nurses, who were blinded to the data about the 280 patients already in the register, completed the Norwegian Myocardial Infarction Register paper forms for 240 patients by review of medical records. We then extracted all registered data on the same patients from the Norwegian Myocardial Infarction Register. To compare interrater reliability between the register and the audit nurses, we calculated intraclass correlation coefficients for continuous variables, Cohen’s kappa and Gwet’s first agreement coefficient (AC1) for nominal variables, and quadratically weighted Cohen’s kappa and Gwet’s second agreement coefficient (AC2) for ordinal variables.
Results: We found excellent (AC1 >0.80) or good (AC1 0.61–0.80) agreement for most variables, including date and time variables, medical history, investigations and treatments during hospitalization, medication at discharge, and ST-segment elevation or non-ST-segment elevation acute myocardial infarction. However, only moderate agreement (AC1 0.41–0.60) was found for family history of coronary heart disease, diagnostic electrocardiography, and complications during hospitalization, whereas fair agreement (AC1 0.21–0.40) was found for acute myocardial infarction location. A high percentage of missing data was found for symptom onset, family history, body mass index, infarction location, and new Q-wave.
Conclusion: Most variables in Norwegian Myocardial Infarction Register had excellent or good reliability. However, some important variables had lower reliability than expected or had missing data. Precise definitions of data elements and proper training of data abstractors are necessary to ensure that clinical registries contain valid and reliable data.

Keywords: medical registers, data quality, medical quality register

Introduction

There has been an increasing emphasis during the past decades on measuring and improving the quality and efficiency of medical care.1–4 Hence, there has been a proliferation of clinical registries designed to understand care and outcomes in clinical medicine as it is practiced.5–8 Both administrative and disease-specific registries are used as a data source for healthcare quality evaluation, disease surveillance, and clinical and epidemiologic research.1,2 However, there are challenges in obtaining valid and reliable data, and studies have raised questions about the reliability of data extracted from medical records.9–12 The quality of clinical registries may be hampered by many factors, including inadequate abstractor training, inadequate standardization of data elements, nonstandard terminology, as well as feasibility constraints due to the administrative burden of obtaining demographic and clinical data.13–15

A number of registries are collecting data from patients hospitalized with acute coronary events.11,16–24 Studies that investigated the validity of acute myocardial infarction (AMI) registries typically focused on calculating measures of completeness.19–25 Few studies have investigated the reliability of selected key variables in the registries.8,25,26 Radovanovic and Erne reported good interrater reliability (kappa scores >0.8) for baseline characteristics and therapeutic interventions, whereas the Swedish register of acute ischemic heart disease (RIKS-HIA) reported that error rates ranged from 23% for electrocardiographic (ECG) findings to <5% for discharge medication and discharge destination after hospitalization.8,26 Publication of a detailed audit of register data that specifies the major fields in use is necessary to substantiate clinical register quality and identify areas for improvement in data quality.

Since January 1, 2012, all Norwegian hospitals have been required by law to report medical data on all patients hospitalized with an AMI to the Norwegian Myocardial Infarction Register.27,28 In the present study, we assessed the reliability of all the variables in the Norwegian Myocardial Infarction Register by studying interrater reliability in a random sample of 280 patients.

Methods

The Norwegian Myocardial Infarction Register

The register is a web-based medical register system and provides person-identifiable information on a total of 107 variables covering the dates and exact times for the onset of symptoms, hospitalization and discharge, risk factors for coronary heart disease, medical history, clinical findings and symptoms, ECG findings, blood levels of troponins, echocardiographic findings, and the use of drugs and other treatments.28 Patients transferred between hospitals must be registered by all hospitals that treated the patient during the event. Information about previous diseases and treatments prior to hospitalization is compulsory to register only at the first hospital.

The register has a standardized case record form. Hospitals have different registration procedures, however. In some hospitals, doctors use paper forms, and a dedicated nurse or a secretary subsequently enters the data into the register via a web-based form. In other hospitals, nurses start registering directly into the web-based form during hospitalization, and a dedicated nurse completes the registration after the patient is discharged by abstracting data from electronic medical records. AMI cases may be identified by ward personnel who regularly review hospitalized patients or by review of the discharge diagnoses in the hospital patient administration systems. A user manual provides definitions of the variables and data entries.28 This study is pursuant to the regulation of the Norwegian Register of Cardiovascular Diseases from 2012, §2-2, Responsibilities for correct information: the National Institute of Public Health shall ensure that the data processed in the register are correct, relevant, and necessary. Therefore, patient consent was not required. The study was approved by the National Institute of Public Health and the Norwegian Directorate for Health.

Data collection

In 2013, a total of 49 of 54 Norwegian hospitals that treated patients with an AMI reported their data to the Norwegian Myocardial Infarction Register. We used a stratified design to obtain a representative sample of 15 hospitals for the present study. Hospitals were stratified according to the following three categories: 1) large, mainly university hospitals providing services in interventional cardiology (n=7 hospitals); 2) middle-sized hospitals treating >150 AMI patients per year (n=22 hospitals); and 3) smaller hospitals treating 50–149 AMI patients per year (n=17 hospitals). Three hospitals that treated <50 AMI patients per year were not included in the study. For the present study, we randomly selected three university hospitals (Stavanger University Hospital, Haukeland University Hospital, Bergen, and St. Olavs University Hospital, Trondheim), nine middle-sized hospitals (located in the cities of Hamar, Molde, Bærum, Fredrikstad, Levanger, Lovisenberg, Ålesund, Haraldsplass, and Skien), and three smaller hospitals (located in Førde, Lofoten, and Kirkenes). All four Norwegian Regional Health Authorities were represented in the study roughly according to population size. Only one of the selected hospitals (Fredrikstad) was unable to participate. From each of the remaining 14 participating hospitals, we then randomly selected 20 patients registered in the Norwegian Myocardial Infarction Register during the year 2013. The number of patients selected from each hospital was determined based on how many cases could be reviewed during a 2-day site visit. Our material contains information on 280 patients.
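To make the sampling procedure above concrete, the following Python sketch mimics a stratified random selection of hospitals followed by a fixed-size random draw of registered patients per hospital. It is purely illustrative: the hospital identifiers, the seed, and the patient-ID helper are hypothetical placeholders, not the actual register data or the exact procedure used by the authors.

```python
import random

# Hypothetical hospital identifiers per stratum (placeholders, not the actual hospitals).
strata = {
    "university": [f"U{i}" for i in range(1, 8)],   # interventional cardiology, n=7
    "middle": [f"M{i}" for i in range(1, 23)],      # >150 AMI patients/year, n=22
    "small": [f"S{i}" for i in range(1, 18)],       # 50-149 AMI patients/year, n=17
}
hospitals_per_stratum = {"university": 3, "middle": 9, "small": 3}
patients_per_hospital = 20

random.seed(2013)  # fixed seed only to make the illustration reproducible

# Step 1: randomly draw hospitals within each stratum.
selected_hospitals = {
    stratum: random.sample(hospitals, hospitals_per_stratum[stratum])
    for stratum, hospitals in strata.items()
}

# Step 2: randomly draw a fixed number of registered patients from each selected hospital.
def sample_patients(registered_patient_ids, n=patients_per_hospital):
    """Return n randomly chosen patient IDs registered at one hospital in 2013."""
    return random.sample(registered_patient_ids, n)

print(selected_hospitals)
```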

To investigate the interrater reliability of the Norwegian Myocardial Infarction Register, replicate registrations were performed by two experienced nurses working at the national coordinating center for the register. The data collection was done during November 2014–March 2015. Only one nurse visited each of the 14 participating hospitals. The nurses were given access to all relevant information, including results of diagnostic tests, examinations, and laboratory tests, as well as all medical notes stored in the patients’ electronic medical records. The nurses filled in the paper form of the Norwegian Myocardial Infarction Register blinded to the data already in the register. The data were later entered into an electronic database, and the registrations done by the nurses were compared with the original registrations performed by the hospitals.

Statistical analysis

The sample size was determined on the basis of recommended sample size calculations for the kappa statistic. With the goodness-of-fit approach, and with alpha and beta error rates of 0.05 and 0.2, respectively, sample size estimates for testing a statistical difference between moderate (0.40) and excellent (0.90) kappa values range from 13 to 66.29 Our study of 280 patients is thus well powered to provide robust estimates of interrater reliability.

Interrater reliability of the Norwegian Myocardial Infarction Register was estimated by comparing the original data entered into the register by the hospitals with the data entered into the register by the audit nurses. Reliability for continuous variables was estimated by calculating intraclass correlation coefficients using a two-way random-effects analysis of variance model with absolute agreement.29 To show the magnitude of disagreement, we calculated the mean and the standard deviation of the differences between the hospital and the audit abstractors.
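A minimal sketch of this calculation, assuming two raters (hospital and audit nurse) and complete paired data, is given below. It implements the standard two-way random-effects, absolute-agreement, single-rater intraclass correlation coefficient (often written ICC(2,1)) together with the mean and standard deviation of the rater differences. The troponin values in the example are hypothetical, and this is an illustration rather than the analysis code actually used (the authors used SPSS and AgreeStat).

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random-effects ANOVA, absolute agreement, single rater.

    ratings: array-like of shape (n_subjects, n_raters) with no missing values.
    """
    y = np.asarray(ratings, dtype=float)
    n, k = y.shape
    grand = y.mean()

    ss_rows = k * np.sum((y.mean(axis=1) - grand) ** 2)   # between-subject sum of squares
    ss_cols = n * np.sum((y.mean(axis=0) - grand) ** 2)   # between-rater sum of squares
    ss_error = np.sum((y - grand) ** 2) - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical example: maximum troponin values abstracted by the hospital and the audit nurse.
hospital = np.array([350.0, 1200.0, 80.0, 4500.0, 660.0])
audit = np.array([360.0, 1150.0, 80.0, 4400.0, 700.0])

print(round(icc_2_1(np.column_stack([hospital, audit])), 3))
diff = hospital - audit
print(diff.mean(), diff.std(ddof=1))   # magnitude of disagreement: mean and SD of differences
```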

For nominal variables, we used both Cohen’s kappa and Gwet’s AC1 (the first-order agreement coefficient) with 95% confidence intervals.30–35 For ordinal variables, we used the quadratically weighted kappa and Gwet’s AC2 (the second-order agreement coefficient). The response category “unknown” is optional for nominal variables and was therefore included in the totals for nominal variables. Missing data were excluded for all types of variables. Time variables were converted to numeric variables (minutes after midnight) when the corresponding date variable was the same for both the nurse and the register. Discharge dates were recalculated as the number of days after December 31, 2012.
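Both kappa and Gwet’s AC1 have the same form, (observed agreement - chance agreement)/(1 - chance agreement); they differ only in how chance agreement is estimated. The sketch below, a simplified illustration for two raters and complete data rather than the AgreeStat implementation actually used, computes observed agreement, Cohen’s kappa, and Gwet’s AC1 from two lists of nominal ratings; the yes/no example data are hypothetical.

```python
from collections import Counter

def agreement_coefficients(rater_a, rater_b):
    """Observed agreement, Cohen's kappa, and Gwet's AC1 for two raters (nominal scale)."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired, non-empty ratings"
    n = len(rater_a)
    categories = sorted(set(rater_a) | set(rater_b))
    q = len(categories)

    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Marginal proportions per rater, and their average per category.
    marg_a = {c: Counter(rater_a)[c] / n for c in categories}
    marg_b = {c: Counter(rater_b)[c] / n for c in categories}
    pi = {c: (marg_a[c] + marg_b[c]) / 2 for c in categories}

    # Chance agreement: product of marginals (kappa) versus Gwet's formulation (AC1).
    pe_kappa = sum(marg_a[c] * marg_b[c] for c in categories)
    pe_ac1 = sum(p * (1 - p) for p in pi.values()) / (q - 1)

    kappa = (p_obs - pe_kappa) / (1 - pe_kappa)
    ac1 = (p_obs - pe_ac1) / (1 - pe_ac1)
    return p_obs, kappa, ac1

# Hypothetical example: hospital versus audit-nurse coding of a yes/no variable for 10 patients.
hospital = ["yes"] * 9 + ["no"]
audit = ["yes"] * 8 + ["no", "no"]
print(agreement_coefficients(hospital, audit))
```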

Kappa, Gwet’s AC1/AC2, and intraclass correlation coefficient values ≤0.20 are interpreted as poor agreement, 0.21–0.40 as fair agreement, 0.41–0.60 as moderate agreement, 0.61–0.80 as good agreement, and values above 0.80 as excellent agreement.36 If the ratings are unbalanced, that is, with nearly all ratings positive or nearly all negative, kappa will be highly sensitive to small departures from perfect concordance. Kappa is also sensitive to rater bias, that is, a systematic difference between raters in their tendency to make a particular rating.30,32 Gwet’s AC1 and AC2, however, are not affected by trait prevalence or rater bias.33,34 Variables with a discrepancy between the kappa and Gwet’s AC1/AC2 statistics were interpreted as reliable if kappa was low while observed agreement and Gwet’s AC1/AC2 were high. To aid the interpretation of kappa and Gwet’s AC1/AC2, cross tables for all presented variables are included in Tables S1–S4, which also include cross tables for variables not presented in the article.
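As a hypothetical numerical illustration of this sensitivity: suppose two raters classify 100 patients on a yes/no variable and agree on 95 “yes” cases and 1 “no” case, with the remaining 4 discordant pairs split evenly, giving an observed agreement of 0.96. Each rater then codes “yes” for 97% of patients, so the chance agreement used by kappa is 0.97 × 0.97 + 0.03 × 0.03 ≈ 0.942 and kappa ≈ (0.96 - 0.942)/(1 - 0.942) ≈ 0.31 (fair), whereas Gwet’s chance term is 0.97 × 0.03 + 0.03 × 0.97 ≈ 0.058, so AC1 ≈ (0.96 - 0.058)/(1 - 0.058) ≈ 0.96 (excellent). This is the pattern reported below for several highly prevalent treatment and complication variables.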

Data were analyzed using IBM SPSS 22.0 (IBM Corporation, Armonk, NY, USA) and AgreeStat 2015.4 (Advanced Analytics, LLC, Gaithersburg, MD, USA).

Results

The sample of 280 patients consisted of 63.2% males. The mean age was 72.9 years (standard deviation 13.6). In comparison, the total population in the Norwegian Myocardial Infarction Register in 2013 (n=12,336 patients) consisted of 64.3% males, and the mean age was 71.0 years.

Table 1 presents interrater reliability for medical history, medication prior to hospitalization, and data on admission to the first hospital that treated the patient during the event. Most variables showed good or excellent agreement. Classification of the diagnostic ECG (eight categories, Table S1) showed moderate reliability, with observed agreement in 62.6% of the cases. Excellent reliability was found for the STEMI/NSTEMI variable, in which the AMI is classified as ST-segment elevation myocardial infarction (STEMI) or non-ST-segment elevation myocardial infarction (NSTEMI). Information on family history was coded as unknown in almost 50% of the cases, and reliability was low, with observed agreement in only 59.1% of the cases.

Table 1 Interrater reliability for medical history, medication prior to hospitalization, symptoms, and admission data of the Norwegian Myocardial Infarction Register


Notes: aObserved agreement calculated as concordant answers divided by n; bAdenosine diphosphate receptor antagonist; cAngiotensin-converting-enzyme inhibitor; dAngiotensin II receptor antagonist.


Abbreviations: AC1, first agreement coefficient; CI, confidence interval; ECG, electrocardiogram; NSTEMI, non-ST-elevation myocardial infarction; STEMI, ST-elevation myocardial infarction.

Table 2 presents interrater reliability regarding drug treatment, diagnostic and treatment procedures, and complications during hospitalization. Excellent reliability estimates were found for drug treatments except for the use of angiotensin-converting-enzyme inhibitor/angiotensin II receptor antagonist and diuretics. Agreement was excellent regarding the use of troponin I or troponin T as a biomarker for myocardial necrosis. However, agreement on whether there had been an increase and/or fall in troponin levels during hospitalization was observed in only 76.0% of the cases. Excellent reliability was found regarding whether coronary angiography, percutaneous revascularization, or echocardiography had been performed during hospitalization, and for echocardiographic estimates of left ventricular ejection fraction (three categories, Table S2). Agreement on whether heart failure or any complication occurred during hospitalization was observed in 80.4% and 72.1% of the cases, respectively.

Table 2 Interrater reliability of drug treatment, diagnostic and treatment procedures, and complications during hospitalization of the Norwegian Myocardial Infarction Register


Notes: aObserved agreement calculated as concordant answers divided by n; bAdenosine diphosphate receptor antagonist; cAngiotensin-converting-enzyme inhibitor; dAngiotensin II receptor antagonist; eWeighted kappa and second agreement coefficient (AC2). The category “unknown” is excluded. fVariables concerning complications are combined into one variable.


Abbreviations: AC1, first agreement coefficient; CI, confidence interval.

For variables such as drug treatment during hospitalization and complications, we found low kappa despite high observed agreement and high AC1/AC2. Kappa is more sensitive to rater bias and to highly skewed prevalence than AC1/AC2. Variables with low kappa but high observed agreement and high AC1/AC2 were considered reliable. The discrepancy between the estimates for these variables was attributed to skewed prevalence.

Table 3 shows that agreement on AMI location was fair and observed in only 51.3% of the cases. Agreement on the occurrence of a new Q-wave in ECG was observed in 74.2% of the cases. For both variables, the response category “unknown” was used frequently by the hospitals and/or by the audit nurses. Excellent agreement was found for medication at discharge, death during hospitalization, and discharge destination (five categories, Table S3).

Table 3 Interrater reliability of myocardial infarction location, type of infarction, medication at discharge, and discharge status of the Norwegian Myocardial Infarction Register


Notes: aObserved agreement calculated as concordant answers divided by n; bPatients who died during hospitalization were excluded; cAdenosine diphosphate receptor antagonist; dAngiotensin-converting-enzyme inhibitor; eAngiotensin II receptor antagonist.


Abbreviations: AC1, first agreement coefficient; CI, confidence interval; ECG, electrocardiogram.

Times for symptom onset, arrival, and discharge showed excellent agreement (Table 4); onset time, however, had many missing values. Similarly, information on body mass index was missing in >50% of the cases. Good agreement was found for the minimum troponin level, whereas excellent agreement was found for the maximum troponin level, creatinine, glucose, and blood lipid levels.

Table 4 Interrater reliability for continuous variables registered at the first hospital that treated the patient during an acute myocardial infarction event of the Norwegian Myocardial Infarction Register


Notes: aNumber of cases with registrations for both the raters. bCalculated as hospital abstractors minus audit nurses. cStandard deviation of the difference. dIncluded patients transferred from another hospital.


Abbreviations: CI, confidence interval; HDL, high density lipoprotein; ICC, intraclass correlations coefficient.

Discussion

Data in medical registers should be correct and complete if the registers are to be used for measuring and improving the quality of medical care.2–4 We found that most of the variables in the Norwegian Myocardial Infarction Register had excellent or good agreement, including date and time variables, medical history, investigations and diagnostic procedures during hospitalization, medication, and discharge destination. Only moderate agreement was found for family history of coronary heart disease, diagnostic ECG, and complications during hospitalization, and agreement was fair for AMI location. Missing data were frequent for symptom onset time, family history of coronary heart disease, body mass index, infarction location, and new Q-wave.

Reliable data on time variables and ECG are important for assessing the quality of the initial treatment in patients with AMI. As in the Swedish register of acute ischemic heart disease (RIKS-HIA), we found high disagreement between the abstractors for the diagnostic ECG.26 In our study, the audit nurses coded the ECG as it was described in the medical records, a routine also used by many of the hospitals. The diagnostic ECG had eight categories, which could be difficult to distinguish using only the description in the medical records. To ensure higher quality of ECG registration, we suggest that ECG registration be done by personnel who can make an independent assessment of the ECG, and not by persons who must rely on imprecise ECG descriptions in medical records. We found excellent reliability for the STEMI/NSTEMI variable. This is in line with the high agreement in the registration of STEMI/NSTEMI that has been observed between different hospitals treating the same patient during an AMI event.37

Several variables, such as family history of coronary heart disease and AMI location, had a high number of unknown registrations and missing values and low agreement. This may be because the abstractors did not know the definition of a family history and/or because the medical records contained imprecise information. For smoking status, the audit nurses had fewer unknown registrations than the hospitals, which may reflect better knowledge of the definitions and more time to find the correct information in the medical records. However, the audit nurses registered overall fewer complications than the hospital abstractors, which may be due to lack of information in the medical records. The moderate agreement for heart failure as a complication could be due to an ambiguous definition in the user manual.

The variables body mass index and symptom onset time were not mandatory to register, and the number of missing values was high. For symptom onset time, the audit nurses had more missing values than the hospital abstractors, probably because onset time was incompletely documented in the medical records. Other studies have shown low agreement, a high number of missing values, or inconsistent recording of symptom onset time in medical records.38,39 Symptom onset time is essential for determining patient delay and for assessing whether the initial treatment strategy was according to guidelines. Rosamond et al recommend including a structured scheme in the medical record in which prehospital delay and critical times for all patients with stroke-like symptoms must be documented.40 They suggest categorizing onset time into morning, afternoon, evening, and overnight if the exact onset time is unknown.40 A structured symptom onset time scheme could also be included in the medical records for patients with AMI. Our results indicate that registration while the patient is hospitalized is preferable because of better access to patient data.39 Ideally, registration directly into the electronic register would be preferred because of online validation and user guidance for each variable.

This study has several limitations. First, differences in the data collection methods between the hospital abstractors and the audit nurses may have affected the results. Second, we had insufficient resources to register an adequate number of patients at each hospital; we could therefore not compare results among the hospitals. Third, we had no gold standard for correct registrations.

Conclusion

We found that most of the variables in a national myocardial infarction register had good or excellent interobserver reliability. For certain variables, however, the definitions and coding categories should be revised to improve validity. Precise definitions of data elements and proper training of data abstractors are necessary to ensure that clinical registries contain valid and reliable data.

Acknowledgments

The research for this paper was financially supported by the Liaison Committee between the Central Norway Regional Health Authority (RHA) and the Norwegian University of Science and Technology (NTNU). Thanks to Anniken Karlson Kristiansen from St. Olavs University Hospital, Trondheim, Norway, for her assistance in data collection and to Tormod Digre from St. Olavs University Hospital, Trondheim, Norway, for his help in preparing data for the randomization. Stig A Slørdahl is now CEO of the Central Norway Regional Health Authority.

Disclosure

The authors report no conflicts of interest in this work.

References

1.

Bhatt DL, Drozda JP, Shahian DM, et al. ACC/AHA/STS statement on the future of registries and the performance measurement enterprise: a report of the American College of Cardiology/American Heart Association Task Force on Performance Measures and the Society of Thoracic Surgeons. Ann Thorac Surg. 2015;100(5):1926–1941.

2.

Bufalino VJ, Masoudi FA, Stranne SK, et al. The American Heart Association’s recommendations for expanding the applications of existing and future clinical registries: a policy statement from the American Heart Association. Circulation. 2011;123(19):2167–2179.

3.

McNamara RL. Cardiovascular registry research comes of age. Heart. 2010;96(12):908–910.

4.

Solomon DJ, Henry RC, Hogan JG, Van Amburg GH, Taylor J. Evaluation and implementation of public health registries. Public Health Rep. 1991;106(2):142–150.

5.

Herrett E, Smeeth L, Walker L, Weston C; MINAP Academic Group. The Myocardial Ischaemia National Audit Project (MINAP). Heart. 2010;96(16):1264–1267.

6.

GRACE Investigators. Rationale and design of the GRACE (Global Registry of Acute Coronary Events) Project: a multinational registry of patients hospitalized with acute coronary syndromes. Am Heart J. 2001;141(2):190–199.

7.

Koster M, Asplund K, Johansson A, Stegmayr B. Refinement of Swedish administrative registers to monitor stroke events on the national level. Neuroepidemiology. 2013;40(4):240–246.

8.

Radovanovic D, Erne P. AMIS Plus: Swiss registry of acute coronary syndrome. Heart. 2010;96(12):917–921.

9.

Boyd NF, Pater JL, Ginsburg AD, Myers RE. Observer variation in the classification of information from medical records. J Chron Dis. 1979;32:327–332.

10.

Demlo LK, Campbell PM, Brown SS. Reliability of information abstracted from patients’ medical records. Med Care. 1978;16(12):995–1005.

11.

Ferreira-Gonzalez I, Marsal JR, Mitjavila F, et al. Patient registries of acute coronary syndrome: assessing or biasing the clinical real world data? Circ Cardiovasc Qual Outcomes. 2009;2(6):540–547.

12.

Xian Y, Fonarow GC, Reeves MJ, et al. Data quality in the American Heart Association Get With The Guidelines-Stroke (GWTG-Stroke): results from a national data validation audit. Am Heart J. 2012;163(3):392–398.

13.

Arts DG, De Keizer NF, Scheffer GJ. Defining and improving data quality in medical registries: a literature review, case study, and generic framework. J Am Med Inform Assoc. 2002;9(6):600–611.

14.

Prins H, Hasman A. Appropriateness of ICD-coded diagnostic inpatient hospital discharge data for medical practice assessment. A systematic review. Methods Inf Med. 2013;52(1):3–17.

15.

Sorensen HT, Sabroe S, Olsen J. A framework for evaluation of secondary data sources for epidemiological research. Int J Epidemiol. 1996;25(2):435–442.

16.

Register of Information and Knowledge about Swedish Heart Intensive Care Admission (RIKS-HIA). Available from: http://www.ucr.uu.se/swedeheart/index.php/start-riks-hia [updated May 5, 2016].

17.

National Institute for Cardiovascular Outcomes research (NICOR). Available from: https://www.ucl.ac.uk/nicor/about [updated May 5, 2016].

18.

National Registry of Acute Myocardial Infarction in Switzerland (AMIS Plus). Available from: http://www.amis-plus.ch/ [updated May 5, 2016].

19.

Aspberg S, Stenestrand U, Koster M, Kahan T. Large differences between patients with acute myocardial infarction included in two Swedish health registers. Scand J Public Health. 2013;41(6):637–643.

20.

Austin PC, Daly PA, Tu JV. A multicenter study of the coding accuracy of hospital discharge administrative data for patients admitted to cardiac care units in Ontario. Am Heart J. 2002;144(2):290–296.

21.

Herrett E, Shah AD, Boggon R, et al. Completeness and diagnostic validity of recording acute myocardial infarction events in primary care, hospital care, disease registry, and national mortality records: cohort study. BMJ. 2013;346:f2350.

22.

Madsen M, Davidsen M, Rasmussen S, Abildstrom SZ, Osler M. The validity of the diagnosis of acute myocardial infarction in routine statistics: a comparison of mortality and hospital discharge data with the Danish MONICA registry. J Clin Epidemiol. 2003;56(2):124–130.

23.

Mahonen M, Salomaa V, Brommels M, et al. The validity of hospital discharge register data on coronary heart disease in Finland. Eur J Epidemiol. 1997;13(4):403–415.

24.

Pajunen P, Koukkunen H, Ketonen M, et al. The validity of the Finnish Hospital Discharge Register and Causes of Death Register data on coronary heart disease. Eur J Cardiovasc Prev Rehabil. 2005;12(2):132–137.

25.

Messenger JC, Ho KK, Young CH, et al. The National Cardiovascular Data Registry (NCDR) Data Quality Brief: the NCDR Data Quality Program in 2012. J Am Coll Cardiol. 2012;60(16):1484–1488.

26.

SWEDEHEART. Annual report RIKS-HIA 2007. 2008.

27.

Forskrift om innsamling og behandling av helseopplysninger i Nasjonalt register over hjerte- og karlidelser. Available from: https://lovdata.no/dokument/SF/forskrift/2011-12-16-1250 [updated May 5, 2016].

28.

The Norwegian Myocardial Infarction Register. Available from: http://www.hjerteinfarktregisteret.no/no/Helsepersonell/Bukermanual/123918/ [updated May 5, 2016].

29.

Donner A, Eliasziw M. A goodness-of-fit approach to inference procedures for the kappa statistic: confidence interval construction, significance-testing and sample size estimation. Stat Med. 1992;11(11):1511–1519.

30.

Cicchetti DV, Feinstein AR. High agreement but low kappa: II. Resolving the paradoxes. J Clin Epidemiol. 1990;43(6):551–558.

31.

Cohen J. A coefficient of agreement for nominal scales. Educ Psychol Meas. 1960;20(1):37–46.

32.

Feinstein AR, Cicchetti DV. High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol. 1990;43(6):543–549.

33.

Gwet KL. Inter-rater reliability: dependency on trait prevalence and marginal homogeneity. Statist Meth Inter-Rater Reliab Assess. 2002;2:1–9.

34.

Gwet KL. Computing inter-rater reliability and its variance in the presence of high agreement. Br J Math Stat Psychol. 2008;61(Pt 1):29–48.

35.

McGraw KO, Wong SP. Forming inferences about some intraclass correlation coefficients. Psychol Methods. 1996;1:30–46.

36.

Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–174.

37.

The Norwegian Myocardial Infarction Register, Annual report 2014. Available from: http://www.hjerteinfarktregisteret.no/no/Rapporter/123785/ [updated May 5, 2016].

38.

Reeves MJ, Arora S, Broderick JP, et al. Acute stroke care in the US: results from 4 pilot prototypes of the Paul Coverdell National Acute Stroke Registry. Stroke. 2005;36(6):1232–1240.

39.

Reeves MJ, Mullard AJ, Wehner S. Inter-rater reliability of data elements from a prototype of the Paul Coverdell National Acute Stroke Registry. BMC Neurol. 2008;8:19.

40.

Rosamond WD, Reeves MJ, Johnson A, Evenson KR; Paul Coverdell National Acute Stroke Registry Prototype Investigators. Documentation of stroke onset time: challenges and recommendations. Am J Prev Med. 2006;31(6 Suppl 2):S230–S234.
