Information bias in health research: definition, pitfalls, and adjustment methods
Authors Althubaiti A
Received 22 January 2016
Accepted for publication 8 March 2016
Published 4 May 2016 Volume 2016:9 Pages 211–217
Editor who approved publication: Dr Scott Fraser
Department of Basic Medical Sciences, College of Medicine, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia
Abstract: As with other fields, medical sciences are subject to different sources of bias. While understanding sources of bias is a key element for drawing valid conclusions, bias in health research continues to be a very sensitive issue that can affect the focus and outcome of investigations. Information bias, otherwise known as misclassification, is one of the most common sources of bias affecting the validity of health research. It originates from the approach used to obtain or confirm study measurements. This paper seeks to raise awareness of information bias in observational and experimental research study designs, as well as to enrich discussions concerning bias problems. Specifying the types of bias can be essential to limiting their effects, and the use of adjustment methods may serve to improve clinical evaluation and health care practice.
Keywords: self-report bias, social desirability bias, recall bias, misclassification, measurement error bias, confirmation bias
Bias can be defined as any systematic error in the design, conduct, or analysis of a study. In health studies, bias can arise from two different sources: the approach adopted for selecting subjects for a study or the approach adopted for collecting or measuring data from a study. These are termed, respectively, selection bias and information bias.1 Bias can have different effects on the validity of medical research findings. In epidemiological studies, bias can lead to inaccurate estimates of association, or to over- or underestimation of risk parameters. Identifying the sources of bias and their impact on final results is a key element in drawing valid conclusions. Information bias, otherwise known as misclassification, is one of the most common sources of bias affecting the validity of health research. It originates from the approach used to obtain or confirm study measurements. These measurements can be obtained by experimentation (eg, bioassays) or observation (eg, questionnaires or surveys).
Medical practitioners are conscious of the fact that the results of their investigation can be deemed invalid if they do not account for major sources of bias. While a number of studies have discussed different types of bias,2–4 the problem of bias is still frequently ignored in practice. Often bias is unintentionally introduced into a study by researchers, making it difficult to recognize, but it can also be introduced intentionally. Thus, bias remains a very sensitive issue to address and discuss openly. The aim of this paper is to raise awareness of three specific forms of information bias in observational and experimental medical research study designs: self-reporting bias, the often-marginalized measurement error bias, and confirmation bias. We present clear and simple strategies to improve the decision-making process. As will be seen, specifying the type of bias can be essential for limiting its implications. The “Self-reporting bias” section discusses the problem of bias in self-reporting data and presents two examples of self-reporting bias, social desirability bias and recall bias. The “Measurement error bias” section describes the problem of measurement error bias, while the “Confirmation bias” section discusses the problem of confirmation bias.
Self-reporting bias
Self-reporting is a common approach for gathering data in epidemiologic and medical research. This method requires participants to respond to the researcher’s questions without interference from the researcher. Examples of self-reporting include questionnaires, surveys, and interviews. However, relative to other sources of information, such as medical records or laboratory measurements, self-reported data are often argued to be unreliable and threatened by self-reporting bias.
The issue of self-reporting bias represents a key problem in the assessment of most observational (such as cross-sectional or comparative, eg, case–control or cohort) research study designs, although it can still affect experimental studies. Nevertheless, when self-reporting data are correctly utilized, they can help to provide a wider range of responses than many other data collection instruments.5 For example, self-reporting data can be valuable in obtaining subjects’ perspectives, views, and opinions.
There are a number of aspects of bias that accompany self-reported data and these should be taken into account during the early stages of the study, particularly when designing the self-reporting instrument. Bias can arise from social desirability, recall period, sampling approach, or selective recall. Here, two examples of self-reporting bias are discussed: social desirability and recall bias.
Social desirability bias
When researchers use a survey, questionnaire, or interview to collect data, in practice, the questions asked may concern private or sensitive topics, such as self-report of dietary intake, drug use, income, and violence. Thus, self-reporting data can be affected by an external bias caused by social desirability or approval, especially in cases where anonymity and confidentiality cannot be guaranteed at the time of data collection. For instance, when determining drug usage among a sample of individuals, the results could underestimate the exact usage. The bias in this case can be referred to as social desirability bias.
Overcoming social desirability bias
The main strategy to prevent social desirability bias is to validate the self-reporting instrument before implementing it for data collection.6–11 Such validation can be either internal or external. In internal validation, the responses collected from the self-reporting instrument are compared with other data collection methods, such as laboratory measurements. For example, urine, blood, and hair analysis are some of the most commonly used validation approaches for drug testing.12–14 However, when laboratory measurements are not available or it is not possible to analyze samples in a laboratory for reasons such as cost and time, external validation is often used. There are different methods, including medical record checks or reports from family or friends to examine externally the validity of the self-reporting instrument.12,15
Note that several factors must be accounted for in the design and planning of validation studies, and in some cases, this can be very challenging. For example, the characteristics of the sample enrolled in the validation study should be carefully investigated. It is important to have a random selection of individuals so that results from the validation can be generalized to any group of participants. When the sampling approach is subjective rather than random, the results from the validation study apply only to the same group of individuals, and the differences between the results from validation studies and self-reporting instruments cannot be used to adjust for differences in any group of individuals.12,16 Hence, when choosing a predesigned and validated self-reporting instrument, information on the group of participants enrolled in the validation process should be obtained. This information should be provided as part of the research paper; if not, further communication with the authors of the work is needed to obtain it. For example, if the target of the study is to examine drug use among the general population with no specific background, then a self-reporting instrument that has been validated on a sample of the population having general characteristics should be used. In addition, combining more than one validation technique or using multiple data sources may increase the validity of the results.
Moreover, the possible effects of social desirability on study outcomes should be identified during the design phase of the data collection method. As such, measurement scales such as Marlowe–Crowne Social Desirability Scale17 or Martin–Larsen Approval Motivation score18 would be useful to identify and measure the social desirability aspect of the self-reported information.
Recall bias
Occasionally, study participants may erroneously provide responses that depend on their ability to recall past events. The bias in this case is referred to as recall bias, as it results from recall error. This type of bias often occurs in case–control or retrospective cohort study designs, where participants are required to evaluate exposure variables retrospectively using a self-reporting method, such as self-administered questionnaires.19–21
While the problems posed by recall bias are no less serious than those caused by social desirability, recall bias is more common in epidemiologic and medical research. The effect of recall bias has been investigated extensively in the literature, with particular focus on survey methods for measuring dietary or food intake.22–25 If not given proper consideration, it can either underestimate or overestimate the true effect or association. For example, a recall error in a dietary survey may result in underestimates of the association between dietary intake and disease risk.24
Overcoming recall bias
To overcome recall bias, it is important to recognize cases where recall errors are more likely to occur. Recall bias has been found to be related to a number of factors, including the length of the recall period (ie, short or long times of clinical assessment), characteristics of the disease under investigation (eg, acute, chronic), patient/sample characteristics (eg, age, accessibility), and study design (eg, duration of study).26–30 For example, in a case–control study, cases are often more likely to recall exposure to risk factors than healthy controls. As such, true exposure might be underreported in healthy controls and overreported in cases. The size of the difference between the observed rates of exposure to risk factors in cases and controls will consequently be inflated, and, in turn, the observed odds ratio will also increase.
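To make this inflation concrete, the following sketch computes a true and a recall-biased odds ratio from 2×2 tables. The counts are hypothetical and are not drawn from any cited study:

```python
# Hypothetical illustration: differential recall in a case-control study
# inflates the observed odds ratio.

def odds_ratio(exposed_cases, n_cases, exposed_controls, n_controls):
    """Odds ratio from a 2x2 table given exposed counts and group sizes."""
    a, b = exposed_cases, n_cases - exposed_cases        # cases: exposed/unexposed
    c, d = exposed_controls, n_controls - exposed_controls  # controls
    return (a * d) / (b * c)

# True exposure: 50/100 cases and 40/100 controls are exposed.
true_or = odds_ratio(50, 100, 40, 100)      # 1.5

# Recall bias: cases recall every exposure, but healthy controls report
# only 75% of theirs (30 instead of 40 exposed).
observed_or = odds_ratio(50, 100, 30, 100)  # ~2.33, biased away from 1.5

print(f"true OR = {true_or:.2f}, observed OR = {observed_or:.2f}")
```

Even modest underreporting in controls moves the estimate well away from the true association.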
Many solutions have proven to be useful for minimizing and, in some cases, eliminating recall bias. For example, to select the appropriate recall period, all the above-mentioned factors should be considered in relation to recall bias. Previous literature showed that a short recall period is preferable to a long one, particularly when asking participants about routine or frequent events. In addition, the recall period can be stratified according to participant demographics and the frequency of events they experienced. For example, when participants are expected to have a number of events to recall, they can be asked to describe a shorter period than those who would have fewer events to recall. Other methods to facilitate participant’s recall include the use of memory aids, diaries, and interviewing of participants prior to initiating the study.31
However, when it is not possible to eliminate recall errors, it is important to obtain information on the error characteristics and distribution. Such information can be obtained from previous or pilot studies and is useful when adjusting the subsequent analyses and choosing a suitable statistical approach for data analysis. It must be borne in mind that there are fundamental differences among the statistical adjustment approaches, which rest on different assumptions about the errors.22,32–36 When conducting a pilot study to examine error properties, a high level of accuracy and careful planning are needed, as validation largely depends on biological testing or laboratory measurements, which, besides being costly to conduct, are often themselves subject to measurement errors. For example, in a validation study to estimate sodium intake using a 24-hour urinary excretion method, the estimated sodium intake tended to be lower than the true amount.25 Despite these potential shortcomings, the use of biological testing or laboratory measurements is one of the most credible approaches to validate self-reported data. More information on measurement errors is provided in the next section.
It is important to point out that overcoming recall bias can be difficult in practice. In particular, bias often accompanies results from case–control studies. Hence, case–control studies can be conducted in order to generate a research hypothesis, but not to evaluate prognoses or treatment effects. Finally, more research is needed to assess the impact of recall bias. Studies to evaluate the agreements between responses from self-reporting instruments and gold-standard data sources should be conducted. Such studies can provide medical researchers with information concerning the validity of the self-reporting instrument before utilizing it in a study or for a disease under investigation. Other demographic factors associated with recall bias can also be identified. For instance, a high agreement was found between self-reported questionnaires and medical record diagnoses of diseases such as diabetes, hypertension, myocardial infarction, and stroke but not for heart failure.37
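As a sketch of how such agreement between a self-reporting instrument and a gold-standard source might be quantified, Cohen's kappa is one common chance-corrected agreement measure. The counts below are hypothetical, loosely in the spirit of comparing self-reported diagnoses with medical records:

```python
# Cohen's kappa from a 2x2 agreement table (self-report vs medical record).
# Hypothetical counts for illustration only.

def cohens_kappa(yes_yes, yes_no, no_yes, no_no):
    """Chance-corrected agreement between two binary raters."""
    n = yes_yes + yes_no + no_yes + no_no
    observed = (yes_yes + no_no) / n
    # Chance agreement expected from the marginal totals of each rater.
    p_both_yes = ((yes_yes + yes_no) / n) * ((yes_yes + no_yes) / n)
    p_both_no = ((no_yes + no_no) / n) * ((yes_no + no_no) / n)
    expected = p_both_yes + p_both_no
    return (observed - expected) / (1 - expected)

# 80 agree "diagnosed", 10 self-report only, 5 record only, 105 agree "not".
kappa = cohens_kappa(80, 10, 5, 105)
print(f"kappa = {kappa:.2f}")  # ~0.85, ie, high agreement
```

A kappa near 1 suggests the instrument can stand in for the record; a low kappa flags recall or reporting problems for that disease.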
Measurement error bias
Device inaccuracy, environmental conditions in the laboratory, and self-reported measurements are all sources of error. When these errors occur, observed measurements differ from the actual values; this is often referred to as measurement error, instrumental error, measurement imprecision, or measurement bias. These errors are encountered in both observational (such as cohort studies) and experimental (such as laboratory tests) study designs. For example, in an observational study of cardiovascular disease, measurements of blood cholesterol levels (as a risk factor) often include errors.
An analysis that ignores the effect of measurement error on the results can be referred to as a naïve analysis.22 Results obtained from using naïve analysis can be potentially biased and misleading. Such results can include inconsistent (or biased) and/or inefficient estimators of regression parameters, which may yield poor inferences about confidence intervals and the hypothesis testing of parameters.22,34
Note, however, that sampling variability should not be confused with measurement error variability. Commonly used statistical methods address sampling variability during data analysis, but they do not account for the uncertainty due to measurement error.
Measurement error bias has rarely been discussed or adjusted for in the medical research literature, except in the field of forensic medicine, where forensic toxicologists undoubtedly have the most theoretical understanding of measurement bias, as it is particularly relevant to their type of research.38 Known examples of measurement error bias have also been reported for blood alcohol content analyses.38,39
Systematic and random error
Errors can occur in a random or systematic manner. When errors are systematic, the observed measurements deviate from the true values in a consistent manner, that is, they are either consistently higher or consistently lower than the true values. For example, a device could be calibrated improperly and subtract a certain amount from each measurement. If this deviation is not accounted for, the results will contain systematic errors, and in this case the true values will be underestimated.
For random errors, the deviation of the observed from the true values is not consistent, causing errors to occur in an unpredictable manner. Such errors follow a distribution, in the simplest case a Gaussian (also called normal or bell-shaped) distribution, with a mean and standard deviation. When the mean is zero, the measured value should be reported together with an interval that reflects the estimated deviation from the actual value. When the target value is reported to fall within an interval between minimum and maximum levels, the size of the interval depends mainly on the size of the measurement errors: the larger the errors, the larger the uncertainty and hence the wider the interval, which affects the level of precision.
Random errors can also be proportional to the measured amount. In this case, the errors are referred to as multiplicative or non-Gaussian errors.36 These random errors occur due to uncontrollable and possibly unknown experimental factors, such as laboratory environmental conditions that affect concentrations in biological experiments. Examples of non-Gaussian errors can be found in breath alcohol measurements, in which the variability around the measurement increases with increasing alcohol concentration.40–42
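The contrast between additive (Gaussian) and proportional (multiplicative) random errors can be illustrated with a small simulation. The values below are illustrative only, not real breath alcohol data:

```python
# Additive vs multiplicative random error, with illustrative numbers.
import random

random.seed(0)
true_values = [0.02, 0.05, 0.10, 0.20]  # eg, a range of concentrations

# Additive error: the same spread (sd = 0.005) regardless of the amount.
additive = [x + random.gauss(0, 0.005) for x in true_values]

# Multiplicative error: a 5% relative spread, so the absolute deviation
# grows as the true amount grows, as in breath alcohol measurements.
multiplicative = [x * (1 + random.gauss(0, 0.05)) for x in true_values]

for x, a, m in zip(true_values, additive, multiplicative):
    print(f"true={x:.3f}  additive={a:.3f}  multiplicative={m:.3f}")
```

Repeating the multiplicative draw many times at a low and a high concentration shows the spread widening with the measured amount, which is what makes a single constant error interval inappropriate for such data.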
Adjusting for measurement error bias
The type and distribution of the measurement errors determine the type of adjustment method.34 When errors are systematic, calibration methods can be used to reduce their effects on the results. These methods are based on a reference measurement, obtained from a previous or pilot study, that is used as the correct quantity against which to calibrate the study measurements. As such, simple mathematical tools can be used once the errors are estimated. The adjustment methods for systematic errors are simpler to apply than those for random errors.
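A minimal sketch of such a calibration, under the simplifying assumption of a single constant offset estimated from one certified reference measurement (all values hypothetical):

```python
# Calibrating out a constant systematic error using a reference standard.
# All numbers are hypothetical.

reference_true = 100.0      # certified value of the reference material
reference_measured = 97.5   # what the miscalibrated device reports for it

# The device consistently under-reads by this estimated offset.
offset = reference_true - reference_measured  # 2.5

# Correct the study measurements by adding the estimated offset back.
raw = [88.1, 92.4, 101.0]
calibrated = [round(x + offset, 1) for x in raw]
print(calibrated)  # [90.6, 94.9, 103.5]
```

This is the "simple mathematical tools" case: once the systematic deviation is estimated, correction is a single arithmetic step, unlike the statistical machinery random errors require.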
Significant efforts have been made to develop sophisticated statistical approaches that adjust for the effect of random measurement errors.34 Commonly available statistical software packages, such as R (http://www.r-project.org) and Stata (StataCorp, College Station, TX, USA), include features that allow adjustments to be made for random measurement errors. Bias adjustment methods include simulation–extrapolation, regression calibration, and the instrumental variable approach.34 To select the best adjustment approach, knowledge of the error properties is essential. For example, the standard deviation and the shape of the error distribution should be identified through a previous or pilot study. Therefore, evaluation of the measuring technique is recommended to identify the error properties before starting the actual measuring procedure. Error properties should also be identified for survey measurement errors, for which methods for examining the reliability and validity of the survey, such as test–retest and record checks, can be used.
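As a sketch of why such adjustment matters, the simulation below shows the classic attenuation effect: with classical additive error in a predictor, the naive regression slope shrinks toward zero and can be corrected by the reliability ratio, in the spirit of regression calibration. This is simulated hypothetical data and a simplified correction, not a full implementation of any package's method:

```python
# Attenuation of a regression slope under classical additive measurement
# error, and a regression-calibration-style correction. Hypothetical data.
import random

random.seed(42)
n = 20000
beta_true = 2.0
x = [random.gauss(0, 1) for _ in range(n)]             # true exposure, var = 1
w = [xi + random.gauss(0, 1) for xi in x]              # measured exposure, error var = 1
y = [beta_true * xi + random.gauss(0, 0.5) for xi in x]  # outcome

def slope(u, v):
    """Ordinary least squares slope of v on u."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    var = sum((a - mu) ** 2 for a in u)
    return cov / var

naive = slope(w, y)  # attenuated: roughly beta_true * 0.5 = 1.0

# Reliability ratio: var(x) / (var(x) + var(error)) = 1 / (1 + 1) = 0.5.
# In practice it is estimated from replicate or validation data.
reliability = 0.5
corrected = naive / reliability  # back near beta_true = 2.0

print(f"naive slope = {naive:.2f}, corrected slope = {corrected:.2f}")
```

The naive analysis here would report roughly half the true association, which is exactly the kind of biased, misleading estimator described above for analyses that ignore measurement error.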
A simpler approach used by practitioners to minimize errors in epidemiologic studies is replication; in this method, replicates of the risk factor (eg, long-term average nutrients) are available and the mean of these values is calculated and used to present an approximate value relative to the actual value.43 These replicates can also be used to estimate the measurement error variance and apply an adjusted statistical approach.
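The replication approach can be sketched as follows, using hypothetical replicate measurements: the per-subject mean approximates the long-term value, and the pooled within-subject variance estimates the measurement error variance needed by the adjusted analyses:

```python
# Replication: average replicates and estimate error variance from the
# within-subject spread. Hypothetical nutrient-style measurements.

replicates = {
    "subject_1": [210.0, 195.0, 201.0],
    "subject_2": [180.0, 188.0, 172.0],
}

# Per-subject means stand in for the long-term average value.
means = {s: sum(r) / len(r) for s, r in replicates.items()}

def within_variance(values):
    """Sample variance of one subject's replicate measurements."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / (len(values) - 1)

# Pooled within-subject variance estimates the measurement error variance,
# which can then feed an adjusted statistical analysis.
error_var = sum(within_variance(r) for r in replicates.values()) / len(replicates)

print(means)                  # {'subject_1': 202.0, 'subject_2': 180.0}
print(round(error_var, 1))    # 60.5
```

With the error variance in hand, corrections such as the reliability-ratio adjustment of the previous section become applicable.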
Confirmation bias
Placing emphasis on one hypothesis because it does not contradict the investigator’s beliefs is called confirmation bias, otherwise known as confirmatory, ascertainment, or observer bias. Confirmation bias is a type of psychological bias in which a decision is made according to the subject’s preconceptions, beliefs, or preferences. Such bias results from human errors, including imprecision and misconception. Confirmation bias can also emerge owing to overconfidence, which results in contradictory evidence being ignored or overlooked.44 In medicine, confirmation bias is one of the main reasons for diagnostic errors and may cause inaccurate diagnosis and improper treatment management.45–47
An understanding of how the results of a medical investigation are affected by confirmation bias is important. Many studies have demonstrated that any aspect of investigation that requires human judgment is subject to confirmation bias,48–50 which was also found to influence the inclusion and exclusion criteria of randomized controlled trial study designs.51 There are many examples of confirmation bias in the medical literature, some of which are even illustrated in DNA matching.16
Overcoming confirmation bias
Researchers have shown that not accounting for confirmation bias can affect the reliability of an investigation. Several studies in the literature also suggest a number of approaches for dealing with this type of bias. An approach that is often used is to conduct multiple and independent checks on study subjects across different laboratories or through consultation with other researchers who may have differing opinions. Through this approach, scientists can seek independent feedback and confirmation.52 The use of blinding or masking procedures, whether single- or double-blinded, is important for enhancing the reliability of scientific investigations. These approaches have proven to be very useful in clinical trials, as they protect final conclusions from confirmation bias. The blinding may involve the participant, treating clinician, recruiter, and/or assessor.
In addition, researchers should be encouraged to evaluate evidence objectively, taking contradictory evidence into account, and to alter their perspectives through specific education and training programs,53,54 while avoiding overcorrection in their decision making.55
However, the problem with the above suggestions is that they become ineffective if specific factors of bias are not accounted for. For example, researchers could reach conclusions in haste due to external pressure to obtain results, which can be particularly true in highly sensitive clinical trials. Bias in such cases is a very sensitive issue, as it might affect the validity of the investigation. We can, however, avoid the possibility of such bias by developing and following well-designed study protocols.
Finally, in order to overcome confirmation bias and enhance the reliability of investigations, it is important to accept that bias is a part of investigations. Quantifying this inevitable bias and its potential sources must be part of well-developed conclusions.
Conclusion
Bias in epidemiologic and medical research is a major problem. Understanding the possible types of bias and how they affect research conclusions is important to ensure the validity of findings. This work discussed some of the most common types of information bias, namely self-reporting bias, measurement error bias, and confirmation bias, together with approaches for overcoming bias through the use of adjustment methods, and described the consequences of ignoring this bias for the validity of results. A summary of study types with common data collection methods, types of information bias, and adjusting or preventing strategies is presented in Table 1. The framework described in this work provides epidemiologists and medical researchers with useful tools for managing information bias in their scientific investigations.
Table 1 Type of study designs, common data collection methods, type of bias, and adjusting strategies
Bias is often not accounted for in practice. Even though a number of adjustment and prevention methods to mitigate bias are available, applying them can be rather challenging due to limited time and resources. For example, measurement error bias properties might be difficult to detect, particularly if there is a lack of information about the measuring instrument. Such information can be tedious to obtain as it requires the use of validation studies and, as mentioned before, these studies can be expensive and require careful planning and management. Although conducting the usual analysis and ignoring measurement error bias may be tempting, researchers should always follow the practice of reporting any evidence of bias in their results.
In order to minimize or eliminate bias, careful planning is needed at each step of the research design. For example, several rules and procedures should be followed when designing self-reporting instruments, and training interviewers is important in minimizing this type of bias. On the other hand, the effect of measurement error can be difficult to eliminate, since measuring devices and algorithms are often imperfect. A general rule is to review the level of accuracy of the measuring instrument before using it for data collection. Such precautions should greatly reduce any possible defects. Finally, confirmation bias can be eliminated from the results if investigators take into account the different factors that can affect human judgment.
Researchers should be familiar with sources of bias in their results, and additional effort is needed to minimize the possibility and effects of bias. Increasing the awareness of the possible shortcomings and pitfalls of decision making that can result in bias should begin at the medical undergraduate level and students should be provided with examples to demonstrate how bias can occur. Moreover, adjusting for bias or any deficiency in the analysis is necessary when bias cannot be avoided. Finally, when presenting the results of a medical research study, it is important to recognize and acknowledge any possible source of bias.
Disclosure
The author reports no conflicts of interest in this work.
References
1. Hennekens CH, Buring JE. Epidemiology in Medicine. Boston: Little, Brown, and Company; 1987.
2. Gerhard T. Bias: considerations for research practice. Am J Health Syst Pharm. 2008;65(22):2159–2168.
3. Pannucci CJ, Wilkins EG. Identifying and avoiding bias in research. Plast Reconstr Surg. 2010;126(2):619–625.
4. Choi BCK, Pak AWP. Bias, Overview. Chichester, UK: John Wiley and Sons, Ltd; 2005.
5. Zhu K, McKnight B, Stergachis A, Daling JR, Levine RS. Comparison of self-report data and medical records data: results from a case-control study on prostate cancer. Int J Epidemiol. 1999;28(3):409–417.
6. Magura S, Kang SY. Validity of self-reported drug use in high risk populations: a meta-analytical review. Subst Use Misuse. 1996;31(9):1131–1153.
7. Harrison L. The validity of self-reported drug use in survey research: an overview and critique of research methods. In: Harrison L, Hughes A, editors. The Validity of Self-Reported Drug Use: Improving the Accuracy of Survey Estimates. NIDA Research Monograph no 167. Rockville, MD; 1997:17–36.
8. Darke S. Self-report among injecting drug users: a review. Drug Alcohol Depend. 1998;51(3):253–263.
9. Brener ND, Billy JOG, Grady WR. Assessment of factors affecting the validity of self-reported health-risk behavior among adolescents: evidence from the scientific literature. J Adolesc Health. 2003;33(6):436–457.
10. Mills JF, Loza W, Kroner DG. Predictive validity despite social desirability: evidence for the robustness of self-report among offenders. Crim Behav Ment Health. 2006;13(2):140–150.
11. van de Mortel TF. Faking it: social desirability response bias in self-report research. Aust J Adv Nurs. 2008;25(4):40–48.
12. Harrison LD, Hughes A, National Institute on Drug Abuse, National Institutes of Health (U.S.). The Validity of Self-Reported Drug Use: Improving the Accuracy of Survey Estimates. Rockville, MD: U.S. Department of Health and Human Service, National Institutes of Health, National Institute on Drug Abuse, Division of Epidemiology and Prevention Research; 1997.
13. Schütz H, Gotta JC, Erdmann F, Risse M, Weiler G. Simultaneous screening and detection of drugs in small blood samples and bloodstains. Forensic Sci Int. 2002;126(3):191–196.
14. Ledgerwood DM, Goldberger BA, Risk NK, Lewis CE, Kato Price R. Comparison between self-report and hair analysis of illicit drug use in a community sample of middle-aged men. Addict Behav. 2008;33(9):1131–1139.
15. Stephens R. The truthfulness of addict respondents in research projects. Int J Addict. 1972;7(3):549–558.
16. Kassin SM, Dror IE, Kukucka J. The forensic confirmation bias: problems, perspectives, and proposed solutions. J Appl Res Mem Cogn. 2013;2(1):42–52.
17. Crowne DP, Marlowe D. A new scale of social desirability independent of psychopathology. J Consult Psychol. 1960;24:349–354.
18. Paulhus DL. Measurement and control of response bias. In: Robinson JP, Shaver PR, Wrightsman LS, editors. Measures of Personality and Social Psychological Attitudes. San Diego, CA: Academic Press; 1991.
19. Holmberg L, Ohlander EM, Byers T, et al. A search for recall bias in a case-control study of diet and breast cancer. Int J Epidemiol. 1996;25(2):235–244.
20. Neugebauer R, Ng S. Differential recall as a source of bias in epidemiologic research. J Clin Epidemiol. 1990;43(12):1337–1341.
21. Kip KE, Cohen F, Cole SR, et al; Herpetic Eye Disease Study Group. Recall bias in a prospective cohort study of acute time-varying exposures: example from the herpetic eye disease study. J Clin Epidemiol. 2001;54(5):482–487.
22. Fuller WA. Measurement Error Models. New York: John Wiley and Sons, Inc; 1987.
23. Nusser SM, Fuller WA, Guenther PM. Estimating usual dietary intake distributions: adjusting for measurement error and nonnormality in 24-hour food intake data. In: Lyberg L, Biemer P, Collins M, De Leeuw E, Dippo C, Schwarz N, Trewin D, editors. Survey Measurement and Process Quality. Hoboken, NJ: John Wiley and Sons, Inc; 1997.
24. Paeratakul S, Popkin BM, Kohlmeier L, Hertz-Picciotto I, Guo X, Edwards LJ. Measurement error in dietary data: implications for the epidemiologic study of the diet-disease relationship. Eur J Clin Nutr. 1998;52(10):722–727.
25. Ribi CH, Zakotnik JM, Vertnik L, Vegnuti M, Cappuccio FP. Salt intake of the Slovene population assessed by 24 h urinary sodium excretion. Public Health Nutr. 2010;13(11):1803–1809.
26. Bryant HE, Visser N, Love EJ. Records, recall loss, and recall bias in pregnancy: a comparison of interview and medical records data of pregnant and postnatal women. Am J Public Health. 1989;79(1):78–80.
27. Feldman Y, Koren G, Mattice D, Shear H, Pellegrini E, MacLeod SM. Determinants of recall and recall bias in studying drug and chemical exposure in pregnancy. Teratology. 1989;40(1):37–45.
28. Coughlin SS. Recall bias in epidemiologic studies. J Clin Epidemiol. 1990;43(1):87–91.
29. Weinstock MA, Colditz GA, Willett WC, Stampfer MJ, Rosner B, Speizer FE. Recall (report) bias and reliability in the retrospective assessment of melanoma risk. Am J Epidemiol. 1991;133(3):240–245.
30. Paganini-Hill A, Chao A. Accuracy of recall of hip fracture, heart attack, and cancer: a comparison of postal survey data and medical records. Am J Epidemiol. 1993;138(2):101–106.
31. Biemer PP, Groves RM, Lyberg LE, Mathiowetz NA, Sudman S. Measurement Errors in Surveys. Hoboken, NJ: John Wiley and Sons, Inc; 1991.
32. Carroll RJ, Freedman LS, Kipnis V. Measurement error and dietary intake. Adv Exp Med Biol. 1998;445:139–145.
33. Thomson CA, Giuliano A, Rock CL, et al. Measuring dietary change in a diet intervention trial: comparing food frequency questionnaire and dietary recalls. Am J Epidemiol. 2003;157(8):754–762.
34. Carroll RJ, Ruppert D, Stefanski LA, Crainiceanu CM. Measurement Error in Nonlinear Models. 2nd ed. New York: Chapman and Hall; 2006.
35. Althubaiti A, Donev AN. Mixture experiments with mixing errors. J Stat Plan Infer. 2011;141(2):692–700.
36. Althubaiti A, Donev A. Non-Gaussian Berkson errors in bioassay. Stat Methods Med Res. 2016;25(1):430–445.
37. Okura Y, Urban LH, Mahoney DW, Jacobsen SJ, Rodeheffer RJ. Agreement between self-report questionnaires and medical record data was substantial for diabetes, hypertension, myocardial infarction and stroke but not for heart failure. J Clin Epidemiol. 2004;57(10):1096–1103.
38. Gullberg RG. Estimating the measurement uncertainty in forensic blood alcohol analysis. J Anal Toxicol. 2012;36(3):153–161.
39. Moroni R, Blomstedt P, Wilhelm L, Reinikainen T, Sippola E, Corander J. Statistical modelling of measurement errors in gas chromatographic analyses of blood alcohol content. Forensic Sci Int. 2010;202(1–3):71–74.
40. Dror IE, Charlton D. Why experts make errors. J Forensic Ident. 2006;56(4):600–616.
41. Gullberg RG. Estimating the measurement uncertainty in forensic breath-alcohol analysis. Accred Qual Assur. 2006;11(11):562–568.
42. Dror I, Rosenthal R. Meta-analytically quantifying the reliability and biasability of forensic experts. J Forensic Sci. 2008;53(4):900–903.
43. Carroll RJ. Measurement error in epidemiologic studies. In: Armitage P, Colton T, editors. Encyclopedia of Biostatistics. New York: John Wiley and Sons; 1998:2491–2519.
44. Nickerson RS. Confirmation bias: a ubiquitous phenomenon in many guises. Rev Gen Psychol. 1998;2(2):175–220.
45. Berner ES, Graber ML. Overconfidence as a cause of diagnostic error in medicine. Am J Med. 2008;121(5 Suppl):S2–S23.
46. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78(8):775–780.
47. Reid MC, Lachs MS, Feinstein AR. Use of methodological standards in diagnostic test research. JAMA. 1995;274(8):645–651.
48. Hill C, Memon A, McGeorge P. The role of confirmation bias in suspect interviews: a systematic evaluation. Legal Criminal Psych. 2010;13(2):357–371.
49. Butt L. The forensic confirmation bias: problems, perspectives, and proposed solutions – commentary by a forensic examiner. J Appl Res Mem Cogn. 2013;2(1):59–60.
50. Nakhaeizadeh S, Dror IE, Morgan RM. Cognitive bias in forensic anthropology: visual assessment of skeletal remains is susceptible to confirmation bias. Sci Justice. 2014;54(3):208–214.
51. Goodyear-Smith FA, van Driel ML, Arroll B, Del Mar C. Analysis of decisions made in meta-analyses of depression screening and the risk of confirmation bias: a case study. BMC Med Res Methodol. 2012;12:76.
52. Budowle B, Bottrell MC, Bunch SG. A perspective on errors, bias, and interpretation in the forensic sciences and direction for continuing advancement. J Forensic Sci. 2009;54(4):798–809.
53. Rassin E. Individual differences in the susceptibility to confirmation bias. Neth J Psychol. 2008;64(2):87–93.
54. Powell MB, Hughes-Scholes CH, Sharman SJ. Skill in interviewing reduces confirmation bias. J Investig Psych Offender Profil. 2012;9(2):126–134.
55. Dror IE, Busemeyer JR, Basola B. Decision making under time pressure: an independent test of sequential sampling models. Mem Cognit. 1999;27(4):713–725.
© 2016 The Author(s). This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution - Non Commercial (unported, v3.0) License. By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms.