
Data quality, floor and ceiling effects, and test–retest reliability of the Mild Cognitive Impairment Questionnaire


Received 6 July 2017

Accepted for publication 11 October 2017

Published 15 January 2018. Volume 2018:9, Pages 43–47

DOI https://doi.org/10.2147/PROM.S145676




Katherine Dean,1 Zuzana Walker,2 Crispin Jenkinson1

1Health Services Research Unit, Nuffield Department of Population Health, University of Oxford, Oxford, 2Division of Psychiatry, University College London, London, UK

Background: The Mild Cognitive Impairment Questionnaire (MCQ) is a 13-item measure that assesses health-related quality of life (HRQoL) in people with mild cognitive impairment (PWMCI); it comprises two domains assessing the emotional and practical effects of the condition.
Objective: The aim of this study was to assess the psychometric properties of the MCQ.
Design: This is a longitudinal questionnaire-based study.
Setting: The participants were recruited from the memory clinics and research databases in the South of England.
Subjects: A total of 299 people aged 50 years and older with a diagnosis of mild cognitive impairment confirmed within the preceding 12 months.
Methods: MCQs were distributed to patients in memory clinics and those listed on research databases. Participants who returned completed questionnaires were sent a second copy of the MCQ to return 2 weeks after receiving the first questionnaire.
Results: Five hundred and seven questionnaires were distributed; response rates were 68.2% for the first questionnaire and 89.2% for the second. Among the returned questionnaires, completion rates for each item were high (>98%), and a full range of responses was received for each item, with no evidence of significant floor or ceiling effects. Internal consistency reliability for both scale scores at both time points was good, with Cronbach’s α≥0.84 in all cases. Test–retest reliability was excellent for both domains, with intraclass correlation coefficients of 0.90 and 0.89 for the practical and emotional domains, respectively. Paired sample t-tests also confirmed the stability of scale score distributions over time.
Conclusion: The MCQ has robust psychometric properties, which make it suitable for assessing HRQoL in PWMCI, including comparison of group level data in intervention studies.

Keywords: mild cognitive impairment, health-related quality of life, psychometric, validation

 

Introduction

Mild cognitive impairment (MCI) is a common condition, with a prevalence of ~3% in the general older population,1 and rates of diagnosis are increasing as a result of recent government policies encouraging the early diagnosis of dementia.2,3 MCI is associated with significant emotional and practical challenges for those living with the condition.4–6 Despite this, until recently, no validated patient-reported outcome measure (PROM) existed for the assessment of health-related quality of life (HRQoL) in MCI. This matters for two reasons. First, assessment of HRQoL is an important part of routine clinical practice, particularly given the evidence that the condition adversely affects quality of life. Second, there is growing interest in conducting trials of potentially disease-modifying dementia treatments in patients with MCI, who have a high rate of conversion to dementia and therefore constitute a study population with a “pre-dementia” condition. It has been noted in the literature that, until now, no PROMs developed specifically for MCI existed7,8 and that the lack of consistency in the outcome measures used has made it difficult to compare and interpret the results of many of the interventional studies carried out in this field.8 PROMs, defined by the UK Department of Health as “measures of a patient’s health status or health-related quality of life [...] typically short, self-completed questionnaires”,9 are an ideal tool for such studies, particularly as many of the cognitive assessments currently in use are not sufficiently sensitive to measure the mild degree of impairment seen in MCI. In addition, both the European Medicines Agency10 and the US Food and Drug Administration11 have issued guidance for the pharmaceutical industry regarding the use of PROMs in medical product development.

In order to address this deficiency, the Mild Cognitive Impairment Questionnaire (MCQ) was developed in a previous study.12 The development process involved semistructured interviews with people with MCI (PWMCI) and their carers. The data from these interviews were analyzed using qualitative methodology, and a draft version of the MCQ was produced. This version was refined following discussion with focus groups (consisting of PWMCI and their carers), and the final version was administered to a large number of PWMCI.19 The results were analyzed using factor analysis, which yielded a 13-item measure comprising two scales measuring “emotional effects” and “practical concerns”. Analysis of the data in that study showed that the MCQ has good psychometric properties in terms of internal consistency reliability (as measured by Cronbach’s α) and validity.

The aim of this study was to further assess the psychometric properties of the MCQ, including test–retest reliability, to facilitate its use both in clinical practice and, as a potential outcome measure, in intervention studies.

Methods

The MCQ, together with some basic demographic questions, was distributed (in person) to people diagnosed with MCI in memory clinics and (by post) to people on research databases in Berkshire, Essex, Hertfordshire, and Oxfordshire (UK); this was designated “Time Point 1”. People invited to take part in the study were aged 50 years or older with a diagnosis of MCI confirmed in a memory service (using whichever criteria the diagnosing clinician had applied) within the 12 months preceding recruitment. The memory clinics and research databases in Oxford and Essex from which the majority of participants were recruited (both for the study in which the MCQ was originally developed12 and for this study) all used the diagnostic criteria for MCI set out by Petersen et al.13 Participants were asked to complete and return the questionnaire within 2 weeks, and those recruited from research databases received one written reminder if their questionnaire had not been received after 2 weeks. Time Point 1 recruitment was carried out over a 24-month period between May 2014 and May 2016.

The participants who returned questionnaires were sent a second copy of the MCQ to complete and return 2 weeks after receiving the first questionnaire (ie, Time Point 2).

MCQ data were analyzed using the following criteria (an illustrative computational sketch follows the list):

  1. Data completeness, ie, rates of item-level missing data;
  2. Response distributions for each item including floor and ceiling effects;
  3. Features of scale score distributions at each time point;
  4. Internal consistency reliability of scale scores;
  5. Test–retest reliability of the scale scores between the two time points;
  6. Comparison of the scale score distributions between the two time points.
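As an illustration only, the first two criteria could be computed from an item-level dataset along the lines sketched below. The MCQ’s item wording, response labels, and scoring instructions are not reproduced in this article, so the column names and the assumed 1 (“never”) to 5 (“always”) coding are hypothetical rather than the published scoring rules.

```python
# Illustrative sketch only: the column names and the assumed 1 ("never") to
# 5 ("always") item coding are hypothetical, not the published MCQ scoring.
import pandas as pd

ITEMS = [f"mcq_{i:02d}" for i in range(1, 14)]  # 13 hypothetical item columns
SCALE_MIN, SCALE_MAX = 1, 5                     # assumed response range

def item_quality(df: pd.DataFrame) -> pd.DataFrame:
    """Per-item missing-data rate and proportions at the scale extremes
    (criteria 1 and 2 above). Which extreme counts as a 'floor' or 'ceiling'
    effect depends on the scoring direction, which is not specified here."""
    rows = []
    for item in ITEMS:
        responses = df[item]
        valid = responses.dropna()
        rows.append({
            "item": item,
            "missing_%": 100 * responses.isna().mean(),
            "lowest_category_%": 100 * (valid == SCALE_MIN).mean(),
            "highest_category_%": 100 * (valid == SCALE_MAX).mean(),
        })
    return pd.DataFrame(rows)
```

Under this sketch, criterion 2 amounts to comparing the extreme-category percentages for each item against the floor and ceiling thresholds discussed in the “Results” and “Discussion and conclusion” sections.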

Consent and approvals

Approval of this study was granted by the North of Scotland Research Ethics Service, REC Reference 14/NS/0031.

All participants gave (or had already given) written informed consent to their details being stored and used for the purposes of research. Consent to participate in the study was implied by completion and return of the study questionnaires – this was clearly explained in the study information supplied to the participants.

Results

Response rates and demographics

Five hundred and seven questionnaires were distributed at Time Point 1; of these, 346 (68.2%) questionnaires were returned. Three hundred and forty-five questionnaires were distributed at Time Point 2; of these, 308 (89.2%) questionnaires were returned.

Forty-seven completed questionnaires were excluded as the participants did not meet the inclusion criteria (or there was insufficient information included in the demographic information to ensure that they did); therefore, 299 completed questionnaires were included in the analysis.

The characteristics of the participants included in the study are listed in Table 1.

Table 1 Characteristics of study participants

Data completeness and response distributions

Seventeen (5.7%) respondents did not complete all 13 items of the MCQ at Time Point 1, and 15 (4.9%) respondents did not complete all items at Time Point 2; therefore, it was not possible to calculate both dimension scores for these participants. However, as listed in Table 2, response rates to each individual question were high.

Table 2 Item completeness and response distribution for each MCQ item (Time Point 1)

Abbreviation: MCQ, Mild Cognitive Impairment Questionnaire.

The distribution of responses for each MCQ item is also given in Table 2. Response distributions, as might be expected for a “mild” condition, tended to be skewed toward better health, but a full range of responses to all items was observed and none had floor or ceiling effects >29%.

Scale score features for the two dimensions of the MCQ

Results from the administration of the MCQ at the two time points are listed in Table 3.

Table 3 Descriptive statistics, score distributions, and internal consistency reliability for the MCQ domains at each time point

Abbreviation: MCQ, Mild Cognitive Impairment Questionnaire.

Cronbach’s α coefficients14 were calculated to estimate the internal consistency reliability of each scale score, as listed in Table 3; α values >0.7 are recommended for group-level hypothesis testing,15 and α values >0.9 suggest that the measure may be appropriate for use at an individual level.16
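For reference, Cronbach’s α for a scale of k items is computed from the item variances and the variance of the total scale score:14

\[ \alpha \;=\; \frac{k}{k-1}\left(1 \;-\; \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_T^{2}}\right) \]

where k is the number of items in the scale, \(\sigma_i^{2}\) is the variance of the ith item, and \(\sigma_T^{2}\) is the variance of the total scale score.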

Test–retest reliability

Intraclass correlation coefficients (type 3,1)17 were calculated for each of the scale scores between the two time points in order to evaluate test–retest reliability. For the practical scale, the intraclass correlation coefficient was 0.90 (95% CI 0.87–0.92, P<0.001), and for the emotional scale, it was 0.89 (95% CI 0.86–0.92, P<0.001), indicating excellent test–retest reliability for both domains.
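In the Shrout and Fleiss framework,17 ICC(3,1) is estimated from a two-way mixed-effects analysis of variance as

\[ \mathrm{ICC}(3,1) \;=\; \frac{\mathrm{BMS} - \mathrm{EMS}}{\mathrm{BMS} + (k-1)\,\mathrm{EMS}} \]

where BMS is the between-subjects mean square, EMS is the residual (error) mean square, and k is the number of measurement occasions (here, k = 2).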

Paired sample t-tests were performed to evaluate whether there was any change in the distribution of the scale scores between the two time points. These revealed no significant differences (practical: n=243, t=0.61, P=0.54; emotional: n=249, t=0.025, P=0.98).
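As a minimal illustration of this step, the comparison corresponds to a paired-samples t-test on the domain scores of participants who completed both administrations; the scores below are arbitrary placeholder values, not study data.

```python
# Illustrative only: made-up domain scores for five hypothetical participants
# at Time Points 1 and 2; the analysis reported above used n=243 and n=249 pairs.
from scipy.stats import ttest_rel

t1_scores = [55.0, 60.0, 72.5, 48.0, 65.0]  # Time Point 1 (placeholder values)
t2_scores = [57.5, 58.0, 70.0, 50.0, 65.0]  # Time Point 2 (placeholder values)

t_stat, p_value = ttest_rel(t1_scores, t2_scores)
print(f"paired t = {t_stat:.2f}, P = {p_value:.2f}")
```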

Discussion and conclusion

Several analyses were used to further assess the psychometric properties of the MCQ. First, data completeness was assessed; this was good for all 13 items, indicating that there were no items to which a high proportion of subjects did not respond. This is important because high levels of nonresponse can indicate problems with an item, such as subjects finding it difficult to understand or upsetting to answer.

Second, response distributions for each item were analyzed; this showed that all response categories were used for all items, with no significant floor or ceiling effects, indicating that the items tap a wide range of HRQoL effects. This is an important feature because MCI is known to be a heterogeneous condition and, consequently, its effects can vary considerably between individuals. Generally, an item is considered free of significant “floor” or “ceiling” effects if fewer than 40% of respondents select the “never” or “always” response category, respectively; it was therefore reassuring that the vast majority of MCQ items had floor effects <30% and ceiling effects <15%. Third, features of the scale score distributions, such as the spread of scores generated, were assessed at each time point; these again indicated that a wide range of effects is tapped by the scores.

Fourth, the internal consistency reliability of the scale scores was assessed using Cronbach’s α. High levels of internal consistency reliability provide greater confidence when using a measure to compare treatment groups, for example, in randomized intervention trials, such as those using populations of PWMCI as study subjects in trials of potentially disease-modifying dementia treatments. As discussed in the “Introduction” section, such trials are becoming increasingly common, and there is currently a dearth of instruments appropriate for use as outcome measures. In all cases, Cronbach’s α exceeded 0.7, indicating the measure’s usefulness in group-level hypothesis testing. This means that the measure could be used as a reliable tool to assess the effect of interventions in studies using PWMCI as subjects. For the emotional domain, Cronbach’s α was 0.9 at both time points, which might support the use of this scale for assessment at the individual patient level in clinical practice, another important potential use of the MCQ. Ongoing research on the MCQ’s sensitivity to change will help to establish whether it is appropriate for use in individual clinical assessment.

Fifth, test–retest reliability of the scale scores between the two time points was examined using intraclass correlation coefficients; these showed excellent reliability for both domains. This is important because a measure used to assess change over time must produce stable scores in the absence of clinical change, so that any change in score can be confidently attributed to a change in clinical condition rather than to measurement error. The high reliability of both domains over a time frame (2 weeks) in which no significant clinical change would be expected in MCI suggests that the MCQ meets this requirement. Finally, scale score distributions at the two time points were compared using paired t-tests, which also confirmed that the scale score distributions remained stable over time.

It should also be noted that the response rates to the questionnaire were relatively high (~70% at the first time point and ~90% at the second), which suggests that the practicalities of completing the MCQ are not unduly challenging for PWMCI. These response rates are relatively good when compared with similar work: response rates for such surveys rarely exceed 70% and are often considerably lower.

A possible limitation of the study is the potential for variability in the definition of MCI used when diagnosing participants. The decision to allow recruitment of participants diagnosed with MCI using “whichever criteria the diagnosing clinician had applied” was intentional, so that participants recruited to the study would reflect “real world” clinic populations and the results would be applicable to them. Although the inclusion criteria allowed variable definitions to be used for diagnosis, the majority of participants (both in this study and in the one in which the MCQ was originally developed12) were recruited from memory clinics and research databases that applied the Petersen et al13 diagnostic criteria. As a result, the participants were most likely relatively homogeneous with respect to MCI diagnosis. In addition, although the participants covered a wide age range (50–95 years), the interquartile range was much narrower (70–82 years), which also suggests reasonable homogeneity among the subjects and corresponds roughly to the age at which MCI diagnosed by the Petersen et al18 criteria has its peak prevalence.

The MCQ has been developed using robust qualitative and quantitative methodology as described previously.12 This article provides further evidence that the MCQ has good psychometric properties that make it suitable for use in assessing areas of HRQoL relevant to people living with MCI and comparing treatment groups in intervention trials. Further work, which is currently ongoing, will provide more evidence regarding sensitivity to change and, potentially, use of the measure at the individual patient level.

Acknowledgment

The authors would like to thank all the research staff who helped with recruitment to this study, including those within North Essex Partnership NHS Foundation Trust, Oxford Health NHS Foundation Trust, and Hertfordshire Partnership University NHS Foundation Trust. The study was funded by a British Geriatrics Society Start Up Grant. The sponsors played no role in the design, execution, analysis, or interpretation of data, or in the writing of the study.

Disclosure

The Mild Cognitive Impairment Questionnaire is under copyright by Oxford University Innovation Limited. KD, CJ, and ZW may stand to gain financially should it be used for commercial purposes. The authors report no other conflicts of interest in this work.

References

1. Gauthier S, Ferris S. Outcome measures for probable vascular dementia and Alzheimer’s disease with cerebrovascular disease. Int J Clin Pract Suppl. 2001;120:29–39.

2. Department of Health. Living Well with Dementia: A National Dementia Strategy. 2009. Available from: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/168220/dh_094051.pdf. Accessed January 4, 2018.

3. Department of Health. Prime Minister’s Challenge on Dementia. 2012. Available from: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/215101/dh_133176.pdf. Accessed January 4, 2018.

4. Joosten-Weyn Banningh L, Vernooij-Dassen M, Rikkert MO, Teunisse JP. Mild cognitive impairment: coping with an uncertain label. Int J Geriatr Psychiatry. 2008;23(2):148–154.

5. Lu Y-FY, Haase JE, Farran CJ. Perspectives of persons with mild cognitive impairment: sense of being able. Alzheimers Care Today. 2007;8(1):75–86.

6. McIlvane JM, Popa MA, Robinson B, Houseweart K, Haley WE. Perceptions of illness, coping, and well-being in persons with mild cognitive impairment and their care partners. Alzheimer Dis Assoc Disord. 2008;22(3):284–292.

7. Weiner MW, Veitch DP, Aisen PS, et al; Alzheimer’s Disease Neuroimaging Initiative. The Alzheimer’s disease neuroimaging initiative: a review of papers published since its inception. Alzheimers Dement. 2012;8(1 Suppl):S1–S68.

8. Frank L, Lenderking WR, Howard K, Cantillon M. Patient self-report for evaluating mild cognitive impairment and prodromal Alzheimer’s disease. Alzheimers Res Ther. 2011;3(6):35.

9. Department of Health. Guidance on the Routine Collection of Patient Reported Outcome Measures (PROMs). 2009. Available from: http://webarchive.nationalarchives.gov.uk/20130105081711/http://www.dh.gov.uk/prod_consum_dh/groups/dh_digitalassets/@dh/@en/documents/digitalasset/dh_092625.pdf. Accessed January 4, 2018.

10. European Medicines Agency. Reflection Paper on the Regulatory Guidance for the Use of Health Related Quality of Life (HRQL) Measures in the Evaluation of Medicinal Products. 2005. Available from: https://www.ispor.org/workpaper/emea-hrql-guidance.pdf. Accessed January 4, 2018.

11. U.S. Department of Health and Human Services FDA Center for Drug Evaluation and Research; U.S. Department of Health and Human Services FDA Center for Biologics Evaluation and Research; U.S. Department of Health and Human Services FDA Center for Devices and Radiological Health. Guidance for industry: patient-reported outcome measures: use in medical product development to support labeling claims: draft guidance. Health Qual Life Outcomes. 2006;4:79.

12. Dean K, Jenkinson C, Wilcock G, Walker Z. The development and validation of a patient-reported quality of life measure for people with mild cognitive impairment. Int Psychogeriatr. 2014;26(3):487–497.

13. Petersen RC, Smith GE, Waring SC, Ivnik RJ, Kokmen E, Tangelos EG. Aging, memory, and mild cognitive impairment. Int Psychogeriatr. 1997;9(Suppl 1):65–69.

14. Cronbach L. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16(3):297–334.

15. Nunnally J, Bernstein I. Psychometric Theory. 3rd ed. New York: McGraw Hill; 1994.

16. Ware JE, Kosinski M, Keller SD. SF-36 Physical and Mental Summary Scores: A User’s Manual. Boston: The Health Institute, New England Medical Centre; 1994.

17. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979;86(2):420–428.

18. Petersen RC, Roberts RO, Knopman DS, et al. Prevalence of mild cognitive impairment is higher in men. Neurology. 2010;75(10):889–897.

19. innovation.ox.ac.uk [webpage on the Internet]. The Mild Cognitive Impairment Questionnaire (MCQ). Oxford University Innovation; 2014. Available from: https://innovation.ox.ac.uk/outcome-measures/mild-cognitive-impairment-questionnaire-mcq/. Accessed January 4, 2018.
