
The Healthy Aging Brain Care (HABC) Monitor: validation of the Patient Self-Report Version of the clinical tool designed to measure and monitor cognitive, functional, and psychological health


Received 15 March 2014

Accepted for publication 2 July 2014

Published 5 December 2014, Volume 2014:9, Pages 2123–2132

DOI https://doi.org/10.2147/CIA.S64140




Patrick O Monahan,1 Catherine A Alder,2–4 Babar A Khan,1–3 Timothy Stump,1 Malaz A Boustani1–4

1Indiana University School of Medicine, Indianapolis, IN, USA; 2Indiana University Center for Aging Research, Indianapolis, IN, USA; 3Regenstrief Institute Inc., Indianapolis, IN, USA; 4Eskenazi Health, Indianapolis, IN, USA

Background: Primary care providers need an inexpensive, simple, user-friendly, easily standardized, sensitive-to-change, and widely available multidomain instrument to measure the cognitive, functional, and psychological symptoms of patients suffering from multiple chronic conditions. We previously validated the Caregiver Report Version of the Healthy Aging Brain Care Monitor (HABC Monitor) for measuring and monitoring the severity of symptoms through caregiver reports. The purpose of this study was to assess the reliability and validity of the Patient Self-Report Version of the HABC Monitor (Self-Report HABC Monitor).
Design: Cross-sectional study.
Setting: Primary care clinics affiliated with a safety net urban health care system in Indianapolis, Indiana, USA.
Subjects: A total of 291 subjects aged ≥65 years, with a mean age of 72.7 (standard deviation 6.2) years; 76% were female and 56% were African American.
Analysis: Psychometric validity and reliability of the Self-Report HABC Monitor.
Results: Among 291 patients analyzed, the Self-Report HABC Monitor demonstrated excellent fit for the confirmatory factor analysis model (root mean square error of approximation =0.030, comparative fit index =0.974, weighted root mean square residual =0.837) and good internal consistency (0.78–0.92). Adequate convergent–divergent validity (differences between the Telephone Interview for Cognitive Status test-based cognitive function impairment versus nonimpairment groups) was demonstrated only when patients were removed from analysis if they had both cognitive function test impairment and suspiciously perfect self-report HABC Monitor cognitive floor scores of 0.
Conclusion: The Self-Report HABC Monitor demonstrates good reliability and validity as a clinically practical multidimensional tool for measuring symptoms. The tool can be used along with its caregiver version to provide useful feedback (via monitoring of symptoms) for modifying care plans. Determining the validity of HABC Monitor scores from patients who self-report a perfect cognitive score of 0 requires cognitive function test results (eg, Telephone Interview for Cognitive Status or Mini Mental State Examination), Caregiver Report HABC Monitor scores, or further clinical examination to rule out the possibility that the patient is denying or unaware of their cognitive symptoms.

Keywords: symptoms, monitor, validation, cognitive, psychological, functional

Introduction

Older adults attending primary care clinics have multiple chronic conditions that result in a spectrum of cognitive, functional, and psychological symptoms.1–3 These symptoms often reduce the quality of life and lead to high health care utilization.1–3 The current primary care system is not designed to manage the burden of the cognitive, functional, and psychological symptoms of multiple chronic conditions.1,4 However, randomized controlled trials completed in the last decade5–7 established the effectiveness of the collaborative care model in reducing the burden of cognitive, functional, and psychological symptoms in primary care. An essential component of this collaborative care model was the continuous monitoring of both the symptoms and the effectiveness of the individualized care protocols designed to manage these symptoms.4–6,8 In order to implement this model effectively, primary care providers needed a new clinical tool (similar to the blood pressure cuff used for the recognition and management of hypertension) – a practical, accurate, sensitive-to-change, multidomain instrument for measuring and tracking cognitive, functional, and psychological symptoms of patients with comorbid chronic conditions.

The Healthy Aging Brain Care (HABC) Monitor was developed in 2008 to address the need for such a tool.9 Two versions of the HABC Monitor were developed in parallel. The Caregiver Report Version relies on the observations and perceptions of the patient’s informal caregiver, while the Self-Report Version is utilized to collect information directly from the patient. Both versions of the tool include 27 items to measure three domains of the patient’s symptoms (cognitive, functional, and psychological). The Caregiver Report Version of the HABC Monitor is a reliable, valid, clinically practical, multidimensional tool for measuring and monitoring the symptom severity of patients through their caregiver reports.9 The objective of the present study is to assess the reliability and validity of the Self-Report Version utilizing a cohort of patients different from the prior validation study.

Methods

Instrument development

The development of the HABC Monitor was described in our earlier paper.9 Briefly, the instrument was developed by an interdisciplinary expert panel using a flexible template capable of accommodating paper, telephone, or web-based data entry. The developers intended the relative benefit of the domains to depend on the clinical objective. For example, the cognitive domain (especially from the Caregiver Report Version) should be most useful for facilitating diagnosis of mild cognitive impairment or dementia, and the psychological domain should be most responsive to therapy.9

Clinical setting and population

The present study uses data from a cross-sectional phone survey collected on two cohorts of primary care patients in Eskenazi Health, Indianapolis, IN, USA.8 Eskenazi Health is a safety net health care system primarily serving an urban racially and ethnically mixed population of vulnerable adults.4 Patients meeting the following criteria were eligible for the study: 1) age ≥65 years, 2) had at least one visit to primary care during the period from January 1, 2008 to April 1, 2011, and 3) had any International Classification of Diseases, Ninth Revision (ICD-9) code (using both inpatient and outpatient Regenstrief Medical Record Systems10 over the 3 years 2005–2008) indicating cognitive impairment or had received at least one prescription of a cholinesterase inhibitor or memantine or had any ICD-9 code indicating depression or had received at least one prescription of a selective serotonin reuptake inhibitor.

Subject recruitment and testing

During a quality improvement project to evaluate the implementation of the collaborative care model for patients suffering from cognitive or emotional problems, we contacted a random sample of patients meeting the inclusion criteria at two time points – prior to the implementation of the collaborative care model (2009) and 1 year after the implementation of such a model (2010). All of the primary care providers agreed to allow their patients to be contacted. Each patient was asked to complete the Self-Report Version of the HABC Monitor and the Telephone Interview for Cognitive Status (TICS) over the telephone. All procedures were approved by the institutional review board of the Indiana University-Purdue University campus in Indianapolis, IN, USA.

Assessment questionnaires

Demographic data regarding patients’ age, sex, and race were collected during the telephone survey.

HABC Monitor – Self-Report Version

The Self-Report Version includes 27 items covering cognitive, functional, and psychological symptoms. Each item uses the same four response categories, which capture the frequency of the target symptom in the past 2 weeks. Table 1 displays all items of the Self-Report Version instrument. The instrument is also available on a public website (http://www.agingbraincare.org/tools/habc-monitor/).

Table 1 Item distributions, missing rates, confirmatory factor analysis (CFA), and item–total correlations
Notes: All items had a four-category response scale: 0= none at all (0–1 day), 1= several days (2–6 days), 2= more than half the days (7–11 days), 3= almost daily (12–14 days). % miss = percentage of participants missing the item.
Abbreviation: SD, standard deviation.

TICS

The 11-item TICS was administered to patients over the telephone. The TICS is a brief, standardized test of global cognitive function developed for use when in-person cognitive testing is impractical or inefficient.11 The tool was intended to measure the cognitive functions affected by dementia and delirium.12 The TICS items briefly assess orientation, concentration, memory, naming, comprehension, calculation, reasoning, judgment, and distal limb praxis.11,12 The total test score ranges from 0 to 41. A lower score represents greater cognitive impairment.

Scaling procedure

Each Self-Report Version scale score was computed by summing all items in the scale. The total score was the sum of all 27 items across the three symptom domains. Higher scores represent worse symptoms, both for the individual domains and for the overall scale. A person-specific, scale-specific mean of the nonmissing items was substituted for missing items if ≤50% of the items on the particular scale were missing.
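
To make the scoring rule concrete, the following is a minimal sketch in Python with pandas (the paper's analyses were actually run in SAS and Mplus, so this is an illustration, not the authors' code); the DataFrame and column names are hypothetical stand-ins for one scale's items.

```python
import numpy as np
import pandas as pd

def score_scale(items: pd.DataFrame) -> pd.Series:
    """Score one HABC Monitor scale (rows = patients, columns = 0-3 items).

    Implements the rule described above: substitute a person-specific,
    scale-specific mean of the nonmissing items for any missing item,
    but only when <=50% of the scale's items are missing; otherwise the
    scale score is left missing.
    """
    n_items = items.shape[1]
    n_missing = items.isna().sum(axis=1)
    person_mean = items.mean(axis=1)                    # mean of nonmissing items
    filled = items.apply(lambda col: col.fillna(person_mean))
    totals = filled.sum(axis=1)
    return totals.where(n_missing <= n_items / 2)       # NaN when too much is missing

# Hypothetical usage; "cog1".."cog6" stand in for the cognitive items:
# df["cognitive"] = score_scale(df[["cog1", "cog2", "cog3", "cog4", "cog5", "cog6"]])
```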

Statistical analysis

The data available for the present study allowed us to assess three pieces of validity and reliability evidence – confirmatory factor analysis (CFA) of the hypothesized three-factor model of patient symptoms, internal consistency, and convergent–divergent validity.

Data quality and descriptive analyses

Missing data rates were calculated for each item to ensure data completeness. Item variability was assessed by calculating the item frequency distributions, ranges, and standard deviations. Item and scale scores were examined for floor and ceiling effects (ie, clustering of participants at the best and worst possible scores, respectively).
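
As a small illustration of the floor/ceiling check, a sketch under the same Python/pandas assumption as above:

```python
import pandas as pd

def floor_ceiling_pct(scores: pd.Series, best: float, worst: float) -> tuple:
    """Percentage of respondents at the best (floor) and worst (ceiling)
    possible scores. On the HABC Monitor higher is worse, so the floor
    is 0 and the ceiling is 3 x (number of items in the scale)."""
    valid = scores.dropna()
    return 100 * (valid == best).mean(), 100 * (valid == worst).mean()

# Hypothetical usage for a 6-item scale scored 0-18:
# pct_floor, pct_ceiling = floor_ceiling_pct(df["cognitive"], best=0, worst=18)
```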

Psychometric analyses

CFA was performed using Mplus software Version 5.21 (Muthén and Muthén, Los Angeles, CA, USA).13 All other analyses were performed with SAS Version 9.3 (SAS Institute Inc., Cary, NC, USA). The following criteria of good model fit were used: comparative fit index (CFI) >0.95,14 root mean square error of approximation (RMSEA) <0.06,14 and weighted root mean square residual (WRMR) <1.00.15 To determine whether fit of the confirmatory model could be improved by adding paths or cross-loadings, modification indices were inspected. The ordinal categorical items were modeled with nonlinear ordered logit link functions. Internal consistency reliability was estimated with coefficient alpha,16 with reliability of ≥0.70 considered as satisfactory for group comparison purposes.17 Convergent–divergent validity was assessed using analysis of variance and logistic regression-based receiver operator curve analysis to compare HABC Monitor scores between impaired and nonimpaired patient groups defined by the TICS cognitive function test scores.
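
To illustrate two of the statistics named above, the following Python sketch computes coefficient alpha from its standard formula and an AUC from a single-predictor logistic regression, approximating (not reproducing) the SAS analyses; scikit-learn is an assumption, and all variable names are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Coefficient alpha for one scale (complete cases, for simplicity):
    alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    complete = items.dropna()
    k = complete.shape[1]
    item_var_sum = complete.var(axis=0, ddof=1).sum()
    total_var = complete.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def scale_auc(scores, impaired) -> float:
    """AUC from a single-predictor logistic regression, mirroring the
    'logistic regression-based receiver operator curve' analysis."""
    X = np.asarray(scores, dtype=float).reshape(-1, 1)
    y = np.asarray(impaired, dtype=int)
    prob = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    return roc_auc_score(y, prob)
```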

Results

Demographics

Among 985 patients who met the eligibility criteria, 374 could not be contacted (mostly because of nonworking phone numbers or no answer, plus a few patient deaths). Of the 611 patients who were contacted, 291 agreed to participate in the study and completed the telephone survey. Compared with patients who could not be contacted (n=374), patients who participated in the survey (n=291) did not differ statistically in race (P=0.39) or age (P=0.11); participants were more likely (P=0.001) to be female (76% vs 64%). Compared with patients who refused (n=320), participants (n=291) did not differ statistically in race (P=0.71); however, participants were more likely to be female (76% vs 66%; P=0.002) and were statistically younger by a clinically small average difference of 1.1 years (mean 72.7 vs 73.8 years; P=0.03).

The mean age of patients who completed the survey was 72.7 (standard deviation 6.2) years. Females were in the majority, constituting 76% of the cohort. There were 162 (55.7%) African Americans, 116 (39.9%) Caucasians, and 13 (4.4%) patients who reported other races.

Data quality

All items reported by patients on the Self-Report Version exhibited the full range of response categories across the four item response options, as demonstrated in Table 1. The item responses were more heavily distributed among the 0 and 1 scores than the 2 and 3 scores. Missing item rates were very low and ranged from 0% to 3.8% (Table 1). Item responses exhibited enough variability to result in adequate scale score variability (eg, range), adequate reliability (as measured by Cronbach alpha), and excellent confirmatory dimensionality fit, as described next.

CFA

We performed a confirmatory test of the factor analytic model. Our hypotheses were that the items would load significantly and >0.40 on the prespecified factors and would demonstrate acceptable item–total correlations above a 0.30 threshold. The hypothesized CFA model was the final model determined from our previous caregiver-reported data set.9 We also hypothesized that the factors would be significantly correlated. The three-factor CFA model demonstrated excellent fit in the present self-reported data: RMSEA (0.030), CFI (0.974), and WRMR (0.837) (Table 2). All loadings for items on their designated factor were well above 0.40, ranging from 0.57 to 0.88 (Table 1). The three factors were significantly and substantially correlated (cognitive, functional, r=0.81; cognitive, psychological, r=0.87; functional, psychological, r=0.93).

Table 2 CFA fit statistics
Abbreviations: RMSEA, root mean square error of approximation; CI, confidence interval; CFI, comparative fit index; WRMR, weighted root mean square residual; CFA, confirmatory factor analysis.

All items were retained based on clinical relevance in addition to psychometrics. For example, the “falling or tripping” item had the highest item floor effect (92%, ie, 268/289; Table 1) and the lowest (but acceptable) item–total correlation (0.32), but this item is very important for patient outcomes and had a high factor loading (0.70) on the functional subscale, a subscale that had very good internal consistency (0.81). Modification indices indicated that the model could not be improved by adding paths from items to other factors.

We also hypothesized that a single factor model would fit the data well because of the comorbidity of symptoms and the conceptual and clinical relatedness of the three factors. The one-factor CFA model showed adequate fit (as expected, not quite as excellent a fit as the three-factor model) with thresholds being satisfied for all three fit indices, indicating that it is appropriate to report an HABC Monitor total score in addition to three subscale scores. We thank an anonymous reviewer for recommending that we compute the average variance extracted (AVE) for each of the three latent factors. The AVEs were smaller than the squared interfactor correlations (Table 3), suggesting that the three factors are not strongly distinct, which is also supported by the good fit of the one-factor model.

Table 3 HABC-M score features: internal–consistency reliability, score distributions, and interscore correlations
Notes: % floor is the percentage of patients who reported the lowest (best) possible score. % ceiling is the percentage of patients who reported the highest (worst) possible score. The AVE, which is on the diagonal in parentheses, is computed separately for each factor from the three-factor CFA model and is equal to the average of the squared loadings.
Abbreviations: HABC-M, Healthy Aging Brain Care Monitor; AVE, average variance extracted; SD, standard deviation; CFA, confirmatory factor analysis.
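
The Table 3 notes define the AVE for a factor as the average of its squared standardized loadings. A small Python sketch of that computation, and of the resulting distinctness check (a Fornell–Larcker-style comparison of each AVE against the squared interfactor correlation), follows; the loadings shown are invented for illustration, not the paper's estimates.

```python
import numpy as np

def average_variance_extracted(loadings) -> float:
    """AVE for one latent factor: the mean of its squared standardized
    loadings (the computation described in the Table 3 notes)."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

# Illustrative loadings only, not the paper's estimates. The reported
# cognitive-psychological factor correlation was r = 0.87; a factor pair
# counts as clearly distinct only when each AVE exceeds r squared.
ave_cognitive = average_variance_extracted([0.70, 0.75, 0.80, 0.68])
print(ave_cognitive, 0.87 ** 2, ave_cognitive > 0.87 ** 2)  # 0.54, 0.76, False
```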

We thank the same reviewer for recommending that we assess the fit statistics of a post hoc two-factor model in which the functional and psychological domains were combined (their factor correlation was high, 0.93). We also computed fit statistics for the two other post hoc two-factor models. Although not quite as good a fit as the three-factor model, all two-factor models demonstrated good fit (Table 2). Because the three factors have somewhat distinct clinical relevance and actionability, and all loadings from the three-factor model were >0.40 on their assigned factors, the remaining psychometrics below were investigated for the three subscales in addition to the total score. Subscales were scored by summing the items according to the domain assignments in the hypothesized three-factor CFA model in Table 1 because modification indices could not improve upon the excellent fit of this model.

Reliability and scale score features

We found high internal consistency of the Self-Report Version scales (0.78–0.92, Table 3). The observed scale scores covered most of the possible score range. In addition, the scale scores demonstrated a sufficient dispersion of scores for the purpose of assessing and monitoring the symptoms’ severity. The largest floor effect (cognitive subscale) did not exceed 54%, and the resulting dispersion of subscale and total scores was satisfactorily indicated by the wide observed score range and the adequate standard deviation for the Self-Report Version subscales and total scores (Table 3).

The three patient symptom scales were moderately to highly correlated (0.60–0.76), indicating that the domains are only somewhat distinct (Table 3). For example, the correlation between functional and psychological scales (0.76) implies substantial shared variance between the two scales (58%) and yet also substantial variance unique to each scale (42%). Because the total score was also highly internally consistent, this suggests that the HABC Monitor can be reliably scored on both the total score and subscale scores. In summary, the Self-Report Version scales demonstrated adequate internal consistency and scale score features, including ample dispersion of scale scores and moderate to high correlations between patient symptom scales.

Convergent–divergent validity

We thank an anonymous reviewer for recommending that we investigate the floor effects for the HABC Monitor subscale, which we incorporated into the convergent–divergent validity analysis. We found that a substantial number of patients reported a floor (perfect) score of 0 on the HABC Monitor cognitive scale when they also demonstrated impairment on the TICS cognitive function test. Self-report validity does not require extremely high concordance with cognitive test performance because, in isolation, self-reports, performance-based test results, and the caregiver’s proxy assessments likely do not provide a comprehensive picture of the complex symptoms and needs of a person.18,19 Nevertheless, an extreme (perfect floor) self-report cognitive scale score of 0 in the presence of cognitive function test impairment suggests that some patients who are impaired on global cognitive functioning may deny or be unaware of their cognitive symptoms when they self-report perfect scores. Therefore, we performed the convergent–divergent construct validity comparisons both for the total sample and separately after removing patients who both reported a floor effect of 0 on the HABC Monitor cognitive scale and demonstrated cognitive function test impairment on the TICS. Convergent validity is supported by the extent to which the HABC Monitor scales separate the cognitive function impairment groups defined by the TICS test score. The functional and psychological scales were expected to differ between TICS groups but not by as much as the cognitive scale. Note that because TICS is a test of cognitive function, it would be inappropriate to remove patients who scored at the floor for the entire HABC Monitor total score for this exploratory analysis; instead, we removed only patients who scored at the floor for the HABC Monitor cognitive score, but only if they also demonstrated impairment on the TICS cognitive function test.

We defined impairment on the TICS cognitive function test in three different ways to thoroughly explore the floor issue in the context of convergent validity. One method is based on the TICS manual for defining impairment (TICS 0–32).12 The second method employs a cut point commonly used by TICS studies20 to define possible clinically significant impairment (TICS 0–30). The third method uses the crosswalk table in the TICS manual12 to translate TICS scores to estimated Mini Mental State Examination (MMSE) scores and then to apply a commonly used threshold for MMSE scores (MMSE 0–23; TICS 0–26).
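
A minimal Python sketch of these three dichotomizations and of the floor-score filtering described above; the cut points come from the text, while the function name and column names are hypothetical.

```python
def tics_impaired(tics_total: int, method: str = "manual") -> bool:
    """Dichotomize a TICS total score (0-41; lower = more impaired)
    using the three cut points described in the text."""
    cut = {
        "manual": 32,    # TICS manual impairment range: TICS 0-32
        "clinical": 30,  # cut point common in TICS studies: TICS 0-30
        "mmse": 26,      # MMSE 0-23 via the manual's crosswalk: TICS 0-26
    }[method]
    return tics_total <= cut

# Hypothetical column names. Drop the "suspicious" cases before the
# convergent-divergent comparison: patients at the self-report cognitive
# floor (score 0) who are nevertheless impaired on the TICS.
# keep = ~((df["cognitive"] == 0) & df["tics"].map(tics_impaired))
# analysis_sample = df[keep]
```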

In the total sample, none of the HABC Monitor scales (cognitive, functional, psychological, total score) significantly separated the TICS impaired versus nonimpaired groups, regardless of which threshold was used to define impairment (Table 4). However, when suspicious self-report cognitive score floor effects were removed, the HABC Monitor cognitive scale was significantly different between the TICS impairment groups, and the receiver operator curve area under the curve (AUC) values were between 0.82 and 0.85, depending on which TICS threshold was used to define cognitive impairment, demonstrating convergent validity (Table 4). The functional and psychological HABC Monitor scales were also significantly different between the TICS impairment groups when suspicious floor effects were removed, although, as expected, the AUC values were smaller (compared with the AUC for the HABC Monitor cognitive scale), ranging from 0.65 to 0.70, demonstrating divergent validity. In the total sample, the largest AUC for the HABC Monitor cognitive scale was only 0.57, and the AUC for the functional and psychological HABC Monitor domains was near 0.50 (Table 4). Thus, the Self-Report Version of the HABC Monitor appears to be valid for patients who have impairment on cognitive function tests only when those patients report HABC Monitor cognitive scores above the floor score of 0.

Table 4 Convergent–discriminant construct validity and investigation of self-report cognitive score floor effects
Notes: Part A impaired category (TICS 0–32) includes what the TICS manual describes as the ambiguous (26–32), mildly (21–25), and moderately or severely (0–20) impaired categories for the manual’s suggested qualitative interpretive ranges of TICS scores. Part B TICS studies have often used ≤30 as a cut point for possible clinically significant impairment (see citation in text). Part C commonly used MMSE categories were used after translating TICS to MMSE using the TICS manual crosswalk table. The nonimpaired category is defined by the MMSE category of normal (24+), which translates to TICS 27–41; the impaired category (MMSE 0–23; TICS 0–26) includes the MMSE mild impairment (MMSE 18–23; TICS 18–26), the MMSE moderate impairment (MMSE 10–17; TICS 7–17), and the MMSE severe impairment (MMSE 0–9; TICS 0–6) categories. The ROC analysis was derived from logistic regression. Differences between TICS cognitive function groups (impaired, nonimpaired) are expected to be greater for the HABC-M cognitive scale than for the HABC-M functional and psychological scales.
Abbreviations: HABC-M, Healthy Aging Brain Care Monitor; TICS, Telephone Interview for Cognitive Status; SD, standard deviation; ANOVA, analysis of variance; ROC, receiver operator curve; AUC, area under the curve; MMSE, Mini Mental State Examination.

Sensitivity analyses

We examined possible effects of race on results. We re-estimated Cronbach’s coefficient alpha in white versus nonwhite patients. Cronbach’s coefficient alpha differed very little by race subgroups. Alphas continued to be in the range of 0.78–0.92, as they were in the total sample.

Discussion

The Self-Report Version of the HABC Monitor demonstrates good reliability and validity as a clinically practical multidimensional tool for assessing and monitoring symptoms of older adults attending primary care clinics. The self-report tool performed similarly to the caregiver report tool with respect to CFA and internal consistency.9 From a parsimony perspective, the total score is appealing because the one-factor model fit the data well, the latent factors were not greatly distinct, and patients with multiple chronic conditions have symptoms that cluster together, for which treating one symptom or disorder often affects other symptoms or disorders. However, clinicians may also be interested in monitoring the three subscale scores because internal psychometric characteristics are only one piece of evidence for how to score a tool. One must also consider conceptual relevance, clinical actionability, and external validity information. For example, greater differences were observed between TICS cognitive impairment groups for the HABC Monitor cognitive scale than for the HABC Monitor functional and psychological scales.

An investigation of self-report cognition floor effects and convergent–divergent validity resulted in the following caveat regarding cognitive impairment. Both the Self-Report HABC Monitor and the Caregiver Report HABC Monitor are valid for assessing symptoms of patients who have not met thresholds for impairment on cognitive function tests. The self-report tool can be used along with the caregiver report tool to provide useful feedback (via monitoring of symptoms) for modifying care plans. However, for patients demonstrating impairment on cognitive function tests, HABC Monitor information can be trusted from the Caregiver Report Version, but can be trusted from the Self-Report Version only for patients who self-report HABC Monitor cognitive scores above the perfect floor score of 0. When cognitive function test scores (eg, TICS or MMSE) are not available, floor scores of 0 on the Self-Report Version of the HABC Monitor would require caregiver report information and/or further clinical examination to determine whether the patient is underreporting their cognitive symptoms. Among patients who self-report HABC Monitor cognitive scale scores >0, we observed moderate discrimination (according to the AUC) between self-report HABC Monitor scores and cognitive function test scores. More research is needed to determine whether substantial underreporting, denial, or unawareness of cognitive symptoms also occurs for patients who are impaired on cognitive function tests and who self-report HABC Monitor cognitive scale scores above but near 0 (ie, good but not perfect self-report scores such as 1 or 2).

Before the analyses, we had suspected that the patient-reported items might not “hang together” as well (ie, in terms of internal consistency and factor analytic model fit) or exhibit as much variability as the caregiver-reported data. Indeed, item responses were slightly less variable, and item and scale floor effects were slightly greater, for the self-report data compared with the caregiver data, even though the cognitive severity of the patient populations was similar in the two studies.9 However, the internal psychometric properties of the Self-Report Version were very good, including data quality such as item and scale score dispersion, reliability, and CFA model fit. For example, coefficient alpha was slightly higher for the caregiver-reported scales,9 but the patient-reported scales also demonstrated very good internal consistency, with the lowest alpha being 0.78. Both patient and caregiver reports demonstrated an alpha of 0.92 for the total scale.9 Furthermore, the hypothesized CFA model showed even better fit in the patient-reported data than the following good fit shown for the caregiver-reported data:9 RMSEA =0.059, CFI =0.929, and WRMR =1.055.

Limitations

We did not have longitudinal data available in this data set to estimate test–retest reliability of the Self-Report Version over brief periods of time or to test sensitivity to change over longer periods of time. The overall adequacy of the tool must be evaluated on both caregiver-reported and patient-reported psychometrics. For example, because the tool was designed primarily to monitor symptoms over time, a very important piece of validity evidence is sensitivity to change, especially for the psychological domain, which was expected to be the most sensitive to change. Our caregiver-reported study showed that all subscales, especially the psychological domain, were significantly sensitive to change with respect to a gold standard for change in neuropsychiatric symptoms (ie, the Neuropsychiatric Inventory).9 Therefore, a limitation of the present study is the lack of longitudinal data to assess sensitivity to change for self-reported responses.

We did not collect construct–convergent external validator scales to compare with the functional and psychological HABC Monitor domains, although the TICS served as a construct–convergent comparator for the HABC Monitor cognitive domain. We also lacked clinical diagnoses for assessing diagnostic accuracy. For several reasons, we did not determine optimal cut points for sensitivity and specificity of the HABC Monitor cognitive scale for detecting TICS cognitive impairment. First, at least three sensible thresholds exist for dichotomizing the TICS total score into impaired and nonimpaired groups. Second, the TICS telephone interview provides an estimate of global cognitive function but is not intended to diagnose a specific disorder, which would require a more comprehensive cognitive assessment.12 Third, our results indicated that the Self-Report Version of the HABC Monitor appears to be valid only when floor scores of 0 have been removed for patients who demonstrate test-based cognitive impairment (eg, on the TICS or MMSE). Although the HABC Monitor was developed primarily for tracking symptoms over time, future research could develop cut points for optimal sensitivity and specificity compared with gold standard clinical diagnoses. This could be done for diagnoses of mild cognitive impairment or dementia, as well as for diagnoses relevant to other domains, such as depression or anxiety, which are relevant to the HABC Monitor psychological domain.

For the race subgroup sensitivity analysis, sample sizes were not large enough to perform factor analysis or to compare TICS impaired versus nonimpaired groups. As noted in our earlier article on the Caregiver Report Version,9 validating the HABC Monitor in other settings is a reasonable next step for both the Self-Report and Caregiver Report Versions. With regard to other settings, we are in the process of using psychometric results from both caregiver-reported and patient-reported data sets to develop a brief (eg, ten items) version for the busy primary care setting. In addition, future research should assess the sensitivity to change of the Self-Report Version: eg, compared with reliable change scores of valid but lengthier instruments such as the Neuropsychiatric Inventory.21

Conclusion

The present report, together with our earlier report, which included supporting evidence for sensitivity-to-change validity for the caregiver version of the tool,9 suggests that the Caregiver Report and Self-Report Versions of the HABC Monitor are reliable, valid, and useful tools for monitoring the cognitive, functional, and psychological symptoms of patients while delivering care to patients and their caregivers under the collaborative care model. However, determining the validity of HABC Monitor scores from patients who self-report a perfect cognitive score of 0 requires cognitive function test results (eg, TICS or MMSE), Caregiver Report HABC Monitor scores, or further clinical examination to rule out the possibility that the patient is denying or unaware of their cognitive symptoms.

Acknowledgments

Author MAB was the principal investigator of this study. The study was funded by a grant from the National Institute of Mental Health (R24MH080827) and a grant from the National Institute on Aging (R01AG043465-01A1). Work by the first author (POM) was funded in part by a grant from the National Institute on Aging (1R01AG043465-01A1). The HABC Monitor is an instrument copyrighted by Drs Boustani, Galvin, and Callahan and the Indiana University School of Medicine. The HABC Monitor and scoring rules are available at http://www.agingbraincare.org/tools/habc-monitor/.

Disclosure

The authors have no conflicts of interest to disclose.


References

1. Boustani M, Callahan CM, Unverzagt FW, et al. Implementing a screening and diagnosis program for dementia in primary care. J Gen Intern Med. 2005;20:572–577.
2. Schubert CC, Boustani M, Callahan CM, et al. Comorbidity profile of dementia patients in primary care: are they sicker? J Am Geriatr Soc. 2006;54:104–109.
3. Sha MC, Callahan CM, Counsell SR, Westmoreland GR, Stump TE, Kroenke K. Physical symptoms as a predictor of health care use and mortality among older adults. Am J Med. 2005;118:301–306.
4. Boustani MA, Sachs GA, Alder CA, et al. Implementing innovative models of dementia care: the Healthy Aging Brain Center. Aging Ment Health. 2011;15:13–22.
5. Callahan CM, Boustani MA, Unverzagt FW, et al. Effectiveness of collaborative care for older adults with Alzheimer disease in primary care. JAMA. 2006;295:2148–2157.
6. Vickrey BG, Mittman BS, Connor KI, et al. The effect of a disease management intervention on quality and outcomes of dementia care. Ann Intern Med. 2006;145:713–726.
7. Counsell SR, Callahan CM, Clark DO, et al. Geriatric care management for low-income seniors: a randomized controlled trial. JAMA. 2007;298:2623–2633.
8. Callahan CM, Boustani MA, Weiner M, et al. Implementing dementia care models in primary care settings: the Aging Brain Care Medical Home. Aging Ment Health. 2011;15:5–12.
9. Monahan PO, Boustani MA, Alder C, et al. Practical clinical tool to monitor dementia symptoms: the HABC-Monitor. Clin Interv Aging. 2012;7:143–157.
10. McDonald CJ, Overhage JM, Tierney WM, et al. The Regenstrief Medical Record System: a quarter century experience. Int J Med Inform. 1999;54:225–253.
11. Brandt J, Spencer M, Folstein M. The Telephone Interview for Cognitive Status. Neuropsychiatry Neuropsychol Behav Neurol. 1988;1:111–117.
12. Brandt J, Folstein MF. Telephone Interview for Cognitive Status (TICS): Professional Manual. Florida, USA: PAR; 2003.
13. Muthén LK, Muthén BO. Mplus User’s Guide. 5th ed. Los Angeles, CA: Muthén & Muthén; 1998–2007.
14. Hu L-T, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Modeling. 1999;6:1–55.
15. Yu C-Y. Evaluating Cutoff Criteria of Model Fit Indices for Latent Variable Models with Binary and Continuous Outcomes [dissertation]. Los Angeles, CA: University of California, Los Angeles; 2002. Available from: http://www.statmodel.com/download/Yudissertation.pdf. Accessed September 18, 2014.
16. Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297–334.
17. Nunnally JC, Bernstein IH. Psychometric Theory. 3rd ed. New York, NY: McGraw-Hill; 1994.
18. Sloane PD, Zimmerman S, Williams CS, Reed PS, Gill KS, Preisser JS. Evaluating the quality of life of long-term care residents with dementia. Gerontologist. 2005;45(1):37–49.
19. Zimmerman S, Sloane PD, Williams CS, et al. Dementia care and quality of life in assisted living and nursing homes. Gerontologist. 2005;45(1):133–146.
20. Espeland MA, Rapp SR, Katula JA, et al. Telephone Interview for Cognitive Status (TICS) screening for clinical trials of physical activity and cognitive training: the Seniors Health and Activity Research Program Pilot (SHARP-P) study. Int J Geriatr Psychiatry. 2011;26:135–143.
21. Cummings JL. The Neuropsychiatric Inventory: assessing psychopathology in dementia patients. Neurology. 1997;48:S10–S16.
