
Measuring the Patient Experience of Mental Health Care: A Systematic and Critical Review of Patient-Reported Experience Measures

Authors: Fernandes S, Fond G, Zendjidjian XY, Baumstarck K, Lançon C, Berna F, Schurhoff F, Aouizerate B, Henry C, Etain B, Samalin L, Leboyer M, Llorca PM, Coldefy M, Auquier P, Boyer L

Received 26 March 2020

Accepted for publication 23 June 2020

Published 3 November 2020. Volume 2020:14, Pages 2147–2161

DOI https://doi.org/10.2147/PPA.S255264


Review by Single anonymous peer review


Editor who approved publication: Dr Johnny Chen



Sara Fernandes,1 Guillaume Fond,1 Xavier Yves Zendjidjian,1 Karine Baumstarck,1 Christophe Lançon,1 Fabrice Berna,2 Franck Schurhoff,2 Bruno Aouizerate,2 Chantal Henry,2 Bruno Etain,2 Ludovic Samalin,2 Marion Leboyer,2 Pierre-Michel Llorca,2 Magali Coldefy,3 Pascal Auquier,1 Laurent Boyer1 On behalf of the French PREMIUM Group

1Aix-Marseille University, School of Medicine - La Timone Medical Campus, EA 3279: CEReSS - Health Service Research and Quality of Life Center, Marseille, France; 2FondaMental Foundation, Créteil, France; 3Institute for Research and Information in Health Economics (IRDES), Paris, France

Correspondence: Sara Fernandes
Aix-Marseille University, School of Medicine - La Timone Medical Campus, EA 3279: CEReSS - Health Service Research and Quality of Life Center, Marseille, France
Tel +33-660185077
Email [email protected]

Background: There is growing interest in measuring the patient experience of mental health care. Numerous patient-reported experience measures (PREMs) are currently available for mental health care, but there is little guidance for selecting the most suitable instruments. The objective of this systematic review was to provide an overview of the psychometric properties and content of available PREMs.
Methods: A comprehensive review following the preferred reporting items for systematic reviews and meta-analysis (PRISMA) guidelines was conducted using the MEDLINE database with no date restrictions. The content of PREMs was analyzed using an inductive qualitative approach, and the methodological quality was assessed according to Pesudovs quality criteria.
Results: A total of 86 articles examining 75 PREMs and totaling 1932 items were included. Only four PREMs used statistical methods from item response theory (IRT). The 1932 items covered seven key mental health care domains: interpersonal relationships (22.6%), respect and dignity (19.3%), access and care coordination (14.9%), drug therapy (14.1%), information (9.6%), psychological care (6.8%) and care environment (6.1%). Additionally, a few items focused on patient satisfaction (6.7%) rather than patient experience. No instrument covered the latent trait continuum of patient experience as defined by the inductive qualitative approach, and the psychometric properties of the instruments were heterogeneous.
Conclusion: This work is a critical step in the creation of an item library to measure mental health care patient-reported experience that will be used in France to develop, validate, and standardize item banks and computerized adaptive testing (CAT) based on IRT. It will also provide internationally replicable measures that will allow direct comparisons of mental health care systems.
Trial Registration: NCT02491866.

Keywords: patient-reported experience measures, patient experience, patient satisfaction, health services research, schizophrenia, bipolar disorder, major depression, systematic review

Background

Providing high-quality care is a priority for all health systems worldwide; however, a recent report highlights that the quality of mental health care remains lower than that of other medical disciplines.1,2 The current organization of care is not adequate to address mental disorders (eg, schizophrenia, bipolar disorder and major depression), which have emerged as a major health disparity category.2–6 Patients with mental disorders have a marked decrease in life expectancy (eg, approximately 14 years on average for patients with schizophrenia).7 They are confronted with persistent gaps in access to and receipt of mental health care.4 In particular, they face misdiagnosis, which can lead to inappropriate or delayed treatment and, consequently, poor health outcomes.8 The major challenges for mental health care include inadequate treatments and the underuse of guidelines,9–14 as well as health care variation among geographical regions,15 stigma and discrimination,16–18 and poor adherence to treatment by patients.19 Quality measurement is fundamental for improving the quality of mental health care and identifying where changes are needed, and it requires appropriate measurement methods.
It is currently established that patients’ experience is an important measure of health care quality,20–22 and the use of patient-reported experience measures (PREMs) is recommended.23 PREMs report information on patients’ views of their experience while receiving care.24 They are most commonly in the form of questionnaires.25 Respondents are asked to provide detailed reports on what actually occurred during a specific care episode, rather than an evaluation of what occurred,26 to determine the extent to which care is patient-centered.27,28 There is evidence of an association between a more positive patient experience and improved health care outcomes.28–31 Many PREMs in mental health have been developed in recent decades, but there is little guidance for selecting the most suitable instruments. To date, systematic reviews have focused on satisfaction instruments,32,33 which is a limited approach to patient experience, or on PREMs but in a non-exhaustive way.34

Given the growing number of PREMs and the need for using them in clinical settings, the objectives of this systematic review were to 1) identify all available PREMs designed to measure the mental health care experience of adult patients, 2) provide an overview of their content and psychometric properties, and 3) critically analyze the methodological quality of these instruments using a set of pre-established robust criteria.

Methods

Search Strategy

A comprehensive review of the published peer-reviewed literature was conducted using the MEDLINE bibliographic database, with no date restrictions. Our search was limited to articles written in English and to articles reporting on the development and/or validation process of mental health care quality assessment instruments. The reference lists of the selected articles were screened to find additional instruments that were not identified in the initial literature search. In addition, studies describing translations or revisions were retrieved to check references to the original instrument development. Articles that only addressed the use of an instrument were excluded. The authors also used online resources to inform this review. The search strategy was conceptualized as a combination of the context of use (ie, mental health or psychiatry), what is being measured (patient experience or satisfaction) and the study design (development and/or validation process of an instrument). The search key combined MeSH terms and free-text words using Boolean operators, as follows:

(“patient satisfaction” OR “consumer satisfaction” OR “client satisfaction” OR “patient experience” OR “patients experience” OR “patient experiences” OR “patients experiences” OR “patient reported experience” OR “patient reported experience measure” OR “PREM” OR “PREMs”) AND (“psychiatry” OR “psychiatry”[Mesh] OR “psych*” OR “mental” OR “Mental Health Services”[Mesh]) AND (“tool*” OR “instrument*” OR “score*” OR “scale*” OR “survey*” OR “questionnaire*” OR “measure*”) AND (“development” OR “validation” OR “psychometric” OR “psychometrics” OR “psychometrics”[Mesh]).
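For readers who wish to reproduce or adapt this query, a search string like the one above can be submitted to MEDLINE programmatically through NCBI's public E-utilities `esearch` endpoint. The sketch below (Python standard library only) assembles the request URL; the endpoint and parameter names follow the E-utilities interface, and the shortened query in the code is an illustrative fragment, not the review's full search key.

```python
from urllib.parse import urlencode

# Base URL of the NCBI E-utilities search endpoint (public interface).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_esearch_url(query: str, retmax: int = 1000) -> str:
    """Assemble an esearch request URL for a PubMed/MEDLINE query."""
    params = {
        "db": "pubmed",    # search MEDLINE via PubMed
        "term": query,     # the Boolean search key
        "retmax": retmax,  # maximum number of PMIDs to return
        "retmode": "json",
    }
    return ESEARCH + "?" + urlencode(params)

# Illustrative fragment of the review's search key (not the full string).
query = ('("patient experience" OR "patient satisfaction") '
         'AND ("psychiatry" OR "Mental Health Services"[Mesh]) '
         'AND ("development" OR "validation")')
url = build_esearch_url(query)
```

Fetching `url` (eg, with `urllib.request.urlopen`) would return the matching PMIDs; counts will differ from the 693 records reported below, which reflect the full search key and an August 2019 search date.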

This review was performed in accordance with the preferred reporting items for systematic reviews and meta-analysis (PRISMA) guidelines.35

Study Selection

Eligibility Criteria

Articles had to meet the following eligibility criteria to be included in this review.

The inclusion criteria were as follows: (i) articles dealing with the process of development and/or validation of any instrument intended to be used and/or applicable in the context of mental health care; (ii) adult participants, regardless of their care setting; (iii) instruments designed to capture the experience of patients/service users; and (iv) study written in English. This means that any study describing, at least in part, the operationalization of the construct, item development, pretesting or psychometric analyses was included.

The exclusion criteria were as follows: (i) instruments specifically designed for the elderly or for children and adolescents; (ii) changes or cultural adaptations of an already existing instrument; (iii) instruments not self-reported by patients; (iv) articles addressing an ad hoc instrument; (v) instruments developed for specific care (eg, home care, nursing care, residential care); (vi) review articles, editorials, discussions and opinion papers, and conference proceedings; and (vii) articles written in a language other than English.

Selection of Studies

The articles identified by the search key were carefully reviewed by two independent authors (SF and LB). These articles were first screened according to their titles and abstracts, and those that did not meet the eligibility criteria were eliminated. The full text was retrieved and reviewed when the decision could not be made on the basis of the title or abstract or when the assessment was discordant between the two examiners. In the latter case, when a consensus could not be reached, a third author (GF) was consulted to reach an agreement. The reference lists of articles eligible for inclusion in this review were also screened.

Data Extraction

Data were extracted separately by two independent authors (SF and LB). Excel was used to collect all the relevant information from the included articles using a predefined data extraction form. The following data were extracted for each instrument: general data (author(s) and year of publication, name and abbreviation of the instrument, country and language of origin, study objective(s), characteristics and size of the sample, administration method), structure (number of items, number and labels of dimensions/factors, time frame, response scale), development characteristics (viewpoints and sources for item development) and some psychometric properties (reliability and construct validity).

Content Analysis of the Instruments

The content of the instruments included in this review was analyzed using an exploratory qualitative approach. In the absence of a recognized and validated theoretical framework,36,37 we used an inductive approach,38 which consists of developing a conceptual framework from the raw data. This method makes it possible to move from specific data to more general categories of meaning without being driven by predetermined theoretical assumptions. To do this, all collected items were carefully examined and coded. Codes sharing a relationship of meaning were iteratively grouped into a limited number of categories with distinct and meaningful content. Each category was then reviewed and named according to the characteristic words it covered. This approach enabled us to examine the relative weight of each dimension while taking into account that some items could be classified into different categories; eg, “I received information about treatment options for my mental health problems”39 could fit in both the “information” and “medication” dimensions. This strategy allowed us to identify the dimensions most commonly covered by the range of instruments currently available in the mental health context.

Quality Assessment

The criteria used to assess the quality of the instruments are derived from the Quality Assessment Criteria framework developed by Pesudovs et al.40 Originally designed to perform a standardized assessment of the quality of the development process and the psychometric properties of patient-reported outcome measures (PROMs), Pesudovs’ criteria proved to be relevant for evaluating PREMs.41 These criteria are presented in Table 1. Each instrument was independently rated by two authors (SF and LB) as positive (✓✓), acceptable (✓) or negative (✗) against each criterion. When consensus could not be reached, a third author (GF) was consulted.

Table 1 Quality Criteria

Results

Study Selection

The literature search produced a total of 693 potentially relevant scientific articles (last access: August 6th, 2019), and 11 additional articles were identified by further sources, for a total of 704 articles. These articles were first sorted according to the relevance of their titles and abstracts, leading to the exclusion of 577 references that were not relevant. The full text of the remaining 127 articles was retrieved. Of these, 56 articles were excluded because they did not meet the inclusion criteria. Thus, following this first stage, 71 articles remained. The reference lists of these articles were then reviewed, and 15 additional articles were included. See Figure 1 for details on the literature selection process.

Figure 1 PRISMA flow chart. Note: Adapted from Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group (2009). Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol. 62(10):1006–1012.35

The search yielded a total of 86 articles examining 75 instruments16,39,42–124 (see Tables S1–S4 to view the characteristics of these instruments).

General Data

The instruments included in this review were published between 197975 and 2018.39 Most of these instruments were from the United States (n=23) and the United Kingdom (n=15), followed by Australia (n=6), Sweden (n=5), Canada (n=4), France (n=4), Germany (n=3), Norway (n=3), Italy (n=3), the Netherlands (n=2), Thailand (n=2), Iran (n=1), Ireland (n=1), Belgium (n=1) and Ethiopia (n=1). Furthermore, one instrument was used simultaneously in several countries, namely, the US, Japan and Italy.65

Sixty-one instruments were self-administered (81.3%), and 14 were designed to be administered during an interview (18.7%).43,44,49,52,60,65,68,82,89,93,94,97,108,112,113,117

Most of the scales specifically targeted mental health service users (89.3%), while 8 were generic and applicable to mental health care.43–46,57,58,71,75,111,114,115 Of the 75 included instruments, 24 were designed for inpatient and residential settings (32.0%),50,51,56,59,64,68,71,74,78–80,84,85,90,95,100,101,104–106,108,118–121,123 including two that were specific to the forensic setting;79,104 one instrument was developed in two versions, one for civil inpatients and the other for the forensic setting.108 Thirty-four instruments were designed for community-based services (45.3%).43,46,49,53,54,58,60,63,65,73,76,77,81,83,86,87,89,91,94,96,97,99,105,107,109,111,113,116,117

Seventeen instruments have proven useful for both inpatients and outpatients (22.7%).16,39,42,47,48,52,66,67,69,72,75,87,98,102,103,110,114,115,122,124,125 Among them, some instruments were only validated in specific populations: three in patients with schizophrenia,85,98,102,124 one in bipolar patients,88 one in depressed patients63 and one in bipolar or psychotic patients.67

The time frame for administering the instrument was reported for 29 instruments (38.7%): twenty were designed for completion before leaving the hospital, one of which was delivered 6 to 7 days after admission,64 one at the end of a group therapy session (normally after the first week of admission),119 one 1 week before discharge,85 seven on the day of discharge,50,51,74,90,106,118,120 1 day before discharge,56 and nine unspecified,53,59,67,78,84,95,121 two of which were designed to be completed near the patient’s discharge, generally within 24 to 72 hours68 or 24 to 48 hours80 before leaving. Two instruments were designed to be administered after discharge,71,125 including one within 1 month of discharge.71 Another instrument was designed to be administered both before and after discharge.66 Among the instruments designed for use in outpatient or community services, one instrument was designed to be completed 3 months after psychotropic drug change,88 two were designed to be completed before leaving the clinic,46,107 one was designed to be completed at the end of the initial visit122 and one was designed to be completed at home.63 Additionally, one instrument was administered at different times depending on the agencies.103

Instruments’ Structure

The number of dimensions varied from 1 (Patient Evaluation of Care-5 (PEC-5), Client Satisfaction Questionnaire (CSQ-8), Mental Health Service Satisfaction Scale (MHSSS), Satisfaction Index – Mental Health (SI-MH), Patient Satisfaction with Psychotropic (PASAP), Consumer Evaluation of Mental Health Services (CEO-MHS), Reassurance Questionnaire (RQ))50,75,82,87–89,111 to 11 (Survey of Health care Experiences of Patients (SHEP)).125 The number of dimensions was determined using statistical methods for 51 instruments. Among them, one instrument used a non-parametric Mokken analysis,69 while the others used exploratory or confirmatory factor analyses. Alternatively, 19 instruments established their dimensionality based on a conceptual framework drawn from the literature without using statistical methods to confirm their structure.16,47,48,51,52,59,66,71,78,86,94,95,99,100,102,109,117,118,122,124,125

The number of items ranged from 5 (PEC-5,50 Helping Alliance Scale (HAS)97) to 84 (Thai Psychiatric Satisfaction Scale (TPSS)).96 The mean and mode were 26.1 (SD=17.4) and 20, respectively. Twenty-six instruments (48.0%) presented a combination of positively and negatively worded items.42,43,46–48,53,54,59,60,67,71,73,81,82,86,87,89,93,97,99,102,108,110,116,124 Most items used a Likert-type response scale, though the response options varied between instruments: the majority had an odd number of response options (52.0%), among which 35 had a 5-point Likert scale, two had a 7-point Likert scale, and two had a 3-point Likert scale. Seventeen instruments had a balanced rating scale (22.7%), among which 15 had a 4-point Likert scale and two had a 6-point Likert scale. One instrument used a dichotomous format,65 and 17 had combined response modalities (22.7%),39,45,51,52,54,56,58,60,68,69,71,92,95,97,102,109,124,125 two of which used a visual analogue scale.69,97 One instrument did not provide information about the response scale used.74 In addition, some scales also offered open-ended questions to capture additional qualitative information.46–48,59,64,66,76,77,84,86,95,122

Generation Process

Evidence of patient involvement varied between instruments. Some instruments were developed from a single perspective, while others used a combined approach (literature review and/or patients’ and/or professionals’ perspectives). Patients may have been involved in all phases of instrument development to ensure both content and face validity. In other instances, patients may have only taken part in the refinement process to ensure face validity of the scale. In this case, patients may have been asked to evaluate the understanding, relevance, clarity, acceptability and usefulness of the instrument in a pretest phase prior to larger-scale administration. Patients may also have been included in the item development process (through interviews or focus groups), but the instrument was not pretested in a subsequent phase. Fifty-six instruments involved patients in some way (74.7%).16,39,42–48,50–55,57–64,66,68,71–74,76–84,86,88–90,93,94,96,98,99,102,105–107,109,110,112–115,117,118,120–123 The majority of the instruments were designed using a combined approach (54.7%),16,39,43–52,54,55,60–64,66–74,76–78,80,82–84,86,90,92–94,98,109,112,113,117,118,122 while 28 instruments were developed from a single perspective (37.3%): 16 were drawn from a literature review,56,75,85,87,88,96,101,104,108,114–116,119,123–125 10 were designed from the patients’ perspective42,57,59,79,81,89,99,105,107,121 and 2 from the professional/expert or other perspectives.53,110 Six instruments (8.0%) did not report any information on the development process.65,91,95,97,111,120

Psychometric Properties

Psychometric properties were assessed and reported with varying levels of evidence. These findings were not available for 9 of the 75 instruments (12%).51,52,65,78,86,95,97,116,118 Only four papers used statistical methods from item response theory (IRT),50,62,73,121 while the others used classical test theory (CTT). Reliability measured by internal consistency was documented for 61 instruments and was the most commonly used approach. Cronbach’s alpha coefficient was within the acceptable value range (0.70–0.90) for only 19 instruments.16,45,47,48,50,53,66,67,69,71,74,87,89,92,104,108,109,112,113,119 One instrument did not provide the values but indicated that all scales had reached the recommended value of 0.70.117 Of the 41 instruments with a Cronbach’s alpha outside this interval (54.7%), 15 had a total scale (or at least one domain) whose value failed to reach the recommended threshold of 0.70,43,44,46,54,56,60,63,64,68,72,79,93,100,101,107,111,125 and 31 had at least one alpha value exceeding 0.90, which can indicate item redundancy.42,46,49,54,57–63,73,75,79–85,90,94,96,98,102,103,106,107,114,115,120–122,126,127 Fourteen instruments did not assess this property (18.7%).39,51,52,65,76,78,86,91,95,97,99,110,116,118 Cronbach’s alpha for total scale scores or individual dimensions ranged from 0.35 (SHEP)125 to 0.96 (PCQ-H, TPSS, VSSS-EU, QPC-IP).73,96,102,106 In addition, stability over time was examined using test–retest estimates for 20 instruments (26.7%).16,47,48,59,61,62,64,67,76,77,81–83,87,96,99,102,110–115,117,124,127 The questionnaires were administered a second time within an interval ranging from 1 day to 2 weeks; this information was not available for five instruments (25%). Stability over time was acceptable for the majority of instruments (75%) and very good for 5 instruments (20%).
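To make the internal-consistency statistic discussed above concrete, the following minimal Python sketch computes Cronbach's alpha in its usual variance form; the respondent-by-item matrix is hypothetical PREM data, not taken from any of the reviewed instruments.

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(scores[0])  # number of items
    item_vars = [pvariance(col) for col in zip(*scores)]
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical responses of four patients to three 5-point Likert items.
scores = [
    [4, 3, 4],
    [5, 4, 5],
    [2, 2, 3],
    [3, 3, 3],
]
alpha = cronbach_alpha(scores)
# Values between 0.70 and 0.90 are conventionally considered acceptable;
# values above 0.90 may signal item redundancy.
```

Perfectly correlated items yield alpha = 1.0, which illustrates why very high values warrant a check for redundant items rather than being read as "better" reliability.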
Sixty-five instruments (86.7%) reported elements to support construct validity, but these data were often incomplete. Among these articles, 51 investigated the structure of the instruments (68.0%) by using either exploratory or confirmatory factorial analysis39,42–46,49,50,53,54,56–58,60–64,68,72–77,79–85,87–91,93,96,98,103,104,106–108,110–115,119–121,123 or a Mokken analysis,69 and 37 tested inter-item, item-dimension, dimension-dimension and item-total correlations (49.3%).16,42–46,49,50,53,54,60–63,66,67,71–75,80–82,85,88,93,96,100,101,103,104,106,107,114,115,117,119,122,123 Convergent (also miscalled concurrent in some cases) validity was assessed for 31 instruments,16,43,44,47–49,57–59,63,67,69,71,73,75–77,79,80,85,88,90,92–94,103,104,108,109,112–115,117,119,121,122 while only 6 reported some evidence of divergent validity.73,104,109,111,121,122 Among the latter, strong evidence was found for three instruments,73,104,109 while the others did not explore this property in relation to another established instrument.111,121,122 Moreover, one instrument provided conclusions that contradicted the theorized relationships.111 Some aspects of criterion-related validity were examined, and eight instruments reported elements of predictive validity (10.7%).45,72,74,82,90,109,111,119 Finally, a preliminary examination of the concept of responsiveness was only undertaken for three instruments (4.0%).87,88,111

Content of the Instruments

The inductive qualitative analysis of the 1932 items identified seven key domains that underlie the concept of quality of mental health care from the patient’s perspective. The most represented dimension was interpersonal relationships (22.6%), followed by respect and dignity (19.3%), access and care coordination (14.9%), drug therapy (14.1%), information (9.6%), psychological care (6.8%) and care environment (6.1%). Additionally, a few items focused on patient satisfaction (6.7%) rather than patient experience.

Discussion

This work provides for the first time a description and a critical analysis of all available PREMs for mental health care, regardless of care setting and conditions. The multitude of instruments identified in this review shows that they differ in scope, content and psychometric robustness. This wide range of instruments is an obstacle when choosing the most appropriate assessment instrument, which has important implications for the accuracy of quality-of-care measurement. Although it is recognized that the assessment of psychometric properties is essential to support the performance of an assessment instrument,128,129 some of them are not systematically evaluated. Some instruments demonstrated a satisfactory development process and psychometric properties, while others did not meet the recommended criteria. Thus, our work provides guidance to help professionals choose the PREMs that best suit their needs. Beyond this help in the choice of PREMs, our work leads us to frame our discussion around the distinction between two broad categories of measures of patient-centered care: patient experience and patient satisfaction. The instruments selected in our study combine these two related but distinct concepts.22,127,130,131 Patient satisfaction is commonly used by health care facilities as a measure of the quality of care from the patients’ perspective.132–135 However, patient satisfaction has been the subject of much controversy due to a tendency to obtain satisfaction rates with significant ceiling effects,136 thereby questioning the validity of the results.136–139 This tendency is partly related to the design of satisfaction surveys, which are based on respondents’ expectations and subjective perceptions.127,132 Hence, two patients who receive the same care but who have different expectations may not express the same degree of satisfaction.
On the other hand, a patient who expresses high satisfaction with this care may not be representative of an optimal care experience,22,136 and conversely, some patients may express dissatisfaction that may reflect inappropriate or clinically unfeasible expectations rather than suboptimal care.22,140 Patient experience is now recognized as the preferred approach for measuring the quality of care and services and has been increasingly adopted by many countries.141–144 This measure overcomes the bias of satisfaction surveys by reintroducing an objective component into the evaluation.145 To do this, the questions are based on a detailed report that covers all aspects of the patient’s experience to reflect their actual care experience. In this sense, they provide more accurate and relevant information for monitoring and improving health services and care.22 However, there is considerable misunderstanding about what these two concepts refer to, and researchers tend to use them interchangeably.130,133 Our findings illustrate the difficulty of distinguishing satisfaction and experience measures among available instruments.33,146 First, when the initial literature search was conducted without including satisfaction terms, a limited number of results were identified (n=103), of which only nine met the eligibility criteria.39,53,58,62,73,81,106,113,123 In the absence of an adequate MeSH thesaurus, most of the patient experience instruments are indexed with the keyword “patient satisfaction”.39,62,73,123 The inclusion of a “patient experience” thesaurus would support research and the use of PREMs in practice. Second, no distinction between PREMs and satisfaction measures was made because this classification may not be obvious.
Indeed, while experience measure refers to the objective experience of patients, by asking patients to provide a detailed report on specific aspects of care (eg, “I received information about treatment options for my mental health problems”),39 the satisfaction measure is a subjective assessment against patients’ expectations (eg, “Do you consider that your treatment has been adjusted to your situation?”).123 How questions are framed determines the degree of subjectivity of measures,147 and most instruments combine both types of questions.

The wide range of instruments identified by the review suggests the value of developing item banks and computerized adaptive testing (CAT) covering all aspects relevant to psychiatric patients to allow comparison across multiple conditions and settings of care at a national and international level.148,149 These modern methods make it possible to optimize measurement precision and flexibility compared to standard questionnaires where all respondents answer the same items, regardless of their characteristics. These item banks, from which the CAT selects the items to be administered, will cover all of the dimensions underlying the concept of quality of mental health care. First, the inductive qualitative approach identified seven key dimensions to measure mental health care patient-reported experience (also called latent trait). Some dimensions are common concerns for general patients, while others are more specific to psychiatric patients. In particular, interpersonal relationships are a major focus covered by the majority of instruments. Interpersonal relationships aim to establish a climate favorable for successful health care delivery, thereby contributing to improved patient satisfaction, treatment compliance and, consequently, health care outcomes.150,151 This dimension has been extended to all social relationships that can influence the subjective perception of the patient’s quality of care by integrating relationships with other patients152 as well as involvement of family and relatives in care.153 Furthermore, the development of patient-reported measures should involve patients to ensure that the instruments reflect what truly matters to them.42,121,154 In our review, most of the instruments were developed with patient involvement; however, this was not a primary concern, and only a handful of instruments used qualitative approaches (such as qualitative interviews or focus groups) to obtain patients’ perspectives. 
In addition, no instrument covers the latent trait continuum (ie, the continuum underlying the multidimensional concept of quality of care), which poses a problem for measuring patient experience with current instruments and suggests the relevance of creating an item bank. Second, the psychometric qualities of the included instruments were heterogeneous. Only four papers used statistical methods from IRT50,62,73,121 as a supplement to CTT. However, IRT was used only to assess unidimensionality or to help select optimal test items to shorten the instrument and enhance its clinical utility. Most of the instruments included in this review documented at least one psychometric property, and only 12% reported none. The main properties assessed were construct validity and reliability, the latter mainly quantified in terms of internal consistency. Construct validity, when addressed, was often incomplete and relied primarily on exploratory or confirmatory factor analyses of the underlying structure; however, factor analysis alone is not enough to support construct validity, and the psychometric robustness of an instrument must be based on a thorough assessment of all psychometric properties. Other instruments used item-item, item-dimension, dimension-dimension and/or item-total correlations. Convergent validity (also miscalled concurrent validity in some cases) received special attention in slightly less than half of the instruments, unlike divergent validity. Sample sizes were variable, which may raise questions about the relevance of some validity estimates that require large samples. In addition, precautions should be taken regarding generalization when an instrument has been tested in a sample with particular characteristics. Reliability was assessed in two ways.
The majority of instruments reported good internal consistency, but excessively high values (>0.90) may suggest item redundancy.126 Test–retest reliability was not a major objective, as only 20 instruments reported this property. Finally, only three instruments addressed responsiveness.87,88,111 However, this concept is particularly important for practice and research because it makes it possible to detect a change in a patient’s state of health.65 Taken together, these elements indicate that a large number of instruments have been psychometrically validated with varying levels of evidence. Our work may thus be considered a first step in the creation of an item library to comprehensively and validly measure mental health care patient-reported experience, which will be used in France to develop, validate, and standardize item banks and CATs based on IRT.23 It will also provide internationally replicable measures that will allow direct comparisons of mental health care systems. The main advantage of item banks and CATs is that they allow tailored individual assessment without loss of measurement precision or content validity.149
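The internal consistency statistic discussed above, Cronbach's alpha, can be illustrated with a minimal sketch on toy data (the responses and scale are invented for illustration). Alpha compares the sum of the item variances with the variance of the total score; values above roughly 0.90 may reflect redundant items rather than a better scale.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                       # number of items
    item_vars = scores.var(axis=0, ddof=1)    # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative responses: 5 respondents, 4 items rated on a 1-5 scale.
responses = [
    [4, 4, 3, 4],
    [2, 2, 2, 3],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 5, 4, 4],
]
print(round(cronbach_alpha(responses), 2))  # prints 0.95
```

Here the items track each other so closely that alpha exceeds 0.90, the threshold at which redundancy should be suspected.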

Strengths and Limitations of This Review

First, we used a standardized methodology and robust quality criteria to evaluate the performance of currently available mental health assessment instruments. The Pesudovs framework was used because its simplified scoring system allows a rigorous evaluation with more flexibility40 than other methods, such as the COSMIN checklist, which is based on the “worst score” principle. In addition, an adapted version of the Pesudovs framework for the evaluation of PREMs has been developed and used several times in other recent systematic reviews.41 To our knowledge, this is the first review to identify and evaluate instruments designed to measure the quality of mental health care from the patients’ perspective across a range of conditions and in multiple care settings. However, the completeness of the review may be questioned. We searched a single database because of limited access to other bibliographic databases; nevertheless, MEDLINE may be considered the reference database in the health field. Second, we limited our searches to the English language. This restriction was applied to obtain a homogeneous pool of items and to limit the costs associated with translation. However, we argue that our search is comprehensive because it was conducted without date limitations and identified instruments from 16 countries. In addition, the reference lists of the articles included in the review were carefully examined, allowing additional relevant references to be retrieved. Third, the search string used may be questioned. Patient experience is a relatively recent term for which there is no commonly accepted definition and no corresponding MeSH term. When the search terms were limited to “patient experience” and its derivatives, the number of results was small. We therefore included terms related to patient satisfaction to broaden the scope of the results.
Furthermore, the concept of quality of care is multidimensional, and the use of the indexed MeSH term (ie, “quality of health care”) did not identify as many instruments as a more general search strategy. Nevertheless, the large number of instruments identified by the review supports the comprehensiveness of this work. Fourth, the assessment of the quality of the development process and psychometric properties depends on the quality and accuracy of the publications. Some instruments may not have been properly evaluated because of insufficient reporting or the inability to access some documents. Finally, the content analysis of the instruments was based on a 7-dimensional categorization derived from the inductive qualitative analysis. Despite the rigorous methodology used, this categorization may be questioned. Nevertheless, these results are consistent with the dimensions commonly found in the literature.

Conclusion

This work provides a description and a critical analysis of the available PREMs for mental health care, which can help professionals choose the PREMs that best suit their needs. It is a critical step in the creation of an item library to measure mental health care patient-reported experience that could be used in France and Europe to develop, validate, and standardize item banks and CATs using innovative technologies based on IRT.

Data Sharing Statement

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Disclosure

Bruno Aouizerate reports personal fees from Janssen-Cilag, Lilly, Sanofi, and Lundbeck, outside the submitted work. Pierre-Michel Llorca reports personal fees from Lundbeck, Otsuka, Sanofi, Recordati, and Gedeon Richter, and personal fees and non-financial support from Janssen, outside the submitted work. The authors report no other possible conflicts of interest in this work.

References

1. Patel V, Saxena S, Lund C, et al. The Lancet Commission on global mental health and sustainable development. Lancet Lond Engl. 2018;392(10157):1553‑98.

2. Institute of Medicine. Improving the Quality of Health Care for Mental and Substance-Use Conditions. Washington, DC: National Academies Press; 2006.

3. Lake J, Turner MS. Urgent need for improved mental health care and a more collaborative model of care. Perm J. 2017;21.

4. Kilbourne AM, Beck K, Spaeth-Rublee B, et al. Measuring and improving the quality of mental health care: a global perspective. World Psychiatry. 2018;17(1):30‑8. doi:10.1002/wps.20482

5. Fond G, Salas S, Pauly V, et al. End-of-life care among patients with schizophrenia and cancer: a population-based cohort study from the French national hospital database. Lancet Public Health. 2019;4(11):e583‑91. doi:10.1016/S2468-2667(19)30187-2

6. Fond G, Baumstarck K, Auquier P, et al. Recurrent major depressive disorder’s impact on end-of-life care of cancer: a nationwide study. J Affect Disord. 2020;263:326‑35. doi:10.1016/j.jad.2019.12.003

7. Piotrowski P, Gondek TM, Królicka-Deręgowska A, Misiak B, Adamowski T, Kiejna A. Causes of mortality in schizophrenia: an updated review of European studies. Psychiatr Danub. 2017;29(2):108‑20.

8. Viron MJ, Stern TA. The impact of serious mental illness on health and healthcare. Psychosomatics. 2010;51(6):458‑65. doi:10.1016/S0033-3182(10)70737-4

9. Kessler RC, Berglund PA, Bruce ML, et al. The prevalence and correlates of untreated serious mental illness. Health Serv Res. 2001;36(6 Pt 1):987‑1007.

10. Packness A, Halling A, Simonsen E, Waldorff FB, Hastrup LH. Are perceived barriers to accessing mental healthcare associated with socioeconomic position among individuals with symptoms of depression? Questionnaire-results from the Lolland-Falster Health Study, a rural Danish population study. BMJ Open. 2019;9(3):e023844. doi:10.1136/bmjopen-2018-023844

11. Wang PS, Demler O, Kessler RC. Adequacy of treatment for serious mental illness in the United States. Am J Public Health. 2002;92(1):92‑8. doi:10.2105/AJPH.92.1.92

12. Drapalski AL, Milford J, Goldberg RW, Brown CH, Dixon LB. Perceived barriers to medical care and mental health care among veterans with serious mental illness. Psychiatr Serv Wash DC. 2008;59(8):921‑4.

13. Kohn R, Saxena S, Levav I, Saraceno B. The treatment gap in mental health care. Bull World Health Organ. 2004;82(11):858‑66.

14. Tran LD, Ponce NA. Who gets needed mental health care? Use of mental health services among adults with mental health need in California. Californian J Health Promot. 2017;15(1):36‑45.

15. Coldefy M, Le Neindre C Les disparités territoriales d’offre et d’organisation des soins en psychiatrie en France: d’une vision segmentée à une approche systémique. Report n°558. Paris: Institut de Recherche et Documentation en Economie de la Santé (IRDES); 2014.

16. Clement S, Brohan E, Jeffery D, Henderson C, Hatch SL, Thornicroft G. Development and psychometric properties of the Barriers to Access to Care Evaluation scale (BACE) related to people with mental ill health. BMC Psychiatry. 2012;12:36. doi:10.1186/1471-244X-12-36

17. Knaak S, Mantler E, Szeto A. Mental illness-related stigma in healthcare: barriers to access and care and evidence-based solutions. Healthc Manage Forum. 2017;30(2):111‑6. doi:10.1177/0840470416679413

18. Thornicroft G. Stigma and discrimination limit access to mental health care. Epidemiol Psichiatr Soc. 2008;17(1):14‑9. doi:10.1017/S1121189X00002621

19. Fond G, Boyer L, Boucekine M, et al. Validation study of the medication adherence rating scale. Results from the FACE-SZ national dataset. Schizophr Res. 2017;182:84‑9. doi:10.1016/j.schres.2016.10.023

20. Wang DE, Tsugawa Y, Figueroa JF, Jha AK. Association between the centers for medicare and medicaid services hospital star rating and patient outcomes. JAMA Intern Med. 2016;176(6):848‑50. doi:10.1001/jamainternmed.2016.0784

21. Trzeciak S, Gaughan JP, Bosire J, Mazzarelli AJ. Association between medicare summary star ratings for patient experience and clinical outcomes in US hospitals. J Patient Exp. 2016;3(1):6‑9. doi:10.1177/2374373516667002

22. Jenkinson C, Coulter A, Bruster S, Richards N, Chandola T. Patients’ experiences and satisfaction with health care: results of a questionnaire study of specific aspects of care. Qual Saf Health Care. 2002;11(4):335‑9. doi:10.1136/qhc.11.4.335

23. Fernandes S, Fond G, Zendjidjian X, et al. The patient-reported experience measure for improving qUality of care in Mental health (PREMIUM) project in France: study protocol for the development and implementation strategy. Patient Prefer Adherence. 2019;13:165‑77. doi:10.2147/PPA.S172100

24. Kingsley C, Patel S. Patient-reported outcome measures and patient-reported experience measures. BJA Educ. 2017;17(4):137‑44. doi:10.1093/bjaed/mkw060

25. Gleeson H, Calderon A, Swami V, Deighton J, Wolpert M, Edbrooke-Childs J. Systematic review of approaches to using patient experience data for quality improvement in healthcare settings. BMJ Open. 2016;6(8):e011907. doi:10.1136/bmjopen-2016-011907

26. Coulter A, Fitzpatrick R, Cornwell J. The Point of Care Measures of Patients’ Experience in Hospital: Purpose, Methods and Uses. London, UK: The King’s Fund; 2009.

27. Christalle E, Zeh S, Hahlweg P, Kriston L, Härter M, Scholl I. Assessment of patient centredness through patient-reported experience measures (ASPIRED): protocol of a mixed-methods study. BMJ Open. 2018;8(10):e025896. doi:10.1136/bmjopen-2018-025896

28. Anhang Price R, Elliott MN, Zaslavsky AM, et al. Examining the role of patient experience surveys in measuring health care quality. Med Care Res Rev MCRR. 2014;71(5):522‑54. doi:10.1177/1077558714541480

29. Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3(1):e001570. doi:10.1136/bmjopen-2012-001570

30. Manary MP, Boulding W, Staelin R, Glickman SW. The patient experience and health outcomes. N Engl J Med. 2013;368(3):201‑3. doi:10.1056/NEJMp1211775

31. Loh A, Leonhart R, Wills CE, Simon D, Härter M. The impact of patient participation on adherence and clinical outcome in primary care of depression. Patient Educ Couns. 2007;65(1):69‑78. doi:10.1016/j.pec.2006.05.007

32. Boyer L, Baumstarck-Barrau K, Cano N, et al. Assessment of psychiatric inpatient satisfaction: a systematic review of self-reported instruments. Eur Psychiatry J Assoc Eur Psychiatr. 2009;24(8):540‑9.

33. Miglietta E, Belessiotis-Richards C, Ruggeri M, Priebe S. Scales for assessing patient satisfaction with mental health care: a systematic review. J Psychiatr Res. 2018;100:33‑46. doi:10.1016/j.jpsychires.2018.02.014

34. Sanchez-Balcells S, Callarisa Roca M, Rodriguez-Zunino N, Puig-Llobet M, Lluch-Canut M-T, Roldan-Merino JF. Psychometric properties of instruments measuring quality and satisfaction in mental health: a systematic review. J Adv Nurs. 2018;74(11):2497‑510. doi:10.1111/jan.13813

35. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol. 2009;62(10):1006–1012. doi:10.1016/j.jclinepi.2009.06.005

36. Gerteis M, Edgman-Levitan S, Daley J, Delbanco TL. Through the Patient’s Eyes: Understanding and Promoting Patient-Centered Care. 1st ed. San Francisco, CA: Jossey-Bass; 1993.

37. Valentine NB, de Silva A, Kawabata K, Darby C, Murray CJL, Evans DB. Health system responsiveness: concepts, domains and operationalization. In: Murray CJL, Evans DB, editors. Health Systems Performance Assessment: Debates, Methods, and Empiricism. Geneva, Switzerland: World Health Organization; 2003:573–596.

38. Elo S, Kyngäs H. The qualitative content analysis process. J Adv Nurs. 2008;62(1):107‑15. doi:10.1111/j.1365-2648.2007.04569.x

39. Bruyneel L, Van Houdt S, Coeckelberghs E, et al. Patient experiences with care across various types of mental health care: questionnaire development, measurement invariance, and patients’ reports. Int J Methods Psychiatr Res. 2018;27:1. doi:10.1002/mpr.1592

40. Pesudovs K, Burr JM, Harley C, Elliott DB. The development, assessment, and selection of questionnaires. Optom Vis Sci. 2007;84(8):663‑74. doi:10.1097/OPX.0b013e318141fe75

41. Male L, Noble A, Atkinson J, Marson T. Measuring patient experience: a systematic review to evaluate psychometric properties of patient reported experience measures (PREMs) for emergency care service provision. Int J Qual Health Care. 2017;29(3):314‑26. doi:10.1093/intqhc/mzx027

42. Oades LG, Law J, Marshall SL. Development of a consumer constructed scale to evaluate mental health service provision: consumer constructed scale. J Eval Clin Pract. 2011;17(6):1102‑7. doi:10.1111/j.1365-2753.2010.01474.x

43. Aloba O, Mapayi B, Akinsulore S, Ukpong D, Fatoye O. Trust in physician scale: factor structure, reliability, validity and correlates of trust in a sample of Nigerian psychiatric outpatients. Asian J Psychiatry. 2014;11:20‑7. doi:10.1016/j.ajp.2014.05.005

44. Anderson LA, Dedrick RF. Development of the trust in physician scale: a measure to assess interpersonal trust in patient-physician relationships. Psychol Rep. 1990;67(3 Pt 2):1091‑100.

45. Atkinson M, Sinha A, Hass S, et al. Validation of a general measure of treatment satisfaction, the treatment satisfaction questionnaire for medication (TSQM), using a national panel study of chronic disease. Health Qual Life Outcomes. 2004;2:12. doi:10.1186/1477-7525-2-12

46. Baker R. Development of a questionnaire to assess patients’ satisfaction with consultations in general practice. Br J Gen Pract J R Coll Gen Pract. 1990;40(341):487‑90.

47. Barker D, Shergill S, Higginson I, Orrell M. Patient’s views towards care received from psychiatrists. Br J Psychiatry J Ment Sci. 1996;168(5):641‑6. doi:10.1192/bjp.168.5.641

48. Barker DA, Orrell MW. The psychiatric care satisfaction questionnaire: a reliability and validity study. Soc Psychiatry Psychiatr Epidemiol. 1999;34(2):111‑6. doi:10.1007/s001270050120

49. Berghofer G, Castille DM, Link B. Evaluation of client services (ECS): a measure of treatment satisfaction for people with chronic mental illnesses. Community Ment Health J. 2011;47(4):399‑407. doi:10.1007/s10597-010-9331-3

50. Blais M, Matthews J, Lipkis-Orlando R, et al. Development and application of a brief multi-faceted tool for evaluating inpatient psychiatric care. Adm Policy Ment Health. 2002;30(2):159‑72. doi:10.1023/A:1022537218940

51. Brunero S, Lamont S, Fairbrother G. Using and understanding consumer satisfaction to effect an improvement in mental health service delivery. J Psychiatr Ment Health Nurs. 2009;16(3):272‑8. doi:10.1111/j.1365-2850.2008.01371.x

52. Bramesfeld A, Klippel U, Seidel G, Schwartz FW, Dierks ML. How do patients expect the mental health service system to act? Testing the WHO responsiveness concept for its appropriateness in mental health care. Soc Sci Med. 2007;65(5):880‑9. doi:10.1016/j.socscimed.2007.03.056

53. Caruso R, Grassi L, Biancosino B, et al. Exploration of experiences in therapeutic groups for patients with severe mental illness: development of the Ferrara group experiences scale (FE- GES). BMC Psychiatry. 2013;13:242. doi:10.1186/1471-244X-13-242

54. Eisen S, Shaul J, Leff H, Stringfellow V, Clarridge B, Cleary P. Toward a national consumer survey: evaluation of the CABHS and MHSIP instruments. J Behav Health Serv Res. 2001;28(3):347‑69. doi:10.1007/BF02287249

55. Eisen SV, Shaul JA, Clarridge B, Nelson D, Spink J, Cleary PD. Development of a consumer survey for behavioral health services. Psychiatr Serv. 1999;50(6):793‑8. doi:10.1176/ps.50.6.793

56. Eisen SV, Wilcox M, Idiculla T, Speredelozzi A, Dickey B. Assessing consumer perceptions of inpatient psychiatric treatment: the perceptions of care survey. JT Comm J Qual Improv. 2002;28(9):510‑26. doi:10.1016/s1070-3241(02)28021-9

57. Eton D, Ridgeway J, Egginton J, et al. Finalizing a measurement framework for the burden of treatment in complex patients with chronic conditions. Patient Relat Outcome Meas. 2015:117. doi:10.2147/PROM.S78955

58. Eton DT, Yost KJ, Lai J-S, et al. Development and validation of the Patient Experience with Treatment and Self-management (PETS): a patient-reported measure of treatment burden. Qual Life Res. 2017;26(2):489‑503. doi:10.1007/s11136-016-1397-0

59. Evans J, Rose D, Flach C, et al. VOICE: developing a new measure of service users’ perceptions of inpatient care, using a participatory methodology. J Ment Health. 2012;21(1):57‑71. doi:10.3109/09638237.2011.629240

60. Forouzan AS, Rafiey H, Padyab M, Ghazinour M, Dejman M, Sebastian MS. Reliability and validity of a mental health system responsiveness questionnaire in Iran. Glob Health Action. 2014;7(1):24748. doi:10.3402/gha.v7.24748

61. Garratt AM, Bjørngaard JH, Dahle KA, Bjertnæs ØA, Saunes IS, Ruud T. The Psychiatric Out-Patient Experiences Questionnaire (POPEQ): data quality, reliability and validity in patients attending 90 Norwegian clinics. Nord J Psychiatry. 2006;60(2):89‑96. doi:10.1080/08039480600583464

62. Olsen RV, Garratt AM, Iversen HH, Bjertnaes OA. Rasch analysis of the Psychiatric Out-Patient Experiences Questionnaire (POPEQ). BMC Health Serv Res. 2010;10:282. doi:10.1186/1472-6963-10-282

63. Gensichen J, Serras A, Paulitsch MA, et al. The Patient Assessment of Chronic Illness Care questionnaire: evaluation in patients with mental disorders in primary care. Community Ment Health J. 2011;47(4):447‑53. doi:10.1007/s10597-010-9340-2

64. Gigantesco A. Quality of psychiatric care: validation of an instrument for measuring inpatient opinion. Int J Qual Health Care. 2003;15(1):73‑8. doi:10.1093/intqhc/15.1.73

65. Glick I, Burti L, Suzuki K, Sacks M. Effectiveness in psychiatric care. I. A cross-national study of the process of treatment and outcomes of major depressive disorder. J Nerv Ment Dis. 1991;179(2):55‑63. doi:10.1097/00005053-199102000-00001

66. Hansson L, Höglund E. Patient satisfaction with psychiatric services: the development, reliability, and validity of two patient-satisfaction questionnaires for use in inpatient and outpatient setting. Nord J Psychiatry. 1995;49(4):257‑62. doi:10.3109/08039489509011915

67. Hester L, O’ Doherty LJ, Schnittger R, et al. SEQUenCE: a service user-centred quality of care instrument for mental health services. Int J Qual Health Care. 2015;27(4):284‑90. doi:10.1093/intqhc/mzv043

68. Howard PB, Clark JJ, Rayens MK, Hines-Martin V, Weaver P, Littrell R. Consumer satisfaction with services in a regional psychiatric hospital: a collaborative research project in Kentucky. Arch Psychiatr Nurs. 2001;15(1):10‑23. doi:10.1053/apnu.2001.20577

69. Ivarsson B, Malm U. Self-reported consumer satisfaction in mental health services: validation of a self-rating version of the UKU-consumer satisfaction rating scale. Nord J Psychiatry. 2007;61(3):194‑200. doi:10.1080/08039480701352488

70. Ahlfors UG, Lewander T, Lindström E, Malt UF, Lublin H, Malm U. Assessment of patient satisfaction with psychiatric care. Development and clinical evaluation of a brief consumer satisfaction rating scale (UKU-ConSat). Nord J Psychiatry. 2001;55 Suppl 44:71‑90.

71. Jenkinson C, Coulter A, Bruster S. The Picker Patient Experience questionnaire: development and validation using data from in-patient surveys in five countries. Int J Qual Health Care. 2002;14(5):353‑8. doi:10.1093/intqhc/14.5.353

72. Joyce AS, Adair CE, Wild TC, et al. Continuity of care: validation of a self-report measure to assess client perceptions of mental health Service Delivery. Community Ment Health J. 2010;46(2):192‑208. doi:10.1007/s10597-009-9215-6

73. Kertesz SG, Pollio DE, Jones RN, et al. Development of the Primary Care Quality-Homeless (PCQ-H) instrument: a practical survey of homeless patients’ experiences in primary care. Med Care. 2014;52(8):734‑42. doi:10.1097/MLR.0000000000000160

74. Kolb SJ, Race KE, Seibert JH. Psychometric evaluation of an inpatient psychiatric care consumer satisfaction survey. J Behav Health Serv Res. 2000;27(1):75‑86. doi:10.1007/BF02287805

75. Larsen DL, Attkisson CC, Hargreaves WA, Nguyen TD. Assessment of client/patient satisfaction: development of a general scale. Eval Program Plann. 1979;2(3):197‑207. doi:10.1016/0149-7189(79)90094-6

76. Lelliott P, Beevor A, Hogman G, Hyslop J, Lathlean J, Ward M. Carers’ and users’ expectations of services – user version (CUES–U): a new instrument to measure the experience of users of mental health services. Br J Psychiatry. 2001;179:67‑72. doi:10.1192/bjp.179.1.67

77. Blenkiron P. What determines patients’ satisfaction with their mental health care and quality of life? Postgrad Med J. 2003;79(932):337‑40. doi:10.1136/pmj.79.932.337

78. Lloyd-Evans B, Slade M, Osborn DP, Skinner R, Johnson S. Developing and comparing methods for measuring the content of care in mental health services. Soc Psychiatry Psychiatr Epidemiol. 2011;46(3):219‑29. doi:10.1007/s00127-010-0192-4

79. MacInnes D, Beer D, Keeble P, Rees D, Reid L. The development of a tool to measure service user satisfaction with in-patient forensic services: the Forensic Satisfaction Scale. J Ment Health. 2010;19(3):272‑81. doi:10.3109/09638231003728133

80. Madan A, Fowler JC, Allen JG, et al. Assessing and addressing patient satisfaction in a longer-term inpatient psychiatric hospital: preliminary findings on the Menninger Quality of Care measure and methodology. Qual Manag Health Care. 2014;23(3):178‑87. doi:10.1097/QMH.0000000000000034

81. Mavaddat N, Lester HE, Tait L. Development of a patient experience questionnaire for primary care mental health. Qual Saf Health Care. 2009;18(2):147‑52. doi:10.1136/qshc.2007.023143

82. Mayston R, Habtamu K, Medhin G, et al. Developing a measure of mental health service satisfaction for use in low income countries: a mixed methods study. BMC Health Serv Res. 2017;17(1):183. doi:10.1186/s12913-017-2126-2

83. McGuire-Snieckus R, McCabe R, Catty J, Hansson L, Priebe S. A new scale to assess the therapeutic relationship in community mental health care: STAR. Psychol Med. 2007;37(1):85. doi:10.1017/S0033291706009299

84. Meehan T, Bergen H, Stedman T. Monitoring consumer satisfaction with inpatient service delivery: the inpatient evaluation of service questionnaire. Aust N Z J Psychiatry. 2002;36(6):807‑11. doi:10.1046/j.1440-1614.2002.01094.x

85. Misdrahi D, Verdoux H, Lançon C, Bayle F. The 4-point ordinal alliance self-report: a self-report questionnaire for assessing therapeutic relationships in routine mental health. Compr Psychiatry. 2009;50(2):181‑5. doi:10.1016/j.comppsych.2008.06.010

86. Moutoussis M, Gilmour F, Barker D, Orrel MW. Quality of care in a psychiatric out-patient department. J Ment Health. 2000;9(4):409‑20. doi:10.1080/713680257

87. Nabati L, Shea N, McBride L, Gavin C, Bauer MS. Adaptation of a simple patient satisfaction instrument to mental health: psychometric properties. Psychiatry Res. 1998;77(1):51‑6. doi:10.1016/S0165-1781(97)00122-4

88. Nordon C, Falissard B, Gerard S, et al. Patient satisfaction with psychotropic drugs: validation of the PAtient SAtisfaction with Psychotropic (PASAP) scale in patients with bipolar disorder. Eur Psychiatry. 2014;29(3):183‑90. doi:10.1016/j.eurpsy.2013.03.001

89. Rose G, Beale I, Malone J, Kinkead S, Higgin J. Validity of a scale for consumer evaluation of mental health service delivery. Int J Pers Centered Med. 2011;1(4):733‑40.

90. Ortiz G, Schacht L. Psychometric evaluation of an inpatient consumer survey measuring satisfaction with psychiatric care. The Patient. 2012;5(3):163‑73.

91. Parker G, Wright M, Robertson S, Gladstone G. The development of a patient satisfaction measure for psychiatric outpatients. Aust N Z J Psychiatry. 1996;30(3):343‑9. doi:10.3109/00048679609064997

92. Pellegrin KL, Stuart GW, Maree B, Frueh BC, Ballenger JC. A brief scale for assessing patients’ satisfaction with Care in outpatient psychiatric services. Psychiatr Serv. 2001;52(6):816‑9. doi:10.1176/appi.ps.52.6.816

93. Perreault M, Katerelos T, Sabourin S, Leichner P, Desmarais J. Information as a distinct dimension for satisfaction assessment of outpatient psychiatric services. Int J Health Care Qual Assur Inc Leadersh Health Serv. 2001;14(2‑3):111‑20.

94. Perreault M, Katerelos TE, Tardif H, Pawliuk N. Patients’ perspectives on information received in outpatient psychiatry. J Psychiatr Ment Health Nurs. 2006;13(1):110‑6. doi:10.1111/j.1365-2850.2006.00928.x

95. Peytremann-Bridevaux I, Scherer F, Peer L, et al. Satisfaction of patients hospitalised in psychiatric hospitals: a randomised comparison of two psychiatric-specific and one generic satisfaction questionnaires. BMC Health Serv Res. 2006;6:108. doi:10.1186/1472-6963-6-108

96. Phattharayuttawat S, Ngamthipwatthana T. The development of the Thai Psychiatric Satisfaction Scale. J Med Assoc Thail Chotmaihet Thangphaet. 2005;88(8):1067‑76.

97. Priebe S, Gruyters T. The role of the helping alliance in psychiatric community care. A prospective study. J Nerv Ment Dis. 1993;181(9):552‑7. doi:10.1097/00005053-199309000-00004

98. Rofail D, Gray R, Gournay K. The development and internal consistency of the satisfaction with antipsychotic medication scale. Psychol Med. 2005;35(7):1063‑72. doi:10.1017/S0033291705004526

99. Rose D, Sweeney A, Leese M, et al. Developing a user-generated measure of continuity of care: brief report. Acta Psychiatr Scand. 2009;119(4):320‑4. doi:10.1111/j.1600-0447.2008.01296.x

100. Rossberg JI, Friis S. Do the spontaneity and anger and aggression subscales of the ward atmosphere scale form homogeneous dimensions? A cross-sectional study of 54 wards for psychotic patients. Acta Psychiatr Scand. 2003;107(2):118‑23. doi:10.1034/j.1600-0447.2003.02082.x

101. Rossberg JI, Friis S. A suggested revision of the Ward Atmosphere Scale. Acta Psychiatr Scand. 2003;108(5):374‑80. doi:10.1034/j.1600-0447.2003.00191.x

102. Ruggeri M, Lasalvia A, Dall’Agnola R, et al. Development, internal consistency and reliability of the Verona Service Satisfaction Scale–European version. EPSILON Study 7. European psychiatric services: inputs linked to outcome domains and needs. Br J Psychiatry Suppl. 2000;177(39):S41‑48. doi:10.1192/bjp.177.39.s41

103. Rush B, Hansson E, Cvetanova Y, Rotondi N, Furlong A, Behrooz R. Development of a Client Perception of Care Tool for Mental Health and Addictions: Qualitative, Quantitative, and Psychometric Analysis. Final Report for the Ministry of Health and Long‐Term Care. Toronto, Canada: Health Systems and Health Equity Research, Centre for Addiction and Mental Health; 2013.

104. Schalast N, Redies M, Collins M, Stacey J, Howells K. EssenCES, a short questionnaire for assessing the social climate of forensic psychiatric wards. Crim Behav Ment Health. 2008;18(1):49‑58. doi:10.1002/cbm.677

105. Schröder A, Wilde Larsson B, Ahlström G. Quality in psychiatric care: an instrument evaluating patients’ expectations and experiences. Int J Health Care Qual Assur. 2007;20(2):141‑60. doi:10.1108/09526860710731834

106. Schröder A, Wilde Larsson B, Ahlström G, Lundqvist L. Psychometric properties of the instrument quality in psychiatric care and descriptions of quality of care among in‐patients. Int J Health Care Qual Assur. 2010;23(6):554‑70. doi:10.1108/09526861011060924

107. Schröder A, Ahlström G, Wilde-Larsson B, Lundqvist L-O. Psychometric properties of the Quality in Psychiatric Care - Outpatient (QPC-OP) instrument. Int J Ment Health Nurs. 2011;20(6):445‑53. doi:10.1111/j.1447-0349.2011.00741.x

108. Shiva A, Haden S, Brooks J. Forensic and civil psychiatric inpatients: development of the inpatient satisfaction questionnaire. J Am Acad Psychiatry Law. 2009;37(2):201‑13.

109. Slade M, Jordan H, Clarke E, et al. The development and evaluation of a five-language multi-perspective standardised measure: clinical decision-making involvement and satisfaction (CDIS). BMC Health Serv Res. 2014;14:323. doi:10.1186/1472-6963-14-323

110. Slater V, Linn M, Harris R. A satisfaction with mental health care scale. Compr Psychiatry. 1982;23(1):68‑74. doi:10.1016/0010-440X(82)90010-4

111. Speckens A, Spinhoven P, Van Hemert A, Bolk J. The Reassurance Questionnaire (RQ): psychometric properties of a self-report questionnaire to assess reassurability. Psychol Med. 2000;30(4):841‑7. doi:10.1017/S0033291799002378

112. Svedberg P, Svensson B, Arvidsson B, Hansson L. The construct validity of a self-report questionnaire focusing on health promotion interventions in mental health services. J Psychiatr Ment Health Nurs. 2007;14(6):566‑72. doi:10.1111/j.1365-2850.2007.01129.x

113. Svedberg P, Arvidsson B, Svensson B, Hansson L. Psychometric characteristics of a self-report questionnaire (HPIQ) focusing on health promotion interventions in mental health services. Int J Ment Health Nurs. 2008;17(3):171‑9. doi:10.1111/j.1447-0349.2008.00527.x

114. Uijen AA, Schellevis FG, van den Bosch WJHM, Mokkink HGA, van Weel C, Schers HJ. Nijmegen Continuity Questionnaire: development and testing of a questionnaire that measures continuity of care. J Clin Epidemiol. 2011;64(12):1391‑9. doi:10.1016/j.jclinepi.2011.03.006

115. Uijen AA, Schers HJ, Schellevis FG, Mokkink HG, van Weel C, van den Bosch WJ. Measuring continuity of care: psychometric properties of the Nijmegen Continuity Questionnaire. Br J Gen Pract. 2012;62(600):e949‑57. doi:10.3399/bjgp12X652364

116. Ul-haq I. Patients’ satisfaction with a psychiatric day hospital in the West Galway catchments area. Ir J Psychol Med. 2012;29(2):85‑90. doi:10.1017/S0790966700017353

117. Ware N, Dickey B, Tugenberg T, McHorney C. CONNECT: a measure of continuity of care in mental health services. Ment Health Serv Res. 2003;5(4):209‑21. doi:10.1023/A:1026276918081

118. Webster S. Patients’ satisfaction with mental health nursing interventions in the management of anxiety: results of a questionnaire study. J Nurs Educ Pract. 2012;2(2):52‑62.

119. Wongpakaran T, Wongpakaran N, Intachote-Sakamoto R, Boripuntakul T. The Group Cohesiveness Scale (GCS) for psychiatric inpatients. Perspect Psychiatr Care. 2013;49(1):58‑64. doi:10.1111/j.1744-6163.2012.00342.x

120. Woodring S, Polomano RC, Haagen BF, et al. Development and testing of patient satisfaction measure for inpatient psychiatry care. J Nurs Care Qual. 2004;19(2):137‑48. doi:10.1097/00001786-200404000-00011

121. Zendjidjian XY, Auquier P, Lançon C, et al. The SATISPSY-22: development and validation of a French hospitalized patients’ satisfaction questionnaire in psychiatry. Eur Psychiatry. 2015;30(1):172‑8. doi:10.1016/j.eurpsy.2014.04.002

122. Zimmerman M, Gazarian D, Multach M, et al. A clinically useful self-report measure of psychiatric patients’ satisfaction with the initial evaluation. Psychiatry Res. 2017;252:38‑44. doi:10.1016/j.psychres.2017.02.036

123. Bjertnaes O, Iversen HH, Kjollesdal J. PIPEQ-OS–an instrument for on-site measurements of the experiences of inpatients at psychiatric institutions. BMC Psychiatry. 2015;15:234. doi:10.1186/s12888-015-0621-8

124. Ruggeri M, Dall’Agnola R. The development and use of the Verona Expectations for Care Scale (VECS) and the Verona Service Satisfaction Scale (VSSS) for measuring expectations and satisfaction with community-based psychiatric services in patients, relatives and professionals. Psychol Med. 1993;23(2):511‑23. doi:10.1017/S0033291700028609

125. Wright SM, Craig T, Campbell S, Schaefer J, Humble C. Patient satisfaction of female and male users of veterans health administration services. J Gen Intern Med. 2006;21(Suppl 3):S26‑32. doi:10.1111/j.1525-1497.2006.00371.x

126. Tavakol M, Dennick R. Making sense of Cronbach’s alpha. Int J Med Educ. 2011;2:53‑5. doi:10.5116/ijme.4dfb.8dfd

127. Bjertnaes OA, Sjetne IS, Iversen HH. Overall patient satisfaction with hospitals: effects of patient-reported experiences and fulfilment of expectations. BMJ Qual Saf. 2012;21(1):39‑46. doi:10.1136/bmjqs-2011-000137

128. Bolarinwa O. Principles and methods of validity and reliability testing of questionnaires used in social and health science researches. Niger Postgrad Med J. 2015;22(4):195. doi:10.4103/1117-1936.173959

129. Streiner DL, Norman GR, Cairney J. Health Measurement Scales: A Practical Guide to Their Development and Use. 5th ed. Oxford, England: Oxford University Press; 2015.

130. LaVela SL, Gallan AS. Evaluation and measurement of patient experience. Patient Exp J. 2014;1:1.

131. Bleich SN, Ozaltin E, Murray CJL. How does satisfaction with the health-care system relate to patient experience? Bull World Health Organ. 2009;87(4):271‑8. doi:10.2471/BLT.07.050401

132. Crow R, Gage H, Hampson S, et al. The measurement of satisfaction with healthcare: implications for practice from a systematic review of the literature. Health Technol Assess. 2002;6(32):1‑244.

133. Sofaer S, Firminger K. Patient perceptions of the quality of health services. Annu Rev Public Health. 2005;26:513‑59. doi:10.1146/annurev.publhealth.25.050503.153958

134. Vuori H. Patient satisfaction—an attribute or indicator of the quality of care? QRB Qual Rev Bull. 1987;13(3):106‑8. doi:10.1016/S0097-5990(16)30116-6

135. Cleary PD, McNeil BJ. Patient satisfaction as an indicator of quality care. Inquiry. 1988;25(1):25‑36.

136. Williams B, Coyle J, Healy D. The meaning of patient satisfaction: an explanation of high reported levels. Soc Sci Med. 1998;47(9):1351‑9. doi:10.1016/S0277-9536(98)00213-5

137. Sitzia J. How valid and reliable are patient satisfaction data? An analysis of 195 studies. Int J Qual Health Care. 1999;11(4):319‑28.

138. Fitzpatrick R, Hopkins A. Problems in the conceptual framework of patient satisfaction research: an empirical exploration. Sociol Health Illn. 1983;5(3):297‑311. doi:10.1111/1467-9566.ep10491836

139. Williams B. Patient satisfaction: a valid concept? Soc Sci Med. 1994;38(4):509‑16. doi:10.1016/0277-9536(94)90247-X

140. Epstein RM, Street RL. The values and value of patient-centered care. Ann Fam Med. 2011;9(2):100‑3. doi:10.1370/afm.1239

141. Berwick DM, Nolan TW, Whittington J. The triple aim: care, health, and cost. Health Aff (Millwood). 2008;27(3):759‑69.

142. Garratt AM, Solheim E, Danielsen K. National and cross-national surveys of patient experiences: a structured review. Report No. 7-2008. Oslo, Norway: Norwegian Knowledge Centre for the Health Services; 2008.

143. Australian Commission on Safety and Quality in Health Care. Review of patient experience and satisfaction surveys conducted within public and private hospital in Australia. 2012. Available from: https://www.safetyandquality.gov.au/wp-content/uploads/2012/03/Review-of-Hospital-Patient-Experience-Surveys-conducted-by-Australian-Hospitals-30-March-2012-FINAL.pdf. Accessed January 15, 2019.

144. de Silva D. Measuring patient experience. Evidence scan No. 18. London: The Health Foundation; 2013. Available from: https://www.health.org.uk/sites/default/files/MeasuringPatientExperience.pdf. Accessed January 25, 2019.

145. Coulter A. Measuring what matters to patients. BMJ. 2017;356:j816. doi:10.1136/bmj.j816

146. Beattie M, Murphy DJ, Atherton I, Lauder W. Instruments to measure patient experience of healthcare quality in hospitals: a systematic review. Syst Rev. 2015;4(1):97. doi:10.1186/s13643-015-0089-0

147. Larson E, Sharma J, Bohren MA, Tunçalp Ö. When the patient is the expert: measuring patient experience and satisfaction with care. Bull World Health Organ. 2019;97(8):563‑9. doi:10.2471/BLT.18.225201

148. Hays RD, Morales LS, Reise SP. Item response theory and health outcomes measurement in the 21st century. Med Care. 2000;38(9 Suppl):II28‑II42. doi:10.1097/00005650-200009002-00007

149. Bjorner JB, Chang C-H, Thissen D, Reeve BB. Developing tailored instruments: item banking and computerized adaptive assessment. Qual Life Res. 2007;16(Suppl 1):95‑108.

150. Kornhaber R, Walsh K, Duff J, Walker K. Enhancing adult therapeutic interpersonal relationships in the acute health care setting: an integrative review. J Multidiscip Healthc. 2016;9:537‑46. doi:10.2147/JMDH.S116957

151. Gilburt H, Rose D, Slade M. The importance of relationships in mental health care: a qualitative study of service users’ experiences of psychiatric hospital admission in the UK. BMC Health Serv Res. 2008;8:92. doi:10.1186/1472-6963-8-92

152. Berghofer G, Lang A, Henkel H, Schmidl F, Rudas S, Schmitz M. Satisfaction of inpatients and outpatients with staff, environment, and other patients. Psychiatr Serv. 2001;52(1):104‑6. doi:10.1176/appi.ps.52.1.104

153. Murray-Swank A, Glynn S, Cohen AN, et al. Family contact, experience of family relationships, and views about family involvement in treatment among VA consumers with serious mental illness. J Rehabil Res Dev. 2007;44(6):801‑11. doi:10.1682/JRRD.2006.08.0092

154. Tzelepis F, Sanson-Fisher RW, Zucca AC, Fradgley EA. Measuring the quality of patient-centered care: why patient-reported measures are critical to reliable assessment. Patient Prefer Adherence. 2015;9:831‑5. doi:10.2147/PPA.S82441
