
The Adolescent Patient Experiences of Diabetes Care Questionnaire (APEQ-DC): Reliability and Validity in a Study Based on Data from the Norwegian Childhood Diabetes Registry

Authors Iversen HH, Bjertnaes O, Helland Y, Skrivarhaug T

Received 23 September 2019

Accepted for publication 13 December 2019

Published 27 December 2019 Volume 2019:10 Pages 405–416


Checked for plagiarism Yes

Review by Single anonymous peer review

Peer reviewer comments 2

Editor who approved publication: Dr Robert Howland

Hilde Hestad Iversen,1 Oyvind Bjertnaes,1 Ylva Helland,1 Torild Skrivarhaug2,3

1Division of Health Services, Norwegian Institute of Public Health, Oslo N-0403, Norway; 2Division of Paediatric and Adolescent Medicine, The Norwegian Childhood Diabetes Registry, Oslo University Hospital, Oslo N-0424, Norway; 3Faculty of Medicine, Institute of Clinical Medicine, University of Oslo, Oslo N-0318, Norway

Correspondence: Hilde Hestad Iversen
Division of Health Services, Norwegian Institute of Public Health, PO Box 222 Skoyen, Oslo 0213, Norway
Tel +47 464 00 425
Email [email protected]

Purpose: Patient-reported experiences are a key source of information on quality in health care. However, most patient-experience surveys include only adults’ assessments, including parent or proxy surveys in child health-care settings. The aim of this study was to determine the psychometric properties of the Adolescent Patient Experiences of Diabetes Care Questionnaire, a new instrument developed to measure adolescents’ experiences of paediatric diabetes care at hospital outpatient departments in Norway.
Patients and Methods: The questionnaire was developed based on a literature review, qualitative interviews with adolescents, expert-group consultations, pretesting of the questionnaire and a pilot study. The pilot study involved adolescents aged 12–17 years with type 1 diabetes, sampled from the four largest paediatric outpatient departments in Norway. We assessed the levels of missing data, ceiling effects, factor structure, internal consistency, item discriminant validity and construct validity.
Results: The pilot study included responses from 335 (54%) patients. Low proportions of missing or “not applicable” responses were found for 17 of the 19 items, and 14 of these 19 items were below the ceiling-effect criterion. Five indicators were identified: consultation, information on food and physical activity/exercise, nurse contact, doctor contact and outcome. All except one indicator met the criterion of 0.7 for Cronbach’s alpha. Each of the single items had a stronger correlation with its hypothesized indicator than with any of the other indicators. The construct validity of the instrument was supported by 38 out of 45 significant associations.
Conclusion: The content validity of the instrument was secured by a rigorous development process. Psychometric testing produced good evidence for data quality, internal consistency and construct validity. Further research is needed to assess the usefulness of the Adolescent Patient Experiences of Diabetes Care Questionnaire as a basis for quality indicators.

Keywords: surveys and questionnaires, diabetes mellitus, adolescent, patient satisfaction, psychometrics


Type 1 diabetes is one of the most prevalent chronic illnesses diagnosed in childhood, and Norway has one of the highest incidences of childhood-onset type 1 diabetes in the world.1,2 Around 28,000 people (0.6% of the population) have type 1 diabetes according to calculations based on the Norwegian Prescription Database.3 Adolescence is a period when diabetes may become a daily struggle against undesirable blood glucose values and the risk of complications, because this age group experiences many challenges to adherence that are intrinsic to their developmental stage and their demands for peer normalcy.4 Hormonal changes in adolescence can lead to insulin resistance, and several other factors also underlie poor glycemic control in this developmental stage.5

Patient experiences have become an important measure of health-care quality, but most questionnaire surveys only include the experiences or evaluations of adults. Patient-centred health-care services necessarily involve the perspectives of children and adolescents, and so quality measurements should aim to identify which aspects are important from their perspective and try to measure their experiences. A recent study found that patient care experiences are associated with adherence to recommended prevention and treatment processes, clinical outcomes, patient-safety culture within hospitals and health-care utilization.6

Using proxy reports as indicators of the experiences of young patients can be problematic because of possible discrepancies between assessments of health-care services by children and their parents or caregivers.7–13 Despite growing interest in the perspectives of young patients, their voices are rarely heard in national surveys.10 A review of national surveys in England during 2001–2011 showed that the few studies addressing this issue found that young people aged 16–24 years consistently report worse health-care experiences than do older adults. From the current knowledge base, it is unclear whether, and from what age, it would be feasible to include children in patient-experience surveys. It is also not known whether their responses would provide additional information that significantly augments what can be obtained from parents or caregivers answering on their behalf.10

A cross-sectional analysis of national survey data including both parents/caregivers and inpatients aged 8–15 years in eligible hospitals in England showed that including the latter age group in the patient-experience survey was both feasible and enhanced the information obtained from the responses of parents alone.10 A study investigating the perceived quality of diabetes care found that while there was a strong correlation between the perceptions of parents and adolescents, there are some differences between these two populations in the degree of importance that they place on different aspects.14 Another study found that the perceptions that adolescents have of outpatient care differed from those of their parents, particularly in terms of the perceived involvement in care, communication and how they viewed confidentiality.12

Norwegian children attend follow-up appointments with their parents at their local hospital paediatric outpatient department approximately four times yearly. These consultations are with a paediatrician and a diabetes nurse, and dieticians and psychologists can also be consulted on request. We did not find an existing measure for assessing the experiences of young patients with outpatient diabetes care in Norway, and so a new measure was developed. The organization of the consultations differs between the paediatric outpatient departments. Some departments always provide consultations in which both a paediatrician and a diabetes nurse participate. At other departments the patient always sees the paediatrician alone, and meets the diabetes nurse separately in the majority of visits. Still other departments have arranged for the patient to see both the paediatrician and the diabetes nurse at every second appointment. In one department, the patients meet the paediatrician only once a year. Some of the clinics also provide age-matched group consultations for patients in addition to individual consultations.

The aim of the present study was to determine the data quality, validity and internal consistency reliability of the newly developed Adolescent Patient Experiences of Diabetes Care Questionnaire (APEQ-DC). The instrument was developed and tested in accordance with the standard methodology of the national user-experience survey programme in Norway.15–30 The questionnaire was designed to be applied in patient-experience surveys of adolescents aged 12–17 years with type 1 diabetes visiting paediatric outpatient departments in Norway.

Materials and Methods

Questionnaire Development

The development of the APEQ-DC followed the standard methodology defined as important for ensuring sound psychometric properties in patient-satisfaction measurements.31 This methodology also forms part of the user-experience survey programme in Norway. The conceptual approach of the programme allows concurrent measurement of several components, and distinguishes between patient-reported experiences of non-clinical issues, patient-reported outcomes and patient-reported safety.15–30

The development of the questionnaire included a systematic review of the literature on existing questionnaires related to patient experiences and satisfaction, interviews with adolescent patients and consultation with an expert group. A report published in 2018 documents the development of the instrument and the data collection method.32 The process was designed to ensure that the developed questionnaire addresses important aspects of patient experiences, and to secure its content validity. The review of the literature also focused on methodological issues relevant when conducting surveys involving adolescents.

We carried out semi-structured interviews with 14 adolescents aged 12 to 18 years who received care from outpatient departments. Purposeful sampling with maximum variation was used, and this objective was communicated to the health personnel at the outpatient clinic who recruited the patients for the interviews. We tried to recruit participants who differed by age, gender, diabetes duration and age when diagnosed with diabetes. This helped to ensure good representation of patient diversity and that the patients we interviewed were similar to those regularly seen at the clinic. We started with open questions and asked the adolescents to tell us about their daily life with diabetes, their visits to the outpatient clinic, and what help they considered important. Two senior researchers performed the individual interviews separately, and both analysed the results. The expert group comprised providers and researchers in the field of paediatric diabetes care and three representatives from the Norwegian Diabetes Association, including two from its youth organization. The experts gave advice on the content of the questionnaire and on methodological aspects of data collection, and also highlighted the importance of keeping the questionnaire as short as possible and composed of age-appropriate items.

We decided to develop a single version of the questionnaire tailored to all patients, with a lower age limit of 12 years. Adolescents aged between 12 and 17 years constitute a highly heterogeneous group with marked variations in cognitive and social development.33 The differences in their cognitive abilities might involve memory, accuracy of recalled information, comprehension and reading abilities, and so the importance of questions that were simple in both wording and structure was emphasized.

The resulting questionnaire was tested in 14 interviews with adolescents aged 11 to 18 years who visited paediatric outpatient departments. The interviews tested the overall content, relevance of topics and single items, structure, response scales and length, while focusing also on question comprehension and clarity of language.

The questionnaire tested in the pilot survey comprised 30 items, most of which were specific questions about experiences with different aspects of outpatient appointments. The questions addressed arrival and waiting experiences, nursing services, doctor services, the consultation, information and counselling on food and physical activity/exercise, equipment, access to consultations with dietitians and psychologists, parent involvement and perceived outcome. The questionnaire also included three items related to sociodemographic characteristics.

Most of the experience items were scored on a 5-point response scale ranging from 1 (“not at all”) to 5 (“to a very large extent”), with smiley faces used to illustrate the response options. The five-point response scale is applied consistently in the surveys carried out by the Norwegian Institute of Public Health (NIPH), making it possible to compare results over time and, to some extent, between different groups of health-care users.15–30 Most questionnaires within the field of patient experiences and patient satisfaction have used items with all-point-defined scales, where each scale point has a descriptor.31,34

Data Collection

The questionnaire was tested in a pilot study including patients aged 12–17 years with type 1 diabetes at the four largest outpatient departments in Norway, registered in the Norwegian Childhood Diabetes Registry (NCDR), which is a population-based national medical quality registry. The NCDR includes all new cases of childhood-onset diabetes reported from all paediatric departments in Norway when written informed consent has been received from the child and/or the child’s parents (depending on the age of the patient). An eligibility criterion was that every included patient had attended at least one outpatient consultation during the previous year. The survey was conducted by the NIPH and commissioned by the NCDR.

The NCDR transferred data about the sample to the NIPH, including contact information as a basis for conducting the survey. The sample was contacted by postal mail in April 2017. The prospective participants were sent a letter with information about the survey, a printed version of the questionnaire, a prepaid return envelope and an option to answer electronically. Two postal reminders were sent to non-responders. Background data were transferred from the NCDR to the NIPH after data collection was completed, but for some of the patients background data were not complete or available at the time of the transfer.

The study was carried out in accordance with the principles of the Declaration of Helsinki.

Statistical Analysis

Missing-item rates and ceiling effects were assessed. In the national survey programme, items with missing data or “not applicable” responses >20% are usually considered for removal.15–30 A ceiling effect occurs when a large number of patients score at or near the upper limit of the possible responses; a larger ceiling effect reduces the possibility of measuring improvement or excellence and might reduce the validity and reliability of the findings.35 We set the cut-off at 50%, in that an item was considered acceptable if fewer than 50% of the respondents had ticked the most-favourable response option.26,27,36
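
As an illustration, the two screening rules above (the >20% missing/“not applicable” rule and the 50% ceiling criterion) can be sketched as a small check over one item's responses. The function and data below are hypothetical; the study itself conducted its analyses in SPSS.

```python
import numpy as np

# Illustrative sketch of the item-screening rules described above;
# np.nan marks a missing or "not applicable" answer.
def screen_item(responses, missing_cut=0.20, ceiling_cut=0.50, top=5):
    responses = np.asarray(responses, dtype=float)
    missing_rate = np.isnan(responses).mean()      # missing/"not applicable" share
    valid = responses[~np.isnan(responses)]
    ceiling_rate = (valid == top).mean()           # share choosing the best option
    return {
        "missing_rate": missing_rate,
        "ceiling_rate": ceiling_rate,
        "consider_removal": missing_rate > missing_cut,  # >20% rule
        "ceiling_effect": ceiling_rate >= ceiling_cut,   # 50% criterion
    }

# Hypothetical responses for one item on the 1-5 scale:
result = screen_item([5, 5, 4, 3, 5, np.nan, 2, 5, 4, 5])
```

Here 1 of 10 responses is missing (10%, acceptable), but 5 of the 9 valid responses use the top option, so the item would be flagged for a ceiling effect.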

Exploratory factor analysis (EFA) with principal-axis factoring was conducted to assess the underlying factor structure of the APEQ-DC. An oblique rotation, which allows factors to be correlated, was applied. Kaiser’s criterion was used to determine the number of factors to be rotated, based on the retention of factors with eigenvalues above 1. Items with factor loadings below 0.4 or cross-loadings exceeding 0.3 were excluded. The analysis and resulting indicators were based not only on statistical testing and psychometrics but also on theoretical considerations. The organization of outpatient departments in Norway differs between institutions, and the eight items concerning nurse and doctor contacts were analysed separately in order to ensure that the results would be useful in local quality-improvement initiatives. Theoretical considerations also provided justification for separating the outcome item from the process and structure items in the EFAs.
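
The retention step of the EFA (Kaiser's criterion: keep factors whose eigenvalues of the item correlation matrix exceed 1) can be illustrated on simulated data with two latent factors. This is a sketch of the retention criterion only, not of the full principal-axis factoring with oblique rotation used in the study.

```python
import numpy as np

# Simulate five hypothetical items driven by two independent latent factors.
rng = np.random.default_rng(0)
n = 300
f1, f2 = rng.normal(size=(2, n))
items = np.column_stack([
    f1 + 0.5 * rng.normal(size=n),   # items loading on factor 1
    f1 + 0.5 * rng.normal(size=n),
    f1 + 0.5 * rng.normal(size=n),
    f2 + 0.5 * rng.normal(size=n),   # items loading on factor 2
    f2 + 0.5 * rng.normal(size=n),
])

# Kaiser's criterion: count eigenvalues of the correlation matrix above 1.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]   # eigvalsh is ascending; reverse
n_factors = int((eigenvalues > 1.0).sum())
```

With this simulated structure, two eigenvalues exceed 1, so two factors would be rotated, mirroring the two-factor solution reported for the structure and process items.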

The internal consistency reliability of the indicators was assessed by calculating the item-total correlation coefficient and Cronbach’s alpha. The item-total correlation coefficient quantifies the strength of an association between an item and the remainder of its indicator, with a coefficient of 0.4 considered acceptable.37 Cronbach’s alpha assesses the overall correlation between items within an indicator, and an alpha value of 0.7 is considered satisfactory.37,38
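
Both reliability statistics have simple closed forms. A minimal numpy sketch on simulated data (not the survey data) is shown below; `scores` stands for the respondents-by-items matrix of one indicator.

```python
import numpy as np

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Corrected item-total correlation: each item vs the sum of the other items.
def item_total_correlations(scores):
    scores = np.asarray(scores, dtype=float)
    total = scores.sum(axis=1)
    return np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
        for j in range(scores.shape[1])
    ])

# Simulated three-item indicator driven by one latent trait:
rng = np.random.default_rng(1)
latent = rng.normal(size=400)
scores = latent[:, None] + 0.8 * rng.normal(size=(400, 3))
alpha = cronbach_alpha(scores)
r_it = item_total_correlations(scores)
```

Against the study's cut-offs, an indicator would pass if `alpha` exceeds 0.7 and every value in `r_it` exceeds 0.4.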

Discriminant validity was investigated using Pearson’s correlation coefficient. It was hypothesized that each single item would be correlated more strongly with its own indicator than with other indicators.39

Construct validity was explored through comparisons of indicator scores with responses to additional questions from the pilot survey as well as background variables transferred from the NCDR. Our review of the relevant literature found no obvious variables to include in the construct-validity testing for this specific age group with type 1 diabetes. A systematic review of patient-satisfaction measurements in general found that age and health status were relevant across populations, but that review did not include studies involving adolescents.34 A systematic review of the perspectives of young people on health care, focused on identifying indicators of adolescent-friendly health care, found that the important aspects were the accessibility of health care, staff attitude, communication, guideline-driven care, an age-appropriate environment (e.g., waiting time and continuity of care), involvement in health care and health outcomes.40

Consequently, the present testing adopted an exploratory approach, and we explored the influence of several potentially important variables in the patient-satisfaction literature on adults. We anticipated that the scores would be associated with gender, waiting time, continuity of care, accessibility, who completed the questionnaire and the self-reported general condition of the patient.17,18,22,29,30,34 Correlations between these selected variables and the indicators were assessed with Spearman’s rank correlation coefficients (r values) for continuous variables, and the independent-samples t-test and one-way ANOVA for categorical variables.
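
The three tests named above map directly onto standard SciPy routines. The sketch below runs them on simulated data; the variable names mirror the study's variables (indicator score, waiting time, gender) but the values are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 300
waiting_time = rng.integers(0, 60, size=n).astype(float)
# Simulate an indicator score (0-100) that worsens with longer waits:
indicator = 80 - 0.3 * waiting_time + rng.normal(scale=10, size=n)
gender = rng.integers(0, 2, size=n)            # 0 = female, 1 = male

# Spearman's rank correlation for a continuous variable:
rho, p_rho = stats.spearmanr(waiting_time, indicator)

# Independent-samples t-test for a two-level categorical variable:
t, p_t = stats.ttest_ind(indicator[gender == 1], indicator[gender == 0])

# One-way ANOVA for a categorical variable with three levels:
group = rng.integers(0, 3, size=n)
f, p_f = stats.f_oneway(*(indicator[group == g] for g in range(3)))
```

In this simulation the Spearman correlation is negative and significant, matching the study's finding that reported waiting time was negatively correlated with indicator scores.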

All statistical analyses were conducted using SPSS version 23.0.


Results

The survey initially included 685 patients, of whom 60 were excluded because of incorrect addresses. Questionnaire responses were received from 335 (53.6%) patients. Only 12% of the respondents chose the electronic response option. A total of 24.5% of the patients responded to the questionnaire together with their parents, 70.7% responded alone, and 4.8% of the responses were from parents alone.

Table 1 lists the characteristics of the respondents. Shared residence and general condition were obtained as self-reported data from the questionnaire, while the other variables were transferred from the NCDR. Fifty-one percent of the respondents were male, and their mean age was 14.8 years. As indicated in Table 1, the mean age when diagnosed with type 1 diabetes was 9.0 years, and the mean HbA1c level at the last registration in the NCDR was 8.2% (66.1 mmol/mol). Forty-five percent of the respondents had attended more than six consultations during the previous year. Almost every fifth patient had a shared-residence arrangement, 78.6% reported that they felt rather good or very good, and 7.6% were not Norwegian. As mentioned previously, background data were not complete or available for some of the patients at the time of the transfer.

Table 1 Sample Characteristics (N=335)

The proportions of missing data, “not applicable/don’t know” responses, mean values and ceiling effect for the items are presented in Table 2. Only items relevant to include in the psychometric testing are presented. The proportion of missing data ranged from 0.0% to 0.6%. Responses in the “not applicable/don’t know” category ranged from 0.6% to 13.7%. The items had scores in the range of 3.19–4.70 (on the scale from 1 to 5). Fourteen of the 19 items that were relevant to include in the factor analysis were below the ceiling-effect criterion (<50% in the most-positive response option). For the remaining patient experience items the proportion of missing data ranged from 0.3% to 1.0% (results not shown).

Table 2 Item Descriptives

All 19 items that were relevant to include in the factor analysis had low proportions of missing and “not applicable/don’t know” responses (<14%), and no items were excluded due to missing values. Both preliminary empirical testing and theoretical considerations argued for analysing the eight items concerning nurse and doctor contacts separately, to ensure that the results would be useful in local quality-improvement initiatives: the organization of outpatient departments in Norway differs between institutions, and mixing the nurse-contact and doctor-contact items might make the results less interpretable. Theoretical considerations also justified separating the outcome item from the process and structure items in the EFAs.

The first EFA included the ten items on structures and processes, and yielded two factors. One item (waiting) was excluded due to low loading on its own factor (<0.4). The final EFA of the structure and process items yielded the following two factors with eigenvalues of >1 that accounted for 53.1% of the total variance (Table 3): (i) the consultation and (ii) information on food and physical activity/exercise.

Table 3 Factor Loadings and Reliability Statistics

The second EFA included the four items related to nurse contact, and yielded one factor. One item (same nurses) was excluded from the factor analysis due to low factor loading on its own factor (<0.4). The final solution included three items with eigenvalues of >1 that explained 63.3% of the total variance (Table 3).

The third and final EFA included the four items concerning doctor contact, and also yielded one factor. In line with the results from the analysis of the items related to nurse contact, one item (same doctor) was excluded due to low factor loading on its own factor (<0.4). Finally, doctor contact comprised one indicator that accounted for 68.2% of the total variance (Table 3).

Table 3 indicates that the item-total correlations for the four indicators ranged from 0.45 to 0.69, and hence all exceeded the accepted cut-off of 0.4 for indicating that each item was related to the overall indicator. Cronbach’s alpha values exceeded the criterion of 0.7 for all indicators except nurse contact, indicating good internal consistency (the alpha value for nurse contact was 0.69, and hence was very close to the criterion of 0.7). The values for the three other indicators ranged from 0.75 to 0.82.

All items were correlated more strongly with their own indicator than with other indicators, with the item-to-own-indicator correlation coefficients ranging between 0.62 and 0.93 (Table 4). All of the correlations were statistically significant (p<0.001).

Table 4 Correlations Between Items and Indicators

Table 5 indicates that 38 of the 45 tests of construct validity were statistically significant and supported the hypothesized associations. Males had significantly higher scores than females on all except one indicator (nurse contact), with differences ranging from 4 to 13 points on a scale from 0 to 100 (where 100 is the best score).

Table 5 Construct Validity Testing: Associations Between Indicators, Background Variables/Data of the Patient and Responses to Individual Questions in the Questionnaire

The five indicator scores had weak but statistically significant correlations with the single items on waiting time, same nurses and same doctor, with coefficients ranging from 0.12 to 0.25. The only exception was the association between the item on contact with the same doctor and the nurse-contact indicator, which was not significant. The reported waiting time was negatively correlated with all indicator scores. Better experiences of patient–nurse or patient–doctor continuity of care were associated with higher indicator scores.

There were marked differences between patients who considered the number of consultations to be appropriate and those who considered that there were either too few or too many consultations. Those who considered the number of consultations as appropriate had the highest indicator scores, and accordingly the best experiences.

Scores were also significantly correlated with both access to a dietitian and access to a psychologist. The results showed that a better perception of accessibility was associated with higher scores on all indicators. However, two of the five differences were not significant for each of the two accessibility items.

The item about who completed the questionnaire was significant for all but the outcome indicator. The scores were higher when the patient had completed the questionnaire alone than when the patient and her/his parents had completed it together.

The self-reported general condition today had weak or moderate statistically significant correlations with all five indicators (correlation coefficients: 0.25–0.33). Better reported general condition was associated with higher indicator scores.


Discussion

The APEQ-DC was developed using a standardized and comprehensive process. The psychometric testing of the instrument produced good evidence for data quality, internal consistency and construct validity. The content validity was secured by a development process involving the relevant age group, and the instrument was tested in cognitive interviews and a pilot survey. The review of the literature revealed a lack of comparable studies, which makes it difficult to compare the results with other findings.

To our knowledge there have been few national surveys of the experiences that young people have of health care. The literature search revealed little articulation of domains and indicators of the quality of health care for adolescents. The views of children or adolescents have therefore largely been ignored, with parents often acting as proxies in completing questionnaires, despite research showing that evaluations of the quality of care from the perspectives of young people can differ from those of their parents or caregivers.7–13 There is increasing evidence that adolescents may be willing to respond to surveys from the age of 8 years onwards, and that their health-care priorities diverge from those of their parents from the age of 12 years.10,40,41 The current study found that the included adolescents had a high level of engagement, with the survey being completed independently by 70.7% of the respondents. The instrument was accessible to its target audience and showed high rates of completion and low proportions of missing items, indicating a minimal burden on the respondents. The low proportion of “not applicable” responses indicates that the questionnaire was a relevant instrument for most of the patients.

We conducted separate factor analyses for the outcome item and the items related to structure and process in order to avoid contamination between different aspects of the quality of care. Patient-reported experiences usually address the structures and processes of health care, but the APEQ-DC also includes an indicator addressing perceived outcome. The outcome item addressed whether the follow-up at the outpatient departments had helped the patients with their diabetes. This aspect was emphasized as being very important by the patients in the interviews, as well as by the expert group. However, measurement properties are worse for single items than for multi-item scales.35 The review of the literature did not identify any relevant studies of the optimal length of a questionnaire for this age group, but both the present patients and experts emphasized the desirability of the questionnaire being as short as possible. Considering the importance stated by the patients and experts, as well as the efforts made to design a user-friendly and short questionnaire, we decided to keep the item representing the fifth indicator. Further development of the instrument should consider whether additional outcome items could be added to strengthen its reliability.

Low response rates have implications for the representativeness of data, but the minimum acceptable response rate for satisfaction surveys is not clear.42 Few studies have been carried out in populations relevant to the current study. The response rates in two recent surveys in England involving children and adolescents from 8 years of age were 27% and 32%.10,13 The response rate in the present survey was 54% after two reminders, which was therefore considered satisfactory. We contacted the sample by post, and only 12% of the responders chose the electronic response option. Results from previous randomized studies and studies of survey-mode preferences in different patient populations also indicate that the preference for a web mode remains rather modest overall.43,44 However, the patients in the current sample constitute a young population, probably with high Internet literacy, and alternative methodologies for administering the survey electronically should be considered. The technology available for data collection might have affected the response rate as well as the number of adolescents who chose the electronic response option. The number of electronic responses might have been substantially different had patients been contacted directly through secure online systems, but this option was not available in this study.

Questionnaires should survey specific care experiences rather than overall satisfaction, since the latter is highly subjective.6 A review of adolescent adherence in type 1 diabetes revealed few provider-based interventions aimed at improving adolescent adherence to therapy, although there is evidence that providers of paediatric health care are uniquely positioned to improve adherence in their patients.5 Another study and a review concluded that data from large-scale surveys of user experiences are used for local quality-improvement work in the health services, but that systematic guidance is needed on how to use data in this area.45,46 The results from the psychometric testing of the APEQ-DC showed that the questionnaire consistently discriminates between different aspects of patient experiences, represented by five indicators. When conducting user-experience surveys it is essential that the survey tools and methods provide feedback that is sufficiently specific and can be acted on.

Including feedback from adolescents does not diminish the relevance of also asking their parents. Parents form an integral part of the treatment, and their experiences should also be included when assessing the quality of diabetes care. The treatment of diabetes is complex, and parents are often involved in medical-care contacts even when the patients are of adolescent age. Paediatricians communicate with both parents and patients and therefore must communicate effectively with both groups, who may require varying levels of information and communication.5 The current project involved the development and validation of an instrument measuring the experiences of parents with paediatric diabetes care, the PEC-DC (Parent Experiences of Diabetes Care Questionnaire), and a national survey among parents conducted in 2016 and 2017.44,47 A recently published article focuses on the level of agreement between parents and adolescents about their experiences with outpatient departments.48 Understanding the correspondence between the viewpoints of parents and adolescents might be useful for informing interventions aimed at improving the health care provided at outpatient departments.

The present research has highlighted the importance and relevance of including adolescents themselves in giving feedback on health-care issues and informing health-care services about problem areas and possible improvement priorities. The APEQ-DC provides feedback in specific areas that hopefully can be acted on. The results can be used to monitor performance and help outpatient departments to identify areas where the quality should be improved from the perspective of the patient.

We found little published evidence that adolescents are routinely asked to respond to surveys about experiences of health care. Further research should explore how to improve both the quality and quantity of survey data to better understand and measure the experiences that adolescents have of health care.

The results obtained in this study should be validated in follow-up surveys. A key research question for the future is whether monitoring and improving the experiences of adolescents can promote patient-centred care and also affect clinical measures such as the HbA1c level, and accordingly improve long-term health outcomes.

Strengths and Limitations

This study focused on adolescents' own experiences of the health care provided at outpatient departments, and the findings demonstrate that the newly developed questionnaire measures these experiences reliably and is feasible for collecting survey data.

A strength of this study is that it was performed by third parties (the NCDR and the NIPH) that are not involved in providing health care. The variables tested in this study included not only self-reported data obtained using the questionnaire but also administrative and clinical variables obtained from the outpatient departments.

A potential source of bias in this study was that no background data on non-responders were available. Further research should compare respondents and non-respondents to assess whether the latter have experiences that differ from those who choose to respond. Furthermore, the generalizability of the results to all outpatient departments in Norway is uncertain since only four departments were included.

Conclusion
The APEQ-DC comprises five indicators with good internal consistency reliability and validity, and is recommended for future assessments of patient experiences of outpatient paediatric departments in Norway.

Current research pays insufficient attention to the experiences and views of adolescents. Further research is needed to better understand adolescents as patients, since their specific needs may affect both their health and their future use of services.

Abbreviations
APEQ-DC, Adolescent Patient Experiences of Diabetes Care Questionnaire; PEQ-DC, Parent Experiences of Diabetes Care Questionnaire; NCDR, The Norwegian Childhood Diabetes Registry; NIPH, The Norwegian Institute of Public Health; EFA, Exploratory factor analysis.

Ethics Approval and Informed Consent

The study was approved by the Data Protection Authority at Oslo University Hospital. Registration in the NCDR is based on receiving written informed consent from the child (above 12 years of age) and/or the child’s parents. The consent form informs the patient and/or the parents that consent may result in requests to answer questionnaires on patient experiences. Returning the questionnaire constituted patient consent in the survey, which is the standard procedure in all national patient-experience surveys conducted by the NIPH. This study was outside the scope of formal ethical review in Norway.

Consent for Publication

Not applicable.

Data Sharing Statement

The datasets generated and/or analysed during the current study are not publicly available due to protection of personal data.

Acknowledgments
We thank Hilde Bjørndalen at the Department of Paediatric Medicine at Oslo University Hospital for organizing patients for the interviews, for participating in the expert group and for contributing to financing the project through “Lillian and Werner Næss legat”. We also thank Inger Opedal Paulsrud, Olaf Holmboe, Johanne Kjøllesdal and Nam Pham from the Knowledge Centre at the NIPH for their help in developing and conducting the survey, including performing administrative and technical tasks in data collection. We further thank Ann Kristin Drivvoll from the NCDR for extracting data from the registry. Finally, we sincerely thank the patients for participating in the survey.

Author Contributions

All authors contributed to data analysis, drafting and revising the article, gave final approval of the version to be published, and agree to be accountable for all aspects of the work.

Disclosure
The authors report no conflicts of interest in this work.

References
1. Soltész G, Patterson C, Dahlquist G. Global trends in childhood type 1 diabetes. Diabetes Atlas. 2006;3.

2. Skrivarhaug T, Stene LC, Drivvoll AK, Strøm H, Joner G; Norwegian Childhood Diabetes Study Group. Incidence of type 1 diabetes in Norway among children aged 0–14 years between 1989 and 2012: has the incidence stopped rising? Results from the Norwegian Childhood Diabetes Registry. Diabetologia. 2014;57(1):57–62. doi:10.1007/s00125-013-3090-y

3. Strøm H, Selmer R, Birkeland KI, et al. No increase in new users of blood glucose-lowering drugs in Norway 2006–2011: a nationwide prescription database study. BMC Public Health. 2014;14(1):520. doi:10.1186/1471-2458-14-520

4. Borus JS, Laffel L. Adherence challenges in the management of type 1 diabetes in adolescents: prevention and intervention. Curr Opin Pediatr. 2010;22(4):405–411. doi:10.1097/MOP.0b013e32833a46a7

5. Datye KA, Moore DJ, Russell WE, Jaser SS. A review of adolescent adherence in type 1 diabetes and the untapped potential of diabetes providers to improve outcomes. Curr Diab Rep. 2015;15(8):51. doi:10.1007/s11892-015-0621-6

6. Anhang Price R, Elliott MN, Zaslavsky AM, et al. Examining the role of patient experience surveys in measuring health care quality. Med Care Res Rev. 2014;71(5):522–554. doi:10.1177/1077558714541480

7. Chesney M, Lindeke L, Johnson L, et al. Comparison of child and parent satisfaction ratings of ambulatory pediatric subspecialty care. J Pediatr Health Care. 2005;19(4):221–229. doi:10.1016/j.pedhc.2005.02.003

8. Sawyer SM, Ambresin A-E, Bennett KE, Patton GC. A measurement framework for quality health care for adolescents in hospital. J Adolesc Health. 2014;55(4):484–490. doi:10.1016/j.jadohealth.2014.01.023

9. Hargreaves DS, Viner RM. Children’s and young people’s experience of the National Health Service in England: a review of national surveys 2001–2011. Arch Dis Child. 2012;97(7):661–666. doi:10.1136/archdischild-2011-300603

10. Hargreaves DS, Sizmur S, Pitchforth J, et al. Children and young people’s versus parents’ responses in an English national inpatient survey. Arch Dis Child. 2018;103:486–491. doi:10.1136/archdischild-2017-313801

11. Toomey SL, Zaslavsky AM, Elliott MN, et al. The development of a pediatric inpatient experience of care measure: child HCAHPS. Pediatrics. 2015;136(2):360–369. doi:10.1542/peds.2015-0966

12. Byczkowski TL, Kollar LM, Britto MT. Family experiences with outpatient care: do adolescents and parents have the same perceptions? J Adolesc Health. 2010;47(1):92–98. doi:10.1016/j.jadohealth.2009.12.005

13. Hopwood B, Tallett A. Little voice: giving young patients a say. Nurs Times. 2011;107(49–50):18–20.

14. Hanberger L, Ludvigsson J, Nordfeldt S. Quality of care from the patient’s perspective in pediatric diabetes care. Diabetes Res Clin Pract. 2006;72(2):197–205. doi:10.1016/j.diabres.2005.10.009

15. Pettersen KI, Veenstra M, Guldvog B, Kolstad A. The patient experiences questionnaire: development, validity and reliability. Int J Qual Health Care. 2004;16(6):453–463. doi:10.1093/intqhc/mzh074

16. Oltedal S, Garratt A, Bjertnaes O, Bjørnsdottìr M, Freil M, Sachs M. The NORPEQ patient experiences questionnaire: data quality, internal consistency and validity following a Norwegian inpatient survey. Scand J Public Health. 2007;35(5):540–547. doi:10.1080/14034940701291724

17. Iversen HH, Holmboe O, Bjertnaes OA. The Cancer Patient Experiences Questionnaire (CPEQ): reliability and construct validity following a national survey to assess hospital cancer care from the patient perspective. BMJ Open. 2012;2:5. doi:10.1136/bmjopen-2012-001437

18. Garratt AM, Bjertnaes OA, Barlinn J. Parent experiences of paediatric care (PEPC) questionnaire: reliability and validity following a national survey. Acta Paediatr. 2007;96(2):246–252. doi:10.1111/j.1651-2227.2007.00049.x

19. Bjertnaes O, Iversen HH, Kjøllesdal J. PIPEQ-OS–an instrument for on-site measurements of the experiences of inpatients at psychiatric institutions. BMC Psychiatry. 2015;15:234. doi:10.1186/s12888-015-0621-8

20. Garratt A, Bjørngaard JH, Dahle KA, Bjertnaes OA, Saunes IS, Ruud T. The Psychiatric Out-Patient Experiences Questionnaire (POPEQ): data quality, reliability and validity in patients attending 90 Norwegian clinics. Nord J Psychiatry. 2006;60(2):89–96. doi:10.1080/08039480600583464

21. Garratt A, Bjertnaes OA, Holmboe O, Hanssen-Bauer K. Parent experiences questionnaire for outpatient child and adolescent mental health services (PEQ-CAMHS outpatients): reliability and validity following a national survey. Child Adolesc Psychiatry Ment Health. 2011;5:18. doi:10.1186/1753-2000-5-18

22. Garratt AM, Danielsen K, Forland O, Hunskaar S. The Patient Experiences Questionnaire for Out-of-Hours Care (PEQ-OHC): data quality, reliability, and validity. Scand J Prim Health Care. 2010;28(2):95–101. doi:10.3109/02813431003768772

23. Sjetne IS, Iversen HH, Kjøllesdal JG. A questionnaire to measure women’s experiences with pregnancy, birth and postnatal care: instrument development and assessment following a national survey in Norway. BMC Pregnancy Childbirth. 2015;15:182. doi:10.1186/s12884-015-0611-3

24. Bjertnaes OA, Garratt A, Nessa J. The GPs’ Experiences Questionnaire (GPEQ): reliability and validity following a national survey to assess GPs’ views of district psychiatric services. Fam Pract. 2007;24(4):336–342. doi:10.1093/fampra/cmm025

25. Sjetne IS, Bjertnaes OA, Olsen RV, Iversen HH, Bukholm G. The Generic Short Patient Experiences Questionnaire (GS-PEQ): identification of core items from a survey in Norway. BMC Health Serv Res. 2011;11:88. doi:10.1186/1472-6963-11-88

26. Haugum M, Iversen HH, Bjertnaes O, Lindahl AK. Patient experiences questionnaire for interdisciplinary treatment for substance dependence (PEQ-ITSD): reliability and validity following a national survey in Norway. BMC Psychiatry. 2017;17(1):73. doi:10.1186/s12888-017-1242-1

27. Bjertnaes OA, Lyngstad I, Malterud K, Garratt A. The Norwegian EUROPEP questionnaire for patient evaluation of general practice: data quality, reliability and construct validity. Fam Pract. 2011;28(3):342–349. doi:10.1093/fampra/cmq098

28. Olsen RV, Garratt AM, Iversen HH, Bjertnaes OA. Rasch analysis of the Psychiatric Out-Patient Experiences Questionnaire (POPEQ). BMC Health Serv Res. 2010;10:282. doi:10.1186/1472-6963-10-282

29. Holmboe O, Iversen HH, Danielsen K, Bjertnaes O. The Norwegian patient experiences with GP questionnaire (PEQ-GP): reliability and construct validity following a national survey. BMJ Open. 2017;7(9):e016644. doi:10.1136/bmjopen-2017-016644

30. Garratt AM, Bjertnaes OA, Krogstad U, Gulbrandsen P. The OutPatient Experiences Questionnaire (OPEQ): data quality, reliability, and validity in patients attending 52 Norwegian hospitals. Qual Saf Health Care. 2005;14(6):433–437. doi:10.1136/qshc.2005.014423

31. Sitzia J. How valid and reliable are patient satisfaction data? An analysis of 195 studies. Int J Qual Health Care. 1999;11(4):319–328. doi:10.1093/intqhc/11.4.319

32. Iversen HH, Helland Y, Skrivarhaug T. 2018. Development of a Method for Measuring Parent and Patient Experiences of Outpatient Care for Children with Type 1 Diabetes. Oslo: The Norwegian Institute of Public Health, PasOpp-report 37. ISSN: 1890-1565.

33. De Leeuw E. Improving data quality when surveying children and adolescents: cognitive and social development and its role in questionnaire construction and pretesting. Report prepared for the Annual Meeting of the Academy of Finland: Research Programs Public Health Challenges and Health and Welfare of Children and Young People. Helsinki: Academy of Finland; 2011.

34. Crow R, Gage H, Hampson S, et al. The measurement of satisfaction with healthcare: implications for practice from a systematic review of the literature. Health Technol Assess. 2002;6(32):1–244. doi:10.3310/hta6320

35. Streiner DL, Norman GR. Health Measurement Scales—A Practical Guide to Their Development and Use. 4th ed. New York: Oxford University Press; 2008.

36. Ruiz MA, Pardo A, Rejas J, Soto J, Villasante F, Aranguren JL. Development and validation of the “Treatment satisfaction with medicines questionnaire” (SATMED-Q). Value Health. 2008;11(5):913–926. doi:10.1111/j.1524-4733.2008.00323.x

37. Nunnally JC, Bernstein IH. Psychometric Theory. 3rd ed. New York: McGraw-Hill; 1994.

38. Kline RB. Principles and Practice of Structural Equation Modeling. New York: Guildford; 2005.

39. Gandek B, Ware JE Jr, Aaronson NK, et al. Tests of data quality, scaling assumptions, and reliability of the SF-36 in eleven countries: results from the IQOLA project. International quality of life assessment. J Clin Epidemiol. 1998;51(11):1149–1158. doi:10.1016/S0895-4356(98)00106-1

40. Ambresin A-E, Bennett K, Patton GC, Sanci LA, Sawyer SM. Assessment of youth-friendly health care: a systematic review of indicators drawn from young people’s perspectives. J Adolesc Health. 2013;52(6):670–681. doi:10.1016/j.jadohealth.2012.12.014

41. Santelli JS, Rosenfeld WD, DuRant RH, et al. Guidelines for adolescent health research: a position paper of the Society for Adolescent Medicine. J Adolesc Health. 1995;17(5):270–276. doi:10.1016/1054-139X(95)00181-Q

42. Sitzia J, Wood N. Response rate in patient satisfaction research: an analysis of 210 published studies. Int J Qual Health Care. 1998;10(4):311–317. doi:10.1093/intqhc/10.4.311

43. Bjertnaes O, Iversen HH, Skrivarhaug T. A randomized comparison of three data collection models for the measurement of parent experiences with diabetes outpatient care. BMC Med Res Methodol. 2018;18(1):95. doi:10.1186/s12874-018-0557-z

44. Bjertnaes OA, Iversen HH. User-experience surveys with maternity services: a randomized comparison of two data collection models. Int J Qual Health Care. 2012;24(4):433–438. doi:10.1093/intqhc/mzs031

45. Haugum M, Danielsen K, Iversen HH, Bjertnaes O. The use of data from national and other large-scale user experience surveys in local quality work: a systematic review. Int J Qual Health Care. 2014;26(6):592–605. doi:10.1093/intqhc/mzu077

46. Iversen HH, Bjertnaes OA, Groven G, Bukholm G. Usefulness of a national parent experience survey in quality improvement: views of paediatric department employees. Qual Saf Health Care. 2010;19(5):e38.

47. Iversen HH, Helland Y, Bjertnaes O, Skrivarhaug T. Parent experiences of diabetes care questionnaire (PEQ-DC): reliability and validity following a national survey in Norway. BMC Health Serv Res. 2018;18(1):774. doi:10.1186/s12913-018-3591-y

48. Iversen HH, Bjertnaes O, Skrivarhaug T. Associations between adolescent experiences, parent experiences and HbA1c: results following two surveys based on the Norwegian Childhood Diabetes Registry (NCDR). BMJ Open. 2019;9(11):e032201. doi:10.1136/bmjopen-2019-032201
