
Preferences in the Design and Delivery of Neurodevelopmental Follow-Up Care for Children: A Systematic Review of Discrete Choice Experiments

Authors Sharma P, Kularatna S, Abell B, Eagleson K, Vo LK, Halahakone U, Senanayake S, McPhail SM

Received 12 June 2023

Accepted for publication 31 August 2023

Published 19 September 2023, Volume 2023:17, Pages 2325–2341

DOI https://doi.org/10.2147/PPA.S425578


Editor who approved publication: Dr Jongwha Chang



Pakhi Sharma,1 Sanjeewa Kularatna,1 Bridget Abell,1 Karen Eagleson,2,3 Linh K Vo,1 Ureni Halahakone,1 Sameera Senanayake,1 Steven M McPhail1,4

1Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, Brisbane, QLD, Australia; 2Queensland Paediatric Cardiac Service, Queensland Children’s Hospital, Brisbane, QLD, Australia; 3Faculty of Medicine, The University of Queensland, Brisbane, QLD, Australia; 4Digital Health and Informatics Directorate, Metro South Health, Brisbane, QLD, Australia

Correspondence: Pakhi Sharma, Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Queensland University of Technology, 60 Musk Avenue, Kelvin Grove, Brisbane, QLD, 4059, Australia, Tel +61 401829045, Email [email protected]

Abstract: Neurodevelopmental disorders are a significant cause of morbidity. Early detection of neurodevelopmental delay is essential for timely diagnosis and intervention, and it is therefore important to understand the preferences of parents and clinicians for engaging with neurodevelopmental surveillance and follow-up care. Discrete choice experiment (DCE) may be an appropriate method for quantifying these preferences. This review systematically examined how DCEs have been designed and delivered in studies examining neurodevelopmental care of children and identified the preferred attributes that have been reported. PubMed, Embase, CINAHL, and Scopus databases were systematically searched. Studies were included if they used DCE to elicit preferences for a neurodevelopmental follow-up program for children. Two independent reviewers conducted title and abstract screening and full-text screening. Risk of bias was assessed using a DCE-specific checklist. Findings were presented using a narrative synthesis. A total of 6618 records were identified, and 16 papers were included. Orthogonal (n=5) and efficient (n=5) experimental designs were common. There was inconsistent reporting of design-related features. Analysis was primarily completed using mixed logit (n=6) or multinomial logit (n=3) models. Several key attributes for neurodevelopmental follow-up care were identified, including social, behavioral, and emotional support, therapy, waiting time, and out-of-pocket costs. DCE has been successfully used as a preference elicitation method for neurodevelopmental-related care. There is scope for improvement in the design and analysis of DCE in this field. Nonetheless, attributes identified in these studies are likely to be important considerations in the design and implementation of programs for neurodevelopmental care.

Keywords: preferences, discrete choice experiment, attributes, neurodevelopment, follow-up

Introduction

Neurodevelopmental disorders encompass a range of conditions that affect early development and functioning of the brain. Difficulties may arise in social, cognitive, or emotional domains and include diagnoses such as attention-deficit/hyperactivity disorder (ADHD) and autism spectrum disorder (ASD).1 Consequently, neurodevelopmental disorders can cause significant morbidity in children, with subsequent impact on families, and incur substantial costs to healthcare and education systems.2 A survey from the United States reported that 15% of children aged 3 to 17 years were affected by neurodevelopmental disorders.3 Along with genetic and environmental factors, critical illnesses early in life, including congenital heart disease and prematurity, are also known risk factors for neurodevelopmental delays.4–8 Early detection of neurodevelopmental delay is important for timely diagnosis and intervention, and it is therefore important to understand the preferences of parents and clinicians for engaging with neurodevelopmental services and follow-up care.8–10 In this manuscript, the term “neurodevelopmental follow-up care” refers to the care provided to children and their families over time after a neurodevelopmental disorder or delay has been diagnosed, in order to monitor medical and developmental outcomes and provide suitable support.11,12 Knowing what parents and clinicians prefer, in terms of neurodevelopmental follow-up care, is likely to aid in the design of services to enhance uptake and acceptability.

There are different methods of understanding patient preferences, both qualitative and quantitative. Some studies have used surveys, interviews, and focus groups to identify preferences in neurodevelopmental follow-up care;13–17 however, the experimental design and analysis methods in these studies were not robust for quantifying preferences in this context. Care for children with neurodevelopmental disorders may involve many important attributes, and the weighted preferences for each of these cannot readily be captured and disentangled using such methods for the purpose of informing health service design. Consequently, quantitative choice methods may be suitable in this context, as they may be more reliable at predicting authentic behavior18,19 and can provide a robust means of capturing and quantifying the most relevant preferences.

Discrete choice experiment (DCE) is a quantitative choice method in which individuals are presented with several hypothetical health scenarios (choice sets), each containing several alternatives with different attributes for the individuals to choose from.20 A DCE can be designed using the following key steps: identifying whether an experiment is labelled or unlabelled; identifying and finalising attributes; identifying the attribute levels; identifying the number of choice sets in the experimental design; selecting a suitable experimental design strategy; conducting a pilot study; conducting the main study; and analysing the results using a suitable analysis method.21,22 DCE is considered a robust technique that uses the ‘framework of rational choice’, which infers that people are likely to select the option that yields the maximum benefit or utility when given a set of choices. In this context, for an individual i making a choice j, utility (U) is described as a function of the characteristics of the choices (Zj), the characteristics of the people making the choice (Xi), and an error term denoting unobserved attributes of choices and individuals (eij):23

Uij = F(Zj, Xi) + eij

In this equation, F is assumed to be a linear function (hence this is also known as the linear random utility model) and can be represented as:

Uij = βZj + γ(Zj × Xi) + eij

where β and γ are the utilities (also known as parameter estimates) associated with the choice features and the interaction of choice features with individual characteristics, respectively.
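To make the linear random utility model concrete, the following minimal Python sketch (not drawn from the included studies; the attribute names and coefficient values are hypothetical) computes the deterministic utilities for two follow-up care alternatives and the resulting multinomial logit choice probabilities.

import numpy as np

# Hypothetical attribute levels for two follow-up care alternatives:
# [out-of-pocket cost (dollars), waiting time (weeks), therapy included (1 = yes)]
Z = np.array([
    [50.0, 4.0, 1.0],   # alternative A
    [20.0, 12.0, 0.0],  # alternative B
])

# Hypothetical parameter estimates (beta): negative for cost and waiting time,
# positive for the inclusion of therapy.
beta = np.array([-0.02, -0.08, 0.9])

# Deterministic part of utility for each alternative, V_j = beta . Z_j
V = Z @ beta

# With an iid extreme-value error term e_ij, the multinomial logit model gives
# the probability of choosing alternative j as exp(V_j) / sum_k exp(V_k).
choice_probabilities = np.exp(V) / np.exp(V).sum()
print(dict(zip(["A", "B"], choice_probabilities.round(3))))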

DCE, being a stated preference technique, is useful in determining weighted preferences of various stakeholders, including patients, parents and clinicians in a healthcare decision-making situation, which is not possible via other common methods such as interviews, focus groups, and surveys.24 Additionally, they estimate monetary and non-monetary values, for example, willingness to pay (WTP), willingness to accept (WTA), and probability scores,21,25 which can be beneficial to include in a neurodevelopmental follow-up care as it may help in better understanding the approximate expenditure for future follow-up care.
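As a worked illustration of how such monetary values are commonly derived (using the hypothetical coefficients from the sketch above, not estimates from any included study), the marginal WTP for an attribute is the negative ratio of its coefficient to the cost coefficient.

# Hypothetical coefficients from a fitted choice model
beta_therapy = 0.9    # utility of therapy being included in follow-up care
beta_cost = -0.02     # disutility per additional dollar of out-of-pocket cost

# Marginal willingness to pay for the therapy attribute
wtp_therapy = -beta_therapy / beta_cost
print(f"WTP for inclusion of therapy: ${wtp_therapy:.0f}")  # prints $45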

Although the application of DCEs in fields of healthcare is relatively new, the use of DCEs in studies of neurodevelopmental follow-up care has recently increased. However, methodological limitations in experimental design and analysis methods, as well as sub-optimal reporting in publications have been observed in this field.26–29 In order to advance the quality and strength of conclusions that can be drawn from DCEs in this field, it is important to understand the methodological features that have been used, as well as identify potential neurodevelopmental care attributes appropriate for use in future preference studies. This systematic review aimed to examine how DCEs have been designed and delivered in studies for neurodevelopmental follow-up care of children, and to identify preferred attributes of these models of care for the purpose of informing neurodevelopmental follow-up care health service design.

Methods

This systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.30 A protocol for this review was registered with PROSPERO (Record ID = CRD42022325685).

Data Sources, Search Strategy, and Screening

On 08 February 2022, we searched the PubMed, Embase, CINAHL, and Scopus databases. The search strategy was developed using existing literature and expert assistance from researchers experienced with systematic reviews in this field. A combination of Medical Subject Heading (MeSH) terms and keywords was used for groups of terms related to “children”, “patient preference”, and “neurodevelopmental follow-up care”. Search syntax and the count of returned titles for each string in all databases are provided in Online Resource 1 (Tables S1–S6). An advanced Google search was also conducted with the first 20 pages screened; however, no additional relevant papers were identified.

All records obtained through the searches were uploaded to EndNote version 20,31 and deduplicated prior to screening. These records were then uploaded to Rayyan,32 a web-based systematic review management platform, for screening. Two authors (PS and UH) independently completed the title and abstract screening of all records identified from the databases. Any disagreements were resolved by consulting a third author (SK). After the title and abstract screening, full-text publications were independently screened by another pair of reviewers (PS and SS), and disagreements were resolved by consulting SK.

Study Selection

Original research studies were included if (1) the population was children, infants, or adolescents, (2) a DCE methodology was used, and (3) the context was neurodevelopmental follow-up care or programs for neurodevelopmental disorders (for example, ADHD, ASD, dyslexia, and cerebral palsy). Studies using any other preference methods, including surveys and interviews, were excluded. Publications were excluded if they were reviews, meta-analyses, case studies, case reports, conference abstracts, letters to the editor, guidelines, or commentaries. Non-human studies were also excluded. No limits were applied on publication date or language.

Data Extraction

Two authors were involved in data extraction for the included studies: one extracted the data (PS) and the other checked the extracted data (SS) for all studies. Any unresolved disagreements were resolved by consulting a third author (SK). Missing data were reported for the final analysis and interpretation. All information was extracted into a Microsoft Excel Workbook.33

The following data were extracted: publication details (author name, year, country, title, and journal name); study details (aims/objectives, study design, dates of recruitment, primary and secondary outcome); neurodevelopmental disorder and children’s details (type of neurodevelopmental disorder/delay, inclusion criteria for children in the study, age, and sample size); respondent details (respondent type, age, sample size, response rate, and recruitment strategy); DCE methodological details (for example, attributes included for a model of care, experimental design, analysis method, and software used), results, conclusions, limitations, and the advantages and disadvantages of the studies.

Risk of Bias Assessment

We assessed risk of bias in the included studies with a widely used checklist initially created by Lancsar and Louviere in 2008,24 and modified by Mandeville in 2014.34 Domains include choice task design, experimental design, conduct, and analysis. Two reviewers (PS and LV) independently applied the checklist to each study and recorded judgements for each item. Any unresolved disagreements were resolved by consulting a third author (SK).

Data Analysis

The data were summarised descriptively and tabulated for each DCE study included in the review. A narrative synthesis was completed for demographic characteristics, attribute selection process, experimental design, analysis method, and the identified attributes.

Results

A total of 6618 papers were identified; after removing duplicates, 6361 remained. During screening, most studies were excluded because they were not performed in the context of neurodevelopmental follow-up or were not focussed on children. Of the 49 publications that progressed to full-text screening, 16 studies were included in the review.26–29,35–46 The screening and selection process is reported in the PRISMA diagram in Figure 1.

Figure 1 Preferred Reporting Items for Systematic Reviews and Meta-Analyses report.

Characteristics of Included Studies

Table 1 summarises the important study characteristics of the included DCE publications. Studies were conducted between 2008 and 2022, mostly in the United States of America (n=5)28,29,36,41,42 and Canada (n=3)26,27,40 or both (n=2),44,45 followed by European countries (n=5)37–39,43,46 and Australia (n=1).35 ADHD was the most frequently studied neurodevelopmental disorder (n=7),37–39,42,44–46 followed by children’s mental health (CMH) (n=2)26,27 and ASD (n=1).29 Parents (n=9)27,37–40,42,44–46 or caregivers (n=4)28,35,36,41 were the main DCE respondent types. However, two studies also reported professionals (including child and youth workers, social workers, psychologists, psychiatrists, early childhood education providers, and nurses)26,43 and one study reported teachers as respondents.29 The number of participants in studies ranged from 38 to 1194, with a median sample size of 241.

Table 1 Demographic Characteristics of the Included DCE Studies (n=16)

Methodological Characteristics of Included DCE Studies

Table 2 describes the attribute selection, choice set generation, and experimental designs used in each included DCE study.

Table 2 Characteristics of the Attribute Selection Process, Choice Set Generation, and Experimental Designs of the Included DCE Studies (n=16)

Attribute Selection Process

Authors used a variety of methods to select attributes, including focus group discussions (n=8), literature reviews (n=6), and sometimes a combination of these and other methods. Studies included two (n=8), three (n=7), or five (n=1) alternatives. The number of attributes included in studies ranged from 3 to 20, and the maximum number of levels per attribute was 5. Attributes can be either quantitative (numerical) or qualitative (categorical). Usually, categorical variables need to be coded for use in utility functions.47 Most studies in this review lacked reporting on the method used to code categorical variables (n=11). Of those that did report this, two used dummy coding40,42 and two used effects coding.28,36 Only two studies reported utility specifications, both of which stated that an alternative-specific constant was used.35,43
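As an illustration of the two coding schemes (using a hypothetical three-level “mode of delivery” attribute that is not taken from the included studies), the sketch below shows that dummy and effects coding differ only in how the reference level is represented.

import pandas as pd

# Hypothetical three-level categorical attribute: mode of follow-up delivery.
modes = ["in-person", "telehealth", "hybrid"]

# Dummy coding: "in-person" is the reference level (all zeros), so estimated
# coefficients are interpreted relative to the reference level.
dummy_codes = {
    "in-person":  [0, 0],
    "telehealth": [1, 0],
    "hybrid":     [0, 1],
}

# Effects coding: same columns, but the reference level is coded as -1,
# so coefficients are interpreted as deviations from the mean utility.
effects_codes = {
    "in-person":  [-1, -1],
    "telehealth": [1, 0],
    "hybrid":     [0, 1],
}

coding = pd.DataFrame({
    "mode": modes,
    "dummy_telehealth": [dummy_codes[m][0] for m in modes],
    "dummy_hybrid": [dummy_codes[m][1] for m in modes],
    "effects_telehealth": [effects_codes[m][0] for m in modes],
    "effects_hybrid": [effects_codes[m][1] for m in modes],
})
print(coding)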

Generating Choice Sets

In this review, there were more unlabelled (n=13) than labelled choice sets (n=3). Most studies used a forced-choice approach (n=14), in which respondents must select one of the presented alternatives in each choice set. Only one study used an opt-out approach,40 and one study did not report this information.38 The total number of choice sets in the included studies ranged from 8 to 609, but the number of choice sets presented per participant ranged from 3 to 30. None of the studies explicitly stated whether they used a homogeneous (respondents are shown the same choice sets) or heterogeneous (respondents are shown only a subset of choice sets by dividing them into blocks or groups) design matrix. However, most studies did not block their designs (n=11); therefore, we assumed that they used a homogeneous design matrix.
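The sketch below illustrates blocking (ie, a heterogeneous design matrix) using invented attributes: a small full factorial is paired into choice sets and then split into blocks so that each respondent answers only a subset of the sets.

from itertools import product
import random

# Hypothetical attributes and levels for a follow-up care alternative.
attributes = {
    "cost": [0, 50, 100],
    "waiting_time_weeks": [2, 8],
    "therapy_included": ["yes", "no"],
}

# Full factorial of profiles (3 x 2 x 2 = 12 profiles).
profiles = list(product(*attributes.values()))

# Pair profiles into two-alternative choice sets (6 sets in total).
random.seed(1)
random.shuffle(profiles)
choice_sets = [profiles[i:i + 2] for i in range(0, len(profiles), 2)]

# Heterogeneous design: split the 6 choice sets into 2 blocks of 3,
# so each respondent answers only 3 choice sets instead of all 6.
blocks = [choice_sets[0::2], choice_sets[1::2]]
for b, block in enumerate(blocks, start=1):
    print(f"Block {b}: {len(block)} choice sets")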

Experimental Design Strategy

In this review, all studies used a fractional factorial design except one that did not state the type of experimental design.29 Orthogonal designs were common (n=5): one study used an orthogonal array39 and one an orthogonally balanced design.36 Efficient designs were also identified (n=5), of which three were D-efficient and two were Bayesian D-efficient. Of the Bayesian D-efficient designs, one study used a modified Fedorov algorithm with 500 Halton draws35 and the other used a hierarchical Bayesian algorithm with 2000 draws (the type of draws was not stated).40 Six studies did not report which fractional factorial category they used.
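For context, efficient designs are generally selected by minimising the D-error of candidate designs, det(I)^(-1/K), where I is the information matrix of the assumed choice model and K is the number of parameters. The following is a minimal sketch under an MNL model with zero priors; the design and attributes are hypothetical and not taken from the included studies.

import numpy as np

def mnl_d_error(choice_sets, beta):
    # D-error of a design under a multinomial logit model with prior coefficients beta.
    k = len(beta)
    info = np.zeros((k, k))
    for X in choice_sets:                      # X: alternatives x attributes
        p = np.exp(X @ beta)
        p = p / p.sum()                        # MNL choice probabilities
        info += X.T @ (np.diag(p) - np.outer(p, p)) @ X
    return np.linalg.det(info) ** (-1.0 / k)

# Three hypothetical choice sets, each with two alternatives described by
# three attributes (cost, waiting time in weeks, therapy included).
design = [
    np.array([[50.0, 4.0, 1.0], [20.0, 12.0, 0.0]]),
    np.array([[100.0, 2.0, 0.0], [0.0, 8.0, 1.0]]),
    np.array([[0.0, 4.0, 1.0], [50.0, 12.0, 1.0]]),
]
# Zero priors give a utility-neutral evaluation; candidate designs with lower
# D-error would be preferred.
print(mnl_d_error(design, beta=np.zeros(3)))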

Study Type and DCE Administration

In this review, half of the studies conducted a pilot study, and the other half did not. One of the studies included in the review was itself a pilot study.36 Only two studies used priors,35,41 while the others did not state whether priors were used. Arora et al obtained prior values of +0.1 and −0.1 from existing literature,35 and Tsai et al used zero-valued priors.41 Arora et al also reported that their sample was obtained from a broader clinical trial, while the remaining studies did not report any information on sample size. Most DCE surveys were administered online or via email (n=9), one was paper based, some used both formats (n=5), and one was administered via both telephone and email.

Analysis Methods in Included DCE Studies

The analysis methods used in each DCE are presented in Table 3. Various models were used in the DCE studies (Figure 2). Most of the studies used a single model (n=13) and the rest used multiple models. Of those that used one model, mixed logit was common (n=6), followed by multinomial logit (MNL) (n=3) and random-effects logit (n=2). Studies with multiple models used mixed logit and latent class analysis (LCA) (n=2) or MNL and LCA (n=1). Two of the mixed logit studies extended their model specification to generalised MNL (GMNL)35 and heteroskedastic MNL,41 which make different assumptions about parameter distributions. For model estimation, only five studies stated that a hierarchical Bayes estimation method was used.26,27,42,44,45 While it can be assumed that the others used maximum likelihood estimation, this was not reported explicitly. Of the studies that used LCA, classes were selected using log-likelihood, the Akaike information criterion (AIC), or the Bayesian information criterion (BIC). Preference heterogeneity was reported in 10 studies. Studies accounted for explained, unexplained, and scale heterogeneity. Only three studies performed monetary evaluations, reporting willingness to pay (WTP) (n=2)37,40 and willingness to accept (WTA) (n=1).35
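To illustrate how a multinomial (conditional) logit model of this kind can be estimated by maximum likelihood without relying on any particular choice-modelling package, the following self-contained sketch simulates choices from hypothetical data and recovers the coefficients with scipy; all values are invented for illustration.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate 500 choice sets, each with 2 alternatives and 3 attributes.
n_sets, n_alts, n_attrs = 500, 2, 3
X = rng.normal(size=(n_sets, n_alts, n_attrs))
true_beta = np.array([-0.5, 1.0, 0.3])

# Simulate choices under the MNL model (Gumbel-distributed error terms).
utilities = X @ true_beta + rng.gumbel(size=(n_sets, n_alts))
choices = utilities.argmax(axis=1)

def neg_log_likelihood(beta):
    v = X @ beta                                              # deterministic utilities
    log_probs = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(n_sets), choices].sum()

result = minimize(neg_log_likelihood, x0=np.zeros(n_attrs), method="BFGS")
print("Estimated coefficients:", result.x.round(2))  # should be close to true_beta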

Table 3 Characteristics of the Analysis Methods and Most Preferred Attributes Identified in the Included DCE Studies (n=16)

Figure 2 Types of analysis methods identified in the included studies [n = 16].

Abbreviations: MNL, multinomial logit; LCA, latent class analysis.

Outcomes of Included DCE Studies: Preferred Attributes

In the included DCE studies, the attributes preferred by parents or caregivers and by clinicians differed from each other. For parents and caregivers, the social, behavioral, and emotional situation of the child was important. The effectiveness and side effects of medication, receiving a genetic diagnosis, shorter waiting times, and lower out-of-pocket expenditure were also identified as important attributes by parents. Individual and group parent training was highlighted as important by some parents. One study indicated that parents preferred treatments involving behavior therapy and expressed a desire to avoid medication.42 For clinicians, a child’s controlling behavior, ability to self-care, decision-making, and future communication and expression skills were important attributes. One study also found that professionals preferred active learning materials with parenting groups and therapist coaching calls.26 Table 3 highlights only the most important attributes (based on model outcomes) in the DCE studies. The range of attributes initially included by authors across all studies is presented in Online Resource 2 (Table S7).

Risk of Bias and Quality of Included Studies

An assessment of the reporting quality and risk of bias of the included studies is provided in Table 4. There were high risks in choice task design, especially regarding the inclusion of an opt-out or status quo option or the justification of a forced choice. Not justifying why a forced choice was used may oblige participants to choose an option they are not interested in. Moderate to high risks were also identified in the experimental design of studies, which may compromise findings about neurodevelopmental follow-up care if the choice tasks are inadequately designed. The conduct of the discrete choice experiments was typically at low risk, with the exception of the ‘response rate sufficient to minimise response bias’ item. Studies either had a low response rate or did not report it at all. A high response rate may ensure better generalisability of outcomes. The analysis of the discrete choice experiments was also typically at low risk. The econometric models selected were generally appropriate for the choice task designs.

Table 4 Quality Assessment of the Included DCE Studies (n=16)

Discussion

The primary aim of this systematic review was to synthesize the methods and findings of DCE studies conducted in the context of neurodevelopmental follow-up for children. Our review identified that most studies had been conducted over the past decade. This aligns with a literature review in 2019 reporting a 20% increase in DCE use over the preceding 10 years.25 Most included studies focussed on ADHD and ASD, which are the two most common neurodevelopmental disorders observed in childhood.1,48 Overall, the findings of this review suggested variability in the methods used across studies and highlighted some important attributes from the perspective of parents and carers. Most importantly, the design of a DCE is crucial to the conduct and quality of the study, and it includes several steps that require attention to detail. Here, we discuss the key shortcomings of the DCE studies included in this review and suggest opportunities for improvement.

Most studies identified in this review were unlabelled. Even though the research question may dictate whether an experiment is labelled or unlabelled, evidence suggests that a participant’s choice may sometimes be influenced by labels when they are present.49,50 The utility function (described as the measurement of consumers’ preferences) for an unlabelled experiment is generic (identical across alternatives), but for a labelled experiment alternative-specific utility functions should be specified.51 Two of the three labelled studies in this review used label-specific constants. The importance of specifying utility functions accurately has been highlighted in previous research, as it is crucial for identifying the relative importance of one attribute over another and, in turn, for better understanding respondents’ behavioral responses and satisfaction with the overall model.21,51,52
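As an illustration (with hypothetical attributes, not drawn from the included studies), the generic utility function of an unlabelled experiment and an alternative-specific specification for a labelled experiment could be written, in the notation used above, as:

Unlabelled (generic): U(k) = β1 cost(k) + β2 waiting_time(k) + e(k), identical for every alternative k

Labelled (eg, clinic-based versus telehealth follow-up):
U(clinic) = ASC_clinic + β1_clinic cost(clinic) + β2_clinic waiting_time(clinic) + e(clinic)
U(telehealth) = β1_telehealth cost(telehealth) + β2_telehealth waiting_time(telehealth) + e(telehealth)

where the alternative-specific constant (ASC_clinic) captures the average effect of the “clinic” label itself, the coefficients are allowed to differ between labels, and telehealth serves as the reference alternative.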

This review identified four studies that used a partial profile design,26,27,44,45 meaning they showed only a subset of attributes in each choice task due to the large number of attributes. Evidence suggests that the number of alternatives, attributes, and levels shown to respondents in a DCE has a significant impact on their behavioral responses.53–55 A smaller number of attributes reduces the cognitive burden on respondents, although quantity may not be as important as the relevance of the attributes.56 Consequently, partial profiling may be beneficial in studies comprising a large number of attributes and is an important methodological consideration for future studies in this field.

Most studies did not report blocking their experimental design and are likely to have used a homogeneous design. Prior evidence indicates that heterogeneous designs are advantageous because they provide more information and allow blocking, which may reduce the cognitive burden for respondents.57 However, a homogeneous design may be appropriate if few parameters are to be estimated.58 Seven studies in this review used a large number of choice sets.26,27,35,39,42,44,45 Although the number of choice sets is typically determined by the total number of parameters to be estimated in the choice model, there has been inconsistency in prior recommendations regarding the number needed. Hensher et al suggested using 4 to 16 choice tasks.59 This contrasts with studies that have demonstrated that the number of choice sets may have minimal impact on findings.60,61 Nonetheless, regardless of the methodological nuance required to address specific research questions, the potential fatigue imposed on respondents remains a key consideration when designing studies in this field.

The design strategy was typically not clearly reported in the included studies. This limited our ability to draw strong conclusions regarding the most appropriate approach for conducting neurodevelopmental follow-up DCEs. On the one hand, efficient designs may be preferable in that they capture maximum information, yield reliable parameter estimates with smaller sample sizes, and can accommodate attribute levels or combinations that are unusual (unrealistic or impossible) or dominant.62–64 On the other hand, orthogonal designs may be better at covering the attribute space and are less technically demanding to conduct.65 In the studies included in this review, however, capturing maximum and reliable information appeared to be prioritised in this context of investigating preferences for neurodevelopmental follow-up care.

Half of the studies identified in the review conducted pilot studies, and only two studies reported information on priors. For the remaining studies, this is a notable limitation, as there is no indication of how accurately the prior parameter values were obtained. Previous studies suggest that conducting a pilot study is important for generating feedback about the DCE and for obtaining parameter priors, so that a better design and an impression of likely parameter estimates can be obtained for the main study.66,67 Moreover, if informative priors are estimated, this will assist with calculations of the minimum required sample size.66,68

The analysis methods used by the DCEs in this review each have strengths and limitations. The mixed logit model was the most common analysis method among studies in this review, followed by MNL. MNL (also known as conditional logit) has been used and modified by many researchers since its introduction in 1973.69 However, with advances in software packages and increasing demand, more flexible and complex model specifications have become feasible, which likely contributed to mixed logit and latent class models receiving more attention in the literature.70 Furthermore, MNL accounts for explained preference heterogeneity, whereas mixed logit, random-effects logit, and latent class models account for unexplained preference heterogeneity. Preference heterogeneity refers to different respondents having different preferences in the same choice scenario,71 which a recent study suggests is important in health-related DCEs.71 Preference heterogeneity was reported in 10 studies in this review, two of which also reported scale heterogeneity (observed in scale parameters).72 This indicates that the authors acknowledged that different behaviors may influence neurodevelopmental follow-up care.
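To illustrate how a mixed logit model captures unexplained preference heterogeneity, the following sketch (hypothetical values only) treats one coefficient as randomly distributed across respondents and averages the resulting MNL probabilities over simulated draws, which is the core idea behind simulated maximum likelihood estimation.

import numpy as np

rng = np.random.default_rng(42)

# One hypothetical choice set: 2 alternatives x 2 attributes (cost, therapy included).
X = np.array([[50.0, 1.0],
              [20.0, 0.0]])

# Mixed logit: the therapy coefficient varies across respondents,
# here Normal(mean=0.9, sd=0.5); the cost coefficient is fixed at -0.02.
n_draws = 5000
beta_cost = -0.02
beta_therapy = rng.normal(0.9, 0.5, size=n_draws)

# For each draw, compute MNL probabilities, then average over draws.
probs = np.empty((n_draws, 2))
for r in range(n_draws):
    v = X @ np.array([beta_cost, beta_therapy[r]])
    probs[r] = np.exp(v) / np.exp(v).sum()

print("Average choice probabilities:", probs.mean(axis=0).round(3))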

Implications for Future Research

The nature of the research question, as well as the quality and quantity of data, are influential and interdependent considerations when determining the most appropriate experimental DCE design.21 This review has highlighted opportunities for methodological improvement at all stages of the design and analysis process to improve the robustness of DCEs in this field. During the initial design stage of a DCE, careful consideration of the attribute selection process, the generation of choice sets, and the choice of a suitable experimental design strategy is likely to help future studies in the field optimise the accuracy and informativeness of findings. Similarly, appropriate prior selection and sample size calculation are key methodological features that should be purposefully devised. It is also important for future studies to consider response behaviors that may be related to unexplained heterogeneity, which can be accounted for through appropriate analyses including mixed logit or latent class models. More monetary value calculations could help in building better health service models and offer stakeholders an understanding of the estimated expenditure for follow-up care.

Strengths and Limitations of the Review

Although this review considered a broad scope of literature with no date or language limitations, which may be considered a strength, the included publications required an appropriate level of methodological description to achieve the aim of this review. This meant that some publication types, including conference abstracts, case studies, and letters to the editor, were considered unsuitable and excluded from the review; consequently, findings from this review are not inclusive of information reported in other publication forms. While this was appropriate for addressing the intended study aims, it is noteworthy that, even among the included journal manuscripts, unclear reporting often contributed to uncertainty in our understanding of some important components of the DCE methodology used. Due to the limited number of studies and heterogeneous sample characteristics across diagnostic sub-groups, it was also not possible to complete separate analyses across sub-groups of neurodevelopmental disorders. Rather, the scope of the present review was limited to providing an overview of preferences to inform health service design that is not necessarily neurodevelopment-diagnosis specific, regarding overall approaches to neurodevelopment-related follow-up care. Due to the nature of neurodevelopmental disorders and individual circumstances, differences in preferences may exist both among consumers within the same diagnostic category and between diagnostic groups. However, health system design internationally has not typically involved the establishment of different health systems and services for individual diagnostic groups; rather, services that provide neurodevelopmental follow-up are typically designed to serve a range of at-risk children with or without specific diagnoses at the time of entering the service. While we have tried to be inclusive of a range of diagnoses and disorders in the scope of searches for this study, the available literature was not inclusive of all diagnostic groups that may access services for neurodevelopmental follow-up care. Consequently, while the findings addressed the study aims regarding the design and delivery of DCEs examining neurodevelopmental care of children and identified the preferred attributes that have been reported, the findings arising from studies included in this review should not be interpreted as representing the preferences of all consumers and families who may access neurodevelopmental follow-up care.

Conclusion

In summary, several aspects of DCEs applied to neurodevelopmental follow-up care were assessed. DCE is an appropriate preference elicitation method as it allows the estimation of weighted preferences, uncertainties, preference heterogeneity, and the relative importance of one attribute over another, and offers substantial preference information beneficial to patient-centred care. The attributes identified from the studies in this review may increase awareness of important components of future follow-up programs. Furthermore, when discussing management options, findings from this review may contribute to a broader understanding of potential areas for focussed discussions that may be important from the perspective of children who require neurodevelopmental support, along with the needs of their families.

Abbreviations

DCE, Discrete Choice Experiment; ADHD, Attention-Deficit/Hyperactivity Disorder; ASD, Autism Spectrum Disorder; WTP, Willingness to Pay; WTA, Willingness to Accept; PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses; MeSH, Medical Subject Headings; MNL, Multinomial Logit; LCA, Latent Class Analysis; GMNL, Generalised Multinomial Logit; AIC, Akaike Information Criterion; BIC, Bayesian Information Criterion.

Key Points

Key attributes of follow-up care for children needing neurodevelopmental support, and the needs of their families, were identified. A range of opportunities exists for improving the robustness of the experimental design of discrete choice experiments in this field.

Acknowledgement

The authors would like to acknowledge Associate Professor Sanjeewa Kularatna and Professor Steven M McPhail for their invaluable guidance and support as the senior authors of this paper.

Author Contributions

All authors made a significant contribution to the work reported, whether that is in the conception, study design, execution, acquisition of data, analysis and interpretation, or in all these areas; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; have agreed on the journal to which the article has been submitted; and agree to be accountable for all aspects of the work.

Funding

This work is part of a program of research funded through a Medical Research Future Fund (MRFF) Congenital Heart Disease Grant (ARGCHDG0035) 2020-2024. This funder did not have any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Disclosure

The authors report no conflicts of interest in this work.

References

1. Doernberg E, Hollander E. Neurodevelopmental disorders (ASD and ADHD): DSM-5, ICD-10, and ICD-11. CNS Spectr. 2016;21(4):295–299. doi:10.1017/S1092852916000262

2. Jeste SS. Neurodevelopmental behavioral and cognitive disorders. CONTINUUM. 2015;21(3):690–714. doi:10.1212/01.CON.0000466661.89908.3c

3. Boyle CA, Boulet S, Schieve LA, et al. Trends in the prevalence of developmental disabilities in US children, 1997–2008. Pediatrics. 2011;127(6):1034–1042. doi:10.1542/peds.2010-2989

4. Hu WF, Chahrour MH, Walsh CA. The diverse genetic landscape of neurodevelopmental disorders. Annu Rev Genomics Hum Genet. 2014;15(15):195–213. doi:10.1146/annurev-genom-090413-025600

5. Bhatt AB, Foster E, Kuehl K, et al. Congenital heart disease in the older adult: a scientific statement from the American Heart Association. Circulation. 2015;131(21):1884–1931. doi:10.1161/CIR.0000000000000204

6. Soto CB, Olude O, Hoffmann RG, et al. Implementation of a routine developmental follow‐up program for children with congenital heart disease: early results. Congenit Heart Dis. 2011;6(5):451–460. doi:10.1111/j.1747-0803.2011.00546.x

7. Marino BS. New concepts in predicting, evaluating, and managing neurodevelopmental outcomes in children with congenital heart disease. Curr Opin Pediatr. 2013;25(5):574–584. doi:10.1097/MOP.0b013e328365342e

8. Marino BS, Lipkin PH, Newburger JW, et al. Neurodevelopmental outcomes in children with congenital heart disease: evaluation and management: a scientific statement from the American Heart Association. Circulation. 2012;126(9):1143–1172. doi:10.1161/CIR.0b013e318265ee8a

9. Matsuzaki T, Matsui M, Ichida F, et al. Neurodevelopment in 1‐year‐old Japanese infants after congenital heart surgery. Pediatr Int. 2010;52(3):420–427. doi:10.1111/j.1442-200X.2009.02974.x

10. Majnemer A, Limperopoulos C, Shevell MI, Rohlicek C, Rosenblatt B, Tchervenkov C. A new look at outcomes of infants with congenital heart disease. Pediatr Neurol. 2009;40(3):197–204. doi:10.1016/j.pediatrneurol.2008.09.014

11. McGowan EC, Vohr BR. Neurodevelopmental follow-up of preterm infants: what is new? Pediatr Clin. 2019;66(2):509–523. doi:10.1016/j.pcl.2018.12.015

12. Salt A, Redshaw M. Neurodevelopmental follow-up after preterm birth: follow up after two years. Early Hum Dev. 2006;82(3):185–197. doi:10.1016/j.earlhumdev.2005.12.015

13. Ahmed R, Borst JM, Yong CW, Aslani P. Do parents of children with attention-deficit/hyperactivity disorder (ADHD) receive adequate information about the disorder and its treatments? A qualitative investigation. Patient Prefer Adherence. 2014;8:661. doi:10.2147/PPA.S60164

14. Angelo D, Jones S, Kokoska S. Family perspective on augmentative and alternative communication: families of young children. Augmentat Alternat Commun. 1995;11(3):193–202. doi:10.1080/07434619512331277319

15. Bailes AF, Gannotti M, Bellows DM, Shusterman M, Lyman J, Horn SD. Caregiver knowledge and preferences for gross motor function information in cerebral palsy. Dev Med Child Neurol. 2018;60(12):1264–1270. doi:10.1111/dmcn.13994

16. Davis CC, Claudius M, Palinkas LA, Wong JB, Leslie LK. Putting families in the center: family perspectives on decision making and ADHD and implications for ADHD care. J Atten Disord. 2012;16(8):675–684. doi:10.1177/1087054711413077

17. Kawarai S, Symon JB, Hernández A, Fryling MJ. Assessing preferences among behavioral interventions with Japanese Parents of Children With Developmental Disabilities. Child Fam Behav Ther. 2017;39(3):191–199. doi:10.1080/07317107.2017.1338450

18. Payne JW, Bettman JR, Johnson EJ. The use of multiple strategies in judgment and choice. Archiv Ophthalmol. 1993;111(2):194–196. doi:10.1001/archopht.1993.01090020048021

19. Phillips KA, Johnson FR, Maddala T. Measuring what people value: a comparison of “attitude” and “preference” surveys. Health Serv Res. 2002;37(6):1659–1679. doi:10.1111/1475-6773.01116

20. de Bekker‐Grob EW, Ryan M, Gerard K. Discrete choice experiments in health economics: a review of the literature. Health Econ. 2012;21(2):145–172. doi:10.1002/hec.1697

21. Lancsar E, Fiebig DG, Hole AR. Discrete choice experiments: a guide to model specification, estimation and software. Pharmacoeconomics. 2017;35(7):697–716. doi:10.1007/s40273-017-0506-4

22. Ghijben P, Lancsar E, Zavarsek S. Preferences for oral anticoagulants in atrial fibrillation: a best–best discrete choice experiment. Pharmacoeconomics. 2014;32(11):1115–1127. doi:10.1007/s40273-014-0188-0

23. Bruch EE, Mare RD. Methodological issues in the analysis of residential preferences, residential mobility, and neighborhood change. Sociol Methodol. 2012;42(1):103–154. doi:10.1177/0081175012444105

24. Lancsar E, Louviere J. Conducting discrete choice experiments to inform healthcare decision making. Pharmacoeconomics. 2008;26(8):661–677. doi:10.2165/00019053-200826080-00004

25. Soekhai V, Whichello C, Levitan B, et al. Methods for exploring and eliciting patient preferences in the medical product lifecycle: a literature review. Drug Discov Today. 2019;24(7):1324–1331. doi:10.1016/j.drudis.2019.05.001

26. Cunningham CE, Deal K, Rimas H, Chen Y, Buchanan DH, Sdao-Jarvie K. Providing information to parents of children with mental health problems: a discrete choice conjoint analysis of professional preferences. J Abnorm Child Psychol. 2009;37(8):1089–1102. doi:10.1007/s10802-009-9338-9

27. Cunningham CE, Deal K, Rimas H, et al. Modeling the information preferences of parents of children with mental health problems: a discrete choice conjoint experiment. J Abnorm Child Psychol. 2008;36(7):1123–1138. doi:10.1007/s10802-008-9238-4

28. Cross J, Yang J-C, Johnson FR, et al. Caregiver preferences for the treatment of males with fragile X syndrome. J Dev Behav Pediatr. 2016;37(1):71–79. doi:10.1097/DBP.0000000000000234

29. Hugh ML, Johnson LD, Cook C. Preschool teachers’ selection of social communication interventions for children with autism: an application of the theory of planned behavior. Autism. 2022;26(1):188–200. doi:10.1177/13623613211024795

30. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Syst Rev. 2021;10(1):1–11. doi:10.1186/s13643-021-01626-4

31. EndNote. EndNote. Available from: https://endnote.com/. Accessed December 12, 2022.

32. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan — a web and mobile app for systematic reviews. Available from: https://www.rayyan.ai/. Accessed December 12, 2022.

33. Microsoft Corporation. Microsoft Excel. Available from: https://office.microsoft.com/excel. Accessed December 12, 2022.

34. Mandeville KL, Lagarde M, Hanson K. The use of discrete choice experiments to inform health workforce policy: a systematic review. BMC Health Serv Res. 2014;14(1):1–14. doi:10.1186/1472-6963-14-367

35. Arora S, Goodall S, Viney R, Einfeld S. Using discrete-choice experiment methods to estimate the value of informal care: the case of children with intellectual disability. Pharmacoeconomics. 2019;37(4):501–511. doi:10.1007/s40273-018-0637-2

36. dosReis S, Ng X, Frosch E, Reeves G, Cunningham C, Bridges JF. Using best–worst scaling to measure caregiver preferences for managing their child’s ADHD: a pilot study. Patient. 2015;8(5):423–431. doi:10.1007/s40271-014-0098-4

37. Glenngård AH, Hjelmgren J, Thomsen PH, Tvedten T. Patient preferences and willingness-to-pay for ADHD treatment with stimulants using discrete choice experiment (DCE) in Sweden, Denmark and Norway. Nord J Psychiatry. 2013;67(5):351–359. doi:10.3109/08039488.2012.748825

38. Mühlbacher AC, Rudolph I, Lincke H-J, Nübling M. Preferences for treatment of attention-deficit/hyperactivity disorder (ADHD): a discrete choice experiment. BMC Health Serv Res. 2009;9(1):1–10. doi:10.1186/1472-6963-9-149

39. Nafees B, Setyawan J, Lloyd A, et al. Parent preferences regarding stimulant therapies for ADHD: a comparison across six European countries. Eur Child Adolesc Psychiatry. 2014;23(12):1189–1200. doi:10.1007/s00787-013-0515-6

40. Regier D, Friedman J, Makela N, Ryan M, Marra C. Valuing the benefit of diagnostic testing for genetic causes of idiopathic developmental disability: willingness to pay from families of affected children. Clin Genet. 2009;75(6):514–521. doi:10.1111/j.1399-0004.2009.01193.x

41. Tsai J-H, Crossnohere NL, Strong T, Bridges JF. Measuring meaningful benefit-risk tradeoffs to promote patient-focused drug development in Prader-Willi syndrome: a discrete-choice experiment. MDM Policy Pract. 2021;6(2):23814683211039457. doi:10.1177/23814683211039457

42. Waschbusch DA, Cunningham CE, Pelham WE Jr, et al. A discrete choice conjoint experiment to evaluate parent preferences for treatment of young, medication naive children with ADHD. J Clin Child Adolesc Psychol. 2011;40(4):546–561. doi:10.1080/15374416.2011.581617

43. Webb EJ, Lynch Y, Meads D, et al. Finding the best fit: examining the decision-making of augmentative and alternative communication professionals in the UK using a discrete choice experiment. BMJ open. 2019;9(11):e030274. doi:10.1136/bmjopen-2019-030274

44. Wymbs FA, Chen Y, Rimas HM, Deal K, Waschbusch DA, Pelham WE. Examining parents’ preferences for group parent training for ADHD when individual parent training is unavailable. J Child Fam Stud. 2017;26(3):888–904. doi:10.1007/s10826-016-0588-1

45. Wymbs FA, Cunningham CE, Chen Y, et al. Examining parents’ preferences for group and individual parent training for children with ADHD symptoms. J Clin Child Adolesc Psychol. 2016;45(5):614–631. doi:10.1080/15374416.2015.1004678

46. Fegert JM, Slawik L, Wermelskirchen D, Nuebling M, Muehlbacher A. Assessment of parents’ preferences for the treatment of school-age children with ADHD: a discrete choice experiment. Expert Rev Pharmacoecon Outcomes Res. 2011;11(3):245–252. doi:10.1586/erp.11.22

47. Bliemer MC, Rose JM. Designing Stated Choice Experiments: State-of-The-Art. Kyoto University; 2006:1–35.

48. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders: DSM-5. 5th ed. Washington, DC: American Psychiatric Association; 2013.

49. Bialkova S, Grunert KG, Juhl HJ, Wasowicz-Kirylo G, Stysko-Kunkowska M, van Trijp HC. Attention mediates the effect of nutrition label information on consumers’ choice. Evidence from a choice experiment involving eye-tracking. Appetite. 2014;76:66–75. doi:10.1016/j.appet.2013.11.021

50. Shen J, Saijo T. Does an energy efficiency label alter consumers’ purchasing decisions? A latent class approach based on a stated choice experiment in Shanghai. J Environ Manage. 2009;90(11):3561–3573. doi:10.1016/j.jenvman.2009.06.010

51. Van Der Pol M, Currie G, Kromm S, Ryan M. Specification of the utility function in discrete choice experiments. Value Health. 2014;17(2):297–301. doi:10.1016/j.jval.2013.11.009

52. Torres C, Hanley N, Riera A. How wrong can you be? Implications of incorrect utility function specification for welfare measurement in choice experiments. J Environ Econ Manage. 2011;62(1):111–121. doi:10.1016/j.jeem.2010.11.007

53. Caussade S, de Dios Ortúzar J, Rizzi LI, Hensher DA. Assessing the influence of design dimensions on stated choice experiment estimates. Transp Res B. 2005;39(7):621–640. doi:10.1016/j.trb.2004.07.006

54. DeShazo J, Fermo G. Designing choice sets for stated preference methods: the effects of complexity on choice consistency. J Environ Econ Manage. 2002;44(1):123–143. doi:10.1006/jeem.2001.1199

55. Hensher DA. Identifying the influence of stated choice design dimensionality on willingness to pay for travel time savings. J Trans Econom Policy. 2004;38(3):425–446.

56. Chrzan K. Using partial profile choice experiments to handle large numbers of attributes. Int J Mark Res. 2010;52(6):827–840. doi:10.2501/S1470785310201673

57. Sándor Z, Wedel M. Heterogeneous conjoint choice designs. J Market Res. 2005;42(2):210–218. doi:10.1509/jmkr.42.2.210.62285

58. Kessels R. Homogeneous versus heterogeneous designs for stated choice experiments: ain’t homogeneous designs all bad? J Choice Model. 2016;21:2–9. doi:10.1016/j.jocm.2016.08.001

59. Hensher DA, Stopher PR, Louviere JJ. An exploratory analysis of the effect of numbers of choice sets in designed choice experiments: an airline choice application. J Air Trans Manage. 2001;7(6):373–379. doi:10.1016/S0969-6997(01)00031-X

60. Bech M, Kjaer T, Lauridsen J. Does the number of choice sets matter? Results from a web survey applying a discrete choice experiment. Health Econ. 2011;20(3):273–286. doi:10.1002/hec.1587

61. Rose JM, Hensher DA, Caussade S, de Dios Ortúzar J, Jou R-C. Identifying differences in willingness to pay due to dimensionality in stated choice experiments: a cross country analysis. J Transp Geogr. 2009;17(1):21–29. doi:10.1016/j.jtrangeo.2008.05.001

62. Louviere JJ, Street D, Burgess L, Wasi N, Islam T, Marley AA. Modeling the choices of individual decision-makers by combining efficient choice experiment designs with extra preference information. J Choice Model. 2008;1(1):128–164. doi:10.1016/S1755-5345(13)70025-3

63. Collins AT, Bliemer MC, Rose JM. Constrained stated choice experimental designs. 2014.

64. Bliemer MC, Collins AT. On determining priors for the generation of efficient stated choice experimental designs. J Choice Model. 2016;21:10–14. doi:10.1016/j.jocm.2016.03.001

65. Street DJ, Burgess L, Louviere JJ. Quick and easy choice sets: constructing optimal and nearly optimal stated choice experiments. Int J Res Market. 2005;22(4):459–470. doi:10.1016/j.ijresmar.2005.09.003

66. de Bekker-Grob EW, Donkers B, Jonker MF, Stolk EA. Sample size requirements for discrete-choice experiments in healthcare: a practical guide. Patient. 2015;8(5):373–384. doi:10.1007/s40271-015-0118-z

67. Barthold D, Brah AT, Graham SM, Simoni JM, Hauber B. Improvements to survey design from pilot testing a discrete-choice experiment of the preferences of persons living with HIV for long-acting antiretroviral therapies. Patient. 2022;15(5):513–520. doi:10.1007/s40271-022-00581-z

68. Rose JM, Bliemer MC. Sample size requirements for stated choice experiments. Transportation. 2013;40(5):1021–1041. doi:10.1007/s11116-013-9451-z

69. McFadden D. Conditional logit analysis of qualitative choice behavior. 1973.

70. Hensher DA, Greene WH. The mixed logit model: the state of practice. Transportation. 2003;30(2):133–176. doi:10.1023/A:1022558715350

71. Vass C, Boeri M, Karim S, et al. Accounting for preference heterogeneity in discrete-choice experiments: an ISPOR special interest group report. Value Health. 2022;25(5):685–694. doi:10.1016/j.jval.2022.01.012

72. Karim S, Craig BM, Vass C, Groothuis-Oudshoorn CG. Current practices for accounting for preference heterogeneity in health-related discrete choice experiments: a systematic review. PharmacoEconomics. 2022;40(10):1–14.
