ClinicoEconomics and Outcomes Research » Volume 8

Methods to construct a step-by-step beginner’s guide to decision analytic cost-effectiveness modeling

Authors Rautenberg T, Hulme C, Edlin R

Received 25 May 2016

Accepted for publication 12 July 2016

Published 11 October 2016, Volume 2016:8, Pages 573–581

DOI https://doi.org/10.2147/CEOR.S113569


Review by Single anonymous peer review

Peer reviewer comments 3

Editor who approved publication: Professor Giorgio Colombo



Tamlyn Rautenberg,1 Claire Hulme,2 Richard Edlin,3

1Health Economics and HIV/AIDS Research Division (HEARD), University of Kwazulu Natal, KwaZulu Natal, South Africa; 2Leeds Institute of Health Sciences (LIHS), Academic Unit of Health Economics (AUHE), University of Leeds, West Yorkshire, United Kingdom; 3Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand

Background: Although guidance on good research practice in health economic modeling is widely available, there is still a need for a simpler instructive resource that can guide beginner modelers as they build a model for the first time.
Aim: To develop a beginner’s guide to be used as a handheld companion alongside the model development process.
Methods: A systematic review of best practice guidelines was used to construct a framework of steps undertaken during the model development process. Focused methods review supplemented this framework. Consensus was obtained among a group of model developers to review and finalize the content of the preliminary beginner’s guide. The final beginner’s guide was used to develop cost-effectiveness models.
Results: Data were extracted from 32 best practice guidelines, which were synthesized and critically evaluated to identify steps for model development; these steps formed a framework for the beginner’s guide. Within five phases of model development, eight broad submethods were identified and 19 methodological reviews were conducted to develop the content of the draft beginner’s guide. Two rounds of consensus agreement were undertaken to reach agreement on the final beginner’s guide. To assess fitness for purpose (ease of use and completeness), models were developed using the beginner’s guide both by independent modelers and by the researcher.
Conclusion: A combination of systematic review, methods reviews, consensus agreement, and validation was used to construct a step-by-step beginner’s guide for developing decision analytical cost-effectiveness models. The final beginner’s guide is a step-by-step resource to accompany the model development process from understanding the problem to be modeled, model conceptualization, model implementation, and model checking through to reporting of the model results.

Keywords:
step-by-step guide, modeling, cost-effectiveness analysis, decision analysis, economic evaluation

Introduction

In countries where health technology assessment mechanisms are well established, decision analytical cost-effectiveness models (subsequently referred to as models) play a pivotal role in addressing difficult health-care decisions. Developing models is a complex process: it involves numerous steps, and different skills are needed to complete each step in a way that adheres to best practice in modeling.

In 2012, the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and the Society for Medical Decision Making published an update of the 2003 recommendations for best practices in modeling.1–7 These publications set out the “gold standard” of modeling practice; however, by their own admission they are “not intended as primers on their subjects”1 and may not be well understood by those embarking on modeling for the first time.1,8 Few resources are available to accompany the process of model development. Chilcott et al suggested that “although checklists and good modelling practice have been developed, these perhaps indicate a general destination of travel without specifying how to get there.”8 This research directly addresses this gap by constructing a beginner’s guide (BG) to modeling, to be used by model developers alongside model building. The objective is to construct a resource that is more basic than the ISPOR guidelines and thereby improves the ease and accuracy with which a new modeler learns to model. For example, suppose a new researcher embarking on practical modeling for the first time consults the ISPOR guidelines and begins conceptualizing the model. The guidance on model conceptualization set out by Roberts et al and their supplementary material is very comprehensive as to both what to do and how to go about doing it.2 After conceptualization,2 however, readers are directed according to the type of model they will develop (state transition modeling,4 discrete event simulation,3 or dynamic transmission modeling5), and it is difficult for novice modelers to bridge this gap of selecting an appropriate model structure.
What would be useful is a resource that bridges this gap, guiding the modeler through something more aligned to the algorithm set out by Barton et al9 or the taxonomy of Brennan, Chick, and Davies.10 A valuable resource would therefore be a more elementary and supportive one that guides the novice modeler through each step in the model development process. What is required is a hybrid of best practice recommendations and a primer-style instructive text to enable early modelers to quickly achieve the standards of modeling aspired to by ISPOR.

Aim

The aim of this research is to develop a BG to support model developers alongside the development of decision analytical cost-effectiveness models.

Methods

Four phases of research were undertaken to develop the BG, which are shown in Table 1 and described below.

Table 1 Methods used to develop the beginner’s guide (BG) to decision analytic cost-effectiveness modeling

It was necessary to develop the BG incorporating two components: first, an exhaustive list of steps involved in the model development process; and second, an “instructive statement” attached to each step setting out any guidance, considerations, or recommendations related to carrying out that step. For example, one step could be to define the perspective of the model analysis, and the instructive statement could read something like: “consider a payer perspective when [...]” and so on. To achieve this level of detail, it was necessary first to conduct a systematic review of guidelines (Phase I) to derive the list of all possible steps, and then a methods review (Phase II) to develop the instructive component of the BG for all steps, as described below.

In Phase I, a systematic review of best practice guidelines was performed to identify all steps potentially undertaken during model development. Best practice guidelines were defined as publications intended for the purpose of improving the methods and quality of models published by health economics experts or organizations, such as ISPOR. Through a process of data extraction, deductive reasoning, and guideline synthesis, an exhaustive list of steps used in developing a model was constructed. Full details of the systematic review are provided in the Supplementary materials.

For Phase II, the steps identified in Phase I were grouped into submethods and focused literature searches were performed to identify methods literature for each submethod. For example, when developing a model, it is necessary to identify resources and value those resources. A focused review was performed to identify best practice in identifying and valuing resources, so that the BG could set out “how” to carry out these two steps. Review of methods papers informed the “how to” part of the BG. Full details of the focused searches are provided in the Supplementary materials.

In Phase III, the preliminary BG was subjected to a consensus approach to evaluate whether it was current (included all information aligned with current methodological thinking in modeling), complete (covered all relevant aspects of model development), and clear (logical and unambiguous). A modified nominal group technique consensus method11 was used, combining private (electronic) feedback in round 1 with interactive (face-to-face) feedback in a structured, facilitated meeting in round 2.

In round 1 of the consensus phase, the BG was distributed electronically via email to participants for familiarization, review, and feedback. For each model development step, participants were asked whether the step should be included, excluded, or reworded. Where ≥75% of respondents supported inclusion or exclusion of the step, this was considered final and not taken forward to the second consensus phase. Where no consensus was reached or ≥50% of the experts suggested rewording, the issue was instead addressed in round 2. In this way, the face-to-face time with experts was prioritized toward dealing with issues in which broad consensus did not already exist.
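The round-1 decision rule above can be expressed as a small sketch. The exact tallying procedure (and the priority between the rewording and inclusion thresholds) is not specified in the text, so the ordering below is an assumption; only the ≥75% and ≥50% thresholds come from the study.

```python
# A minimal sketch of the round-1 decision rule. The precedence of the
# rewording check over the inclusion/exclusion checks is an assumption.

def classify_step(include: int, exclude: int, reword: int) -> str:
    """Classify one model development step from round-1 votes."""
    total = include + exclude + reword
    if total == 0:
        return "round 2"
    if reword / total >= 0.50:
        return "round 2"          # >=50% suggested rewording: discuss face-to-face
    if include / total >= 0.75:
        return "include"          # broad consensus to keep the step
    if exclude / total >= 0.75:
        return "exclude"          # broad consensus to drop the step
    return "round 2"              # no consensus: discuss face-to-face

print(classify_step(5, 0, 1))  # include (5/6 ≈ 83% support inclusion)
print(classify_step(2, 2, 2))  # round 2 (no threshold reached)
```

This mirrors how the face-to-face time in round 2 was reserved for steps lacking broad agreement.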

In Phase IV, two independent novice modelers at the University of Leeds developed a model using the guide. They evaluated the guide with respect to whether they understood the steps, found them useful, and performed them at that time point during the model-building process. In addition, the researcher redeveloped a case study model consisting of a five-state Markov model and recorded each step in the model-building process to evaluate the completeness of the guide.
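The case study was a five-state Markov model. The cohort arithmetic underlying any such model can be sketched as follows; the states and transition probabilities here are invented for illustration (a three-state example, not the case study model).

```python
# Illustrative Markov cohort trace: the cohort distribution is pushed
# through a row-stochastic transition matrix once per model cycle.
# States and probabilities are invented, not those of the case study.

def markov_trace(p, start, cycles):
    """Propagate a cohort distribution through `cycles` transitions.

    p[i][j] is the per-cycle probability of moving from state i to j.
    """
    trace = [start]
    for _ in range(cycles):
        prev = trace[-1]
        nxt = [sum(prev[i] * p[i][j] for i in range(len(p)))
               for j in range(len(p))]
        trace.append(nxt)
    return trace

# Three-state example: well -> sick -> dead (dead is absorbing).
P = [[0.9, 0.08, 0.02],
     [0.0, 0.85, 0.15],
     [0.0, 0.00, 1.00]]
trace = markov_trace(P, [1.0, 0.0, 0.0], 2)
print([round(x, 4) for x in trace[-1]])  # → [0.81, 0.14, 0.05]
```

Costs and utilities would then be attached to each state occupancy in the trace to produce expected costs and outcomes per cycle.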

Results

The BG is designed to be used by a beginner modeler alongside the model-building process and it is therefore necessary to structure the content in alignment with the model development process. Current literature divides the modeling process into a number of phases. The first phase addresses the question of what is to be modeled and looks at understanding the real-world decision problem and its context.8,12–14 Models, by their nature, simplify a complex system into one that can be captured using less complex mathematical methods. The second phase requires a model developer to consider how the real-life decision problem translates into a model. During this phase, there is a tension between what Sonnenberg describes as the “theoretical model,” which represents an understanding of the natural history/biological fact, and the “practical model,” which is the “most detailed model which can be constructed given the limitations of the available data and the need for the model to be understood.”14 A third phase involves the actual programming of the model,8,12,14 including obtaining the data to be used in the model.15 In the fourth phase, the different strategies are evaluated13 by considering likely outcomes, validation of these outcomes, and sensitivity analysis.8,15 The final phase involves engaging with the decision-makers8,13 and disseminating the model results along with uncertainty of the model. These model phases are iterative, and it is common for developers to go back and forth between different phases of model development.8

The phases of model development are summarized in Table 2.

Table 2 Consolidated summary of descriptions of the model development process


Note: The numbering in this table (read vertically within each column) corresponds to the order of the task described by the original (referenced) author.


Abbreviations: RCT, randomized controlled trial; PSA, probabilistic sensitivity analysis.

Within each model development phase, multiple steps are undertaken. The number and order of these steps may differ across cases and are influenced by factors including the type of model being developed, the modeler’s preference, and the data available to populate the model (eg, the literature search strategy will differ depending on whether primary data from a randomized controlled trial or secondary data are used to populate the model). The BG is structured according to the five phases of model development set out by Chilcott et al.8

Phase I – systematic review

Data were extracted from 32 best practice guidelines, which were synthesized and critically evaluated to identify the steps taken and submethods used in model development. For a list of guidelines, please see the Supplementary materials. A total of 148 steps involved in model development were identified and arranged into eight broad submethods, namely, evidence, model structure, resource valuation, effectiveness, uncertainty, validity, reporting, and general.

Phase II – methodological reviews

Developing a model involves many so-called submethods. For example, the steps relating to the evidence used in a model may be grouped together in an “evidence” submethod, which covers literature searching and review, evidence selection, and evidence grading. Similarly, separate submethods can be described for aspects, such as measuring and valuing health-related quality of life, characterizing uncertainty, and testing the validity of models. Nineteen submethod reviews were undertaken to identify and review literature relating to the eight submethods identified in Phase I. A summary of the literature reviewed for each submethod is shown in Table 3 and described below.

Table 3 Scope of review of eight submethods to inform the content of the beginner’s guide

The evidence submethod included the steps involved in literature searching and review, literature selection, evidence grading, and selecting input parameters for a model. Five publications were reviewed to explore search methods and the use of evidence in models,16,17 evaluate methods for selecting evidence based on quality and other criteria,18,19 and discuss methodological challenges when using evidence for modeling.20

The model structure submethod was defined as processes (eg, algorithms) and methods to select the most appropriate model structure. Three papers were reviewed which provide a taxonomy of model structures or guidance on choosing between them.9,10,21

The resource valuation submethod was defined as the methods used for identifying and quantifying relevant resources, assigning costs to resources, and the discounting of costs. Eight papers were relevant for the submethod dealing with resource valuation, including those dealing with quantifying resources,22 valuing resources,23,24 and approaches for discounting.25–29

The effectiveness submethod was defined as having two components. The first captures the measurement of clinical benefits, both with respect to efficacy data from randomized controlled trials and effectiveness data from other studies, such as observational studies and registries. The second component focuses on health-related quality of life, with emphasis on methods to describe (measure) health-related quality of life, such as disease-specific, disease- and symptom-specific, and generic measures (eg, EuroQol EQ-5D, SF-6D, Health Utilities Index). This component also includes methods to value health-related quality of life (standard gamble, time trade-off, rating scales); means of eliciting preferences (patients, medical experts, general population); and discounting of outcomes. Relevant guidelines here include an overview of general issues regarding effectiveness,30–32 adverse events,33 quality of life measurement,34 and methods used to incorporate quality of life into models.35

The uncertainty submethod was defined as types of uncertainty and methods to characterize uncertainty. A total of 12 papers were reviewed for uncertainty, including several that contribute to an understanding of terminology,15,36–40 methods to address uncertainty in general,41,42 and more focused guidelines concentrating on parameter uncertainty43,44 and structural uncertainty.45,46
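One common way to characterize parameter uncertainty is probabilistic sensitivity analysis, in which each uncertain input is assigned a distribution and the model is re-run per draw. The sketch below follows the usual convention of a beta distribution for probabilities and a gamma distribution for costs; the parameter names and numbers are invented for illustration.

```python
import random

# Illustrative parameter-uncertainty sampling for a probabilistic
# sensitivity analysis. Distribution choices follow common convention
# (beta for probabilities, gamma for costs); values are invented.

random.seed(1)

def draw_parameters():
    return {
        "p_response": random.betavariate(40, 60),   # bounded on [0, 1]
        "cost_treat": random.gammavariate(25, 80),  # non-negative, right-skewed
    }

draws = [draw_parameters() for _ in range(1000)]
mean_p = sum(d["p_response"] for d in draws) / len(draws)
print(round(mean_p, 3))  # close to the beta mean 40/(40+60) = 0.4
```

Each draw would be fed through the model to yield one cost–effect pair, and the cloud of pairs summarizes decision uncertainty.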

Ten key publications formed the basis of the review of validity, which covered types of validity and validation methods.8,12,47–52 Schlesinger sets out recommended terms to describe model credibility,47 while Sargent defines and outlines methods of validation and verification.12 Other papers included a framework for assessing validity in models48 or applied broad validation issues to specific models.49,50 In several other papers, validity is considered even though the focus of the papers appears to lie elsewhere.8,51,52

Model reporting was defined as any numeric or graphical results of a model, including the incremental cost-effectiveness ratio and confidence intervals; the cost-effectiveness plane; the cost-effectiveness acceptability curve; the cost-effectiveness acceptability frontier; the net benefit approach; and value of information analysis. For reporting of models, six papers consider the graphical presentation of uncertainty from probabilistic sensitivity analyses.15,53–57 Several older papers compare methods of presenting uncertainty around incremental cost-effectiveness ratios,36,58–61 with more recent papers instead considering the use of the net benefit approach and value of information analysis.38,42 Several other papers did not fit neatly into the defined submethods and were included in a general section covering topics such as methods to achieve transparency,62 selecting time horizons,63 and subgroup analysis.40
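Two of the reporting quantities named above can be sketched directly: the incremental cost-effectiveness ratio (ICER) and a point on the cost-effectiveness acceptability curve (CEAC) derived from probabilistic sensitivity analysis output. The incremental cost/QALY pairs below are invented for illustration.

```python
# Sketch of standard reporting calculations; the PSA pairs are invented.

def icer(d_cost, d_effect):
    """Incremental cost per unit of incremental effect."""
    return d_cost / d_effect

def ceac_point(psa_pairs, wtp):
    """Share of PSA draws with positive incremental net monetary benefit
    (wtp * dE - dC > 0) at willingness-to-pay `wtp` per QALY."""
    favourable = sum(1 for dc, de in psa_pairs if wtp * de - dc > 0)
    return favourable / len(psa_pairs)

# Hypothetical (incremental cost, incremental QALY) draws from a PSA.
pairs = [(4000, 0.30), (6000, 0.25), (5000, 0.10), (3000, 0.40)]
print(icer(4500, 0.2625))        # mean dC / mean dE, ≈ 17143 per QALY
print(ceac_point(pairs, 20000))  # fraction cost-effective at 20,000/QALY
```

Evaluating `ceac_point` over a grid of willingness-to-pay values traces out the full acceptability curve.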

As a result of the submethods literature above, each of the 148 steps involved in model development was expanded upon to take the form of an instructive or directive statement, along with explanatory notes, examples, and relevant references for each step. This submethod review thereby informed the content of the BG, which was then subjected to consensus agreement.

Phase III – consensus agreement

For the consensus agreement phase, purposive sampling was used to identify experts from across the UK. A total of 22 experts were originally contacted, of whom 18 responded. Of these, six experts contributed in round 1 (with the others either not replying at all or not replying in time to allow their responses to be incorporated) and 12 contributed in round 2. A total of six experts participated in both rounds.

For each of the 148 model development steps, participants were asked whether the step should be included, excluded, or reworded. Following round 1, 133 steps (90%) were included and four steps (3%) were excluded. This left ten steps to be discussed at round 2, in addition to four new steps proposed for discussion after feedback in round 1.

The final panel invited for round 2 comprised nine academics, one industry participant, and two from contract research organizations. Five were experts in health economic modeling, two in guideline development, and one each in health technology assessment, literature searching, utilities, uncertainty, and statistics.

A total of ten experts attended round 2, with their responses digitally recorded and transcribed with the attendees’ permission. The number of steps increased from 148 to 156 after the consensus phase. Please see Supplementary materials for the final list of 156 steps.

Phase IV – validation

Two researchers provided feedback on three main aspects of the guide: whether they understood each of the 156 steps, the usefulness of the steps (1 = not useful; 2 = useful; 3 = very useful), and whether they undertook the steps during that particular phase of model development. The results are summarized in Table 4.

Table 4 Validators’ feedback on understanding, usefulness, and timing of the steps in the beginner’s guide


Note: *Mean entries have been inserted for literature search, source, selection, and evidence grading steps, which were rated as done at that time point by both validators and useful by V1 and very useful by V2.


Abbreviations: V1, validator 1; V2, validator 2.

In summary, there seemed to be a good understanding of the content of the BG; it was useful for the model development process and most of the tasks were timed to coincide with the flow of the guide. In response to the feedback from Phase IV, a more detailed set of user instructions was compiled and links to the relevant literature resources were included in the BG.

In a second validation step, one of the authors (TR) redeveloped a model originally developed at the Centre for Health Economics, Technology Assessment Group at the University of York.64 Working independently of the BG, each step undertaken to rebuild the model to match the finished product was recorded. This recorded list was then cross-checked against the BG to determine whether each step was included in the BG and, where it was not, to explain why. Both the original and the redeveloped model were decision trees in Microsoft Excel (Microsoft Corporation, Redmond, WA, USA) that evaluated responders and nonresponders over a lifetime duration and included mortality. The model adopted the perspective of the National Health Service and Personal Social Services, and the model output was cost per quality-adjusted life year. Health effects were measured as quality-adjusted life years, and both costs and outcomes were discounted at 3.5%. Probabilistic and deterministic analyses were undertaken.
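The redeveloped model discounted both costs and outcomes at 3.5% per annum. That standard present-value calculation can be sketched as follows; the yearly cost and QALY streams are invented for illustration.

```python
# Standard discounting of a yearly stream at 3.5%, as used in the case
# study model. The example streams below are invented, not model data.

def discounted_total(values, rate=0.035):
    """Present value of a stream of yearly amounts; year 0 is undiscounted."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(values))

costs = [1000.0, 1000.0, 1000.0]  # cost incurred in each of three years
qalys = [0.8, 0.8, 0.8]           # utility-weighted life years per year
print(round(discounted_total(costs), 2))  # ≈ 2899.69
print(round(discounted_total(qalys), 4))  # ≈ 2.3198
```

Dividing incremental discounted costs by incremental discounted QALYs then yields the cost per quality-adjusted life year reported by the model.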

Limitations

The BG needs to be evaluated with respect to the limitations of the research. Firstly, the foundation for the BG is two literature reviews, one focusing on guidelines and one on submethods. The guideline review, which informed the draft list of steps undertaken during model development, was last updated in 2011. The steps were agreed during consensus development and verified during Phase IV. On this basis, an updated review of guidelines is not anticipated to change the steps in the BG substantially. However, the submethod reviews which informed the instructive component of each step will need to be updated annually, as empirical modeling methods are evolving within the discipline of health economics. The number of experts included in the consensus phase was constrained by geography and research funding; however, all experts participated voluntarily and none had any other involvement in the research. There is potential to evaluate the BG with a wider audience and for a range of model types. Moving forward, it would be valuable for more experienced modelers to also provide feedback on the BG. Since the paper-based version of the BG was developed, it has evolved into an interactive web tool, which may be used to collect user feedback. Areas of paucity in the BG reflect gaps in the current literature, for example, limited guidance on the selection of clinical efficacy and effectiveness data as model inputs.20,33,65

No BG of the type reported here can stipulate the “correct” methods to be used, as this is likely to differ over time (with methodological and computational advances) and because the choice of appropriate methods is affected by the context of the decision to be made. However, in using the BG, model developers should be confident that they have satisfactorily considered, suitably chosen, and can justify the submethods used for model development. Chilcott et al recognized a need for an aid to achieve best practice and considered this “a priority for future development.”8 The BG is a sound resource to fill that gap. Irrespective of these limitations, the current format of the BG will be of value to beginner modelers.

Discussion

The BG is intended to bridge the gap between theory and practical model development. It is intended as a complement to, rather than a replacement of, the ISPOR guidelines; this comparison is the subject of another paper. In summary, there are three main distinguishing features of the BG. Firstly, the ISPOR guidelines set out a set of recommendations, while the BG sets out a comprehensive list of steps to be considered and, if applicable, undertaken alongside model development. Secondly, the ISPOR guidelines are arranged according to model conceptualization, three specific modeling techniques, uncertainty, and validation, whereas the BG is arranged according to the five phases of model development. Thirdly, the BG takes a novel approach by integrating the concepts of uncertainty and validation into each step of model development, rather than treating them as discrete concepts to be addressed separately from, or upon completion of, model development. Arguably, if model developers are made aware that a certain step in the model-building process may contribute to greater or lesser uncertainty in the model structure or results, then they are better informed to make adequate judgments as to how to minimize the uncertainty introduced during this step, and also to consider the impact of potential uncertainty within and around the model results. Similarly, when using the BG, a modeler is made aware of the impact of each step on model validity and is therefore better able to maximize the validity of the model. For example, during model conceptualization the modeler is made aware of the importance of consulting clinical experts to verify the face validity of the model; in this way, the modeler is aware of how this step potentially influences the face validity of the model. Each step is also, where relevant, linked to bias and heterogeneity throughout.
For example, the choice of comparator influences the face validity of the model, and an incorrectly chosen comparator may introduce bias (eg, if a costly comparator is selected). The BG condenses relevant aspects of model development into a single, accessible resource that informs modelers about the methodology of model development while they are developing a model. Where it is able to, it provides direct guidance; otherwise, it lists relevant references that describe and discuss potential methods. It also includes ancillary resources, for example, a quick reference section on evidence selection and detailed information on types of uncertainty and validity and the methods used to address both.

The BG is potentially valuable for users of models, both by increasing the quality of what is produced and by highlighting any deficits in documentation. In this way, it increases the transparency of the model development process and alerts users to potential sources of bias. Disaggregating the process into its smallest steps makes it explicit and clear, so that its weaknesses can be better perceived.

Conclusion

A BG has been developed based on four research methods. It has demonstrated usability in the model development process. Research is ongoing; however, the BG has the potential to be used in the operationalization of best practice recommendations in modeling.

Acknowledgments

The authors thank Christopher McCabe for his supervision and mentorship; Judy Wright who assisted in literature searches; Jane Allen who provided telephonic guidance on the application of nominal group technique; Roberta Longo, Peter Hall, and Chantelle Browne who piloted the draft guide prior to consensus; Colin Green and Suzy Paisley who provided electronic feedback for round one of the consensus phase; Roberta Longo and Charlotte Kelly who independently validated the guide; and Claire McKenna and the CHE at York University for providing the case study model to be used for researcher validation and for Claire’s availability to assist with questions. The authors also thank the following participants of round 1 and round 2 of the consensus panel meeting: Pelham Barton, John Brazier, Alan Brennan, Elizabeth Fenwick, Adam Lloyd, Richard Nixon, Zoe Philips, Mark Sculpher, Luke Vale, and Evelina Zimovetz. The authors further thank all the authors of all the papers which have been reviewed as part of this research, in acknowledgment that this beginner’s guide develops on their work. This manuscript was prepared as part of a PhD research fellowship. The research described was undertaken as a self-funded PhD.

Disclosure

The manuscript was prepared as part of a postdoctoral research fellowship. The research described was undertaken as a self-funded PhD. The authors report no other conflicts of interest in this work.

References 

1.

Caro JJ, Briggs AH, Siebert U, Kuntz KM. Modeling good research practices – overview: a report of the ISPOR-SMDM modeling good research practices task force – 1. Value Health. 2012;15(6):796–803.

2.

Roberts M, Russell LB, Paltiel AD, Chambers M, McEwan P, Krahn M. Conceptualizing a model: a report of the ISPOR-SMDM modeling good research practices task force – 2. Value Health. 2012;15(6):804–811.

3.

Karnon J, Stahl J, Brennan A, Caro JJ, Mar J, Moller J. Modeling using discrete event simulation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force – 4. Value Health. 2012;15(6):821–827.

4.

Siebert U, Alagoz O, Bayoumi AM, Jahn B, Owens DK, Cohen DJ, Kuntz KM. State-transition modeling: a report of the ISPOR-SMDM modeling good research practices task force – 3. Value Health. 2012;15(6):812–820.

5.

Pitman R, Fisman D, Zaric GS, Postma M, Kretzschmar M, Edmunds J, Brisson M. Dynamic transmission modeling: a report of the ISPOR-SMDM modeling good research practices task force – 5. Value Health. 2012;15(6):828–834.

6.

Briggs AH, Weinstein MC, Fenwick EA, Karnon J, Sculpher MJ, Paltiel AD. Model parameter estimation and uncertainty: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force – 6. Value Health. 2012;15(6):835–842.

7.

Eddy DM, Hollingworth W, Caro JJ, Tsevat J, McDonald KM, Wong JB. Model transparency and validation: a report of the ISPOR-SMDM modeling good research practices task force – 7. Value Health. 2012;15(6):843–850.

8.

Chilcott J, Tappenden P, Rawdin A, et al. Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review. Health Technol Assess. 2010;14(25):iii–xii, 1–107.

9.

Barton P, Bryan S, Robinson S. Modelling in the economic evaluation of health care: selecting the appropriate approach. J.Health Serv Res Policy. 2004;9(2):110–118.

10.

Brennan A, Chick SE, Davies R. A taxonomy of model structures for economic evaluation of health technologies. Health Econ. 2006;15(12):1295–1310.

11. Murphy MK, Black NA, Lamping DL, McKee CM, Sanderson CF, Askham J, Marteau T. Consensus development methods, and their use in clinical guideline development. Health Technol Assess. 1998;2(3):i–iv, 1–88.

12. Sargent R. Verification and validation of simulation models. In: Proceedings of the 2010 Winter Simulation Conference; December 5–8, 2010; Baltimore, MD, USA. Available from: http://student.telum.ru/images/6/66/Sargent_VV_2010.pdf. Accessed January 15, 2011.

13. Stahl JE. Modelling methods for pharmacoeconomics and health technology assessment: an overview and guide. Pharmacoeconomics. 2008;26(2):131–148.

14. Sonnenberg FA, Roberts MS, Tsevat J, Wong JB, Barry M, Kent DL. Toward a peer review process for medical decision analysis models. Med Care. 1994;32(7 Suppl):JS52–JS64.

15. Briggs AH. Handling uncertainty in cost-effectiveness models. Pharmacoeconomics. 2000;17(5):479–500.

16. Booth A. How much searching is enough? Comprehensive versus optimal retrieval for technology assessments. Int J Technol Assess Health Care. 2010;26(4):431–435.

17. Paisley S. Classification of evidence in decision-analytic models of cost-effectiveness: a content analysis of published reports. Int J Technol Assess Health Care. 2010;26(4):458–462.

18. Braithwaite RS, Roberts MS, Justice AC. Incorporating quality of evidence into decision analytic modeling. Ann Intern Med. 2007;146(2):133–141.

19. Nuijten MJ. The selection of data sources for use in modelling studies. Pharmacoeconomics. 1998;13(3):305–316.

20. Cooper NJ, Sutton AJ, Ades AE, Paisley S, Jones DR. Use of evidence in economic decision models: practical issues and methodological challenges. Health Econ. 2007;16(12):1277–1286.

21. Cooper K, Brailsford SC, Davies R. Choice of modelling technique for evaluating health care interventions. J Oper Res Soc. 2007;58(2):168–176.

22. Miners A. Estimating ‘costs’ for cost-effectiveness analysis. Pharmacoeconomics. 2008;26(9):745–751.

23. Hay JW, Smeeding J, Carroll NV, et al. Good research practices for measuring drug costs in cost effectiveness analyses: issues and recommendations: the ISPOR Drug Cost Task Force report – Part I. Value Health. 2010;13(1):3–7.

24. Shi L, Hodges M, Drummond M, et al. Good research practices for measuring drug costs in cost-effectiveness analyses: an international perspective: the ISPOR Drug Cost Task Force report – Part VI. Value Health. 2010;13(1):28–33.

25. Brouwer WB, Niessen LW, Postma MJ, Rutten FF. Need for differential discounting of costs and health effects in cost effectiveness analyses. BMJ. 2005;331(7514):446–448.

26. Claxton K, Sculpher M, Culyer A, et al. Discounting and cost-effectiveness in NICE – stepping back to sort out a confusion. Health Econ. 2006;15(1):1–4.

27. Claxton K, Paulden M, Gravelle H, Brouwer W, Culyer AJ. Discounting and decision making in the economic evaluation of health-care technologies. Health Econ. 2011;20(1):2–15.

28. Gravelle H, Brouwer W, Niessen L, Postma M, Rutten F. Discounting in economic evaluations: stepping forward towards optimal decision rules. Health Econ. 2007;16(3):307–317.

29. Nord E. Discounting future health benefits: the poverty of consistency arguments. Health Econ. 2011;20(1):16–26.

30. Gray AM, Clarke PM, Wolstenholme J, Wordsworth S. Applied Methods of Cost-effectiveness Analysis in Healthcare. Oxford: Oxford University Press; 2010.

31. Brazier J. Valuing health states for use in cost-effectiveness analysis. Pharmacoeconomics. 2008;26(9):769–779.

32. Neumann PJ, Stone PW, Chapman RH, Sandberg EA, Bell CM. The quality of reporting in published cost-utility analyses, 1976–1997. Ann Intern Med. 2000;132(12):964–972.

33. Craig D, McDaid C, Fonseca T, Stock C, Duffy S, Woolacott N. Are adverse effects incorporated in economic models? An initial review of current practice. Health Technol Assess. 2009;13(62):iii, 1–71, 97–181.

34. McDonough CM, Tosteson AN. Measuring preferences for cost-utility analysis: how choice of method may influence decision-making. Pharmacoeconomics. 2007;25(2):93–106.

35. Gold MR, Stevenson D, Fryback DG. HALYs and QALYs and DALYs, oh my: similarities and differences in summary measures of population health. Annu Rev Public Health. 2002;23:115–134.

36. Briggs AH, Gray AM. Handling uncertainty when performing economic evaluation of healthcare interventions. Health Technol Assess. 1999;3(2):1–134.

37. Briggs AH, Gray AM. Handling uncertainty in economic evaluations of healthcare interventions. BMJ. 1999;319(7210):635–638.

38. Claxton K. Exploring uncertainty in cost-effectiveness analysis. Pharmacoeconomics. 2008;26(9):781–798.

39. Groot W, van den Brink HM. The value of health. BMC Health Serv Res. 2008;8:136.

40. Sculpher M. Subgroups and heterogeneity in cost-effectiveness analysis. Pharmacoeconomics. 2008;26(9):799–806.

41. Brisson M, Edmunds WJ. Impact of model, methodological, and parameter uncertainty in the economic analysis of vaccination programs. Med Decis Making. 2006;26(5):434–446.

42. Groot Koerkamp B, Weinstein MC, Stijnen T, Heijenbrok-Kal MH, Hunink MG. Uncertainty and patient heterogeneity in medical decision models. Med Decis Making. 2010;30(2):194–205.

43. Andronis L, Barton P, Bryan S. Sensitivity analysis in economic evaluation: an audit of NICE current practice and a review of its use and value in decision-making. Health Technol Assess. 2009;13(29):iii, ix–xi, 1–61.

44. Limwattananon S. Handling uncertainty of the economic evaluation result: sensitivity analysis. J Med Assoc Thai. 2008;91(Suppl 2):S59–S65.

45. Bojke L, Claxton K, Palmer S, Sculpher M. Defining and characterising structural uncertainty in decision analytic models. 2006. Available from: http://www.york.ac.uk/che/pdf/rp9.pdf. Accessed August 8, 2011.

46. Strong M, Oakley JE, Chilcott J. Managing structural uncertainty in health economic decision models: a discrepancy approach. J R Stat Soc Ser C Appl Stat. 2012;61:25–45.

47. Schlesinger S. Terminology for model credibility. Simulation. 1979;32:103.

48. McCabe C, Dixon S. Testing the validity of cost-effectiveness models. Pharmacoeconomics. 2000;17:501–513.

49. Kim LG, Thompson SG. Uncertainty and validation of health economic decision models. Health Econ. 2010;19(1):43–55.

50. Sendi PP, Craig BA, Pfluger D, Gafni A, Bucher HC. Systematic validation of disease models for pharmacoeconomic evaluations. Swiss HIV Cohort Study. J Eval Clin Pract. 1999;5(3):283–295.

51. Halpern MT, Luce BR, Brown RE, Geneste B. Health and economic outcomes modeling practices: a suggested framework. Value Health. 1998;1:131–147.

52. Weinstein MC, O’Brien B, Hornberger J, Jackson J, Johannesson M, McCabe C, Luce BR. Principles of good practice for decision analytic modeling in health-care evaluation: report of the ISPOR Task Force on Good Research Practices – Modeling Studies. Value Health. 2003;6(1):9–17.

53. Barton GR, Briggs AH, Fenwick EA. Optimal cost-effectiveness decisions: the role of the cost-effectiveness acceptability curve (CEAC), the cost-effectiveness acceptability frontier (CEAF), and the expected value of perfection information (EVPI). Value Health. 2008;11(5):886–897.

54. Fenwick E, Briggs A. Cost-effectiveness acceptability curves in the dock: case not proven? Med Decis Making. 2007;27(2):93–95.

55. Fenwick E, Claxton K, Sculpher M. Representing uncertainty: the role of cost-effectiveness acceptability curves. Health Econ. 2001;10(8):779–787.

56. Fenwick E, O’Brien BJ, Briggs A. Cost-effectiveness acceptability curves – facts, fallacies and frequently asked questions. Health Econ. 2004;13(5):405–415.

57. Groot Koerkamp B, Hunink MG, Stijnen T, Hammitt JK, Kuntz KM, Weinstein MC. Limitations of acceptability curves for presenting uncertainty in cost-effectiveness analysis. Med Decis Making. 2007;27(2):101–111.

58. Briggs AH, Wonderling DE, Mooney CZ. Pulling cost-effectiveness analysis up by its bootstraps: a non-parametric approach to confidence interval estimation. Health Econ. 1997;6(4):327–340.

59. Fan M-Y, Zhou X-H. A simulation study to compare methods for constructing confidence intervals for the incremental cost-effectiveness ratio. Health Serv Outcomes Res Method. 2007;7(1–2):57–77.

60. Dinh P, Zhou XH. Nonparametric statistical methods for cost-effectiveness analyses. Biometrics. 2006;62(2):576–588.

61. Polsky D, Glick HA, Willke R, Schulman K. Confidence intervals for cost-effectiveness ratios: a comparison of four methods. Health Econ. 1997;6(3):243–252.

62. Eddy DM. Accuracy versus transparency in pharmacoeconomic modelling: finding the right balance. Pharmacoeconomics. 2006;24(9):837–844.

63. McCabe C. Guidance on good practice in cost-effectiveness modeling: is more needed? Med Decis Making. 2007;27(4):350–351.

64. McKenna C, McDaid C, Suekarran S, et al. Enhanced external counterpulsation for the treatment of stable angina and heart failure: a systematic review and economic analysis. Health Technol Assess. 2009;13(24):iii–iv, ix–xi, 1–90.

65. Saramago P, Manca A, Sutton AJ. Deriving input parameters for cost-effectiveness modeling: taxonomy of data types and approaches to their statistical synthesis. Value Health. 2012;15(5):639–649.

66. Tappenden P. Conceptual modelling for health economic model development. HEDS Discussion Paper no. 12.05. Available from: http://www.shef.ac.uk/polopoly_fs/1.172572!/file/HEDSDP1205.pdf. Accessed September 9, 2016.

67. Strong M. Managing Structural Uncertainty in Health Economic Decision Models [PhD thesis]. Available from: http://etheses.whiterose.ac.uk/2205/. Accessed June 26, 2012.

68. Weinstein MC, Toy EL, Sandberg EA, et al. Modeling for health care and other policy decisions: uses, roles, and validity. Value Health. 2001;4(5):348–361.

69. Black WC. The CE plane: a graphic representation of cost-effectiveness. Med Decis Making. 1990;10(3):212–214.

Creative Commons License © 2016 The Author(s). This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution - Non Commercial (unported, v3.0) License. By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms.