Objectives: To identify existing guidelines and develop a synthesised guideline plus accompanying checklist; in addition, to provide guidance on key theoretical, methodological and practical issues, and to consider the implications of this research for what might be expected of future decision-analytic models.
Data sources: Electronic databases.
Review methods: A systematic review of existing good practice guidelines was undertaken to identify and summarise guidelines currently available for assessing the quality of decision-analytic models undertaken for health technology assessment. A synthesised good practice guideline and accompanying checklist were developed. Two specific methods areas in decision modelling were considered. The first topic is the identification of parameter estimates from published literature; parameter searches were developed and piloted using a case-study model. The second topic relates to bias in parameter estimates; that is, how to adjust estimates of treatment effect from observational studies where there are risks of selection bias. A systematic literature review was conducted to identify studies looking at the quantification of bias in parameter estimates and the implications of this bias.
Results: Fifteen studies met the inclusion criteria; these were reviewed and consolidated into a single set of brief statements of good practice. From this, a checklist was developed and applied to three independent decision-analytic models. Although the checklist provided excellent guidance on some key issues for model evaluation, it was too general to pick up the specific nuances of each model. The searches that were developed helped to identify important data for inclusion in the model. However, the quality-of-life searches proved problematic: the published search filters did not focus on those measures specific to cost-effectiveness analysis, and although the strategies developed as part of this project were more successful, few data were found. Of the 11 studies meeting the criteria on the effect of selection bias, five concluded that a non-randomised trial design is associated with bias, and six found 'similar' estimates of treatment effect from observational studies or non-randomised clinical trials and from randomised controlled trials (RCTs). One purpose of developing the synthesised guideline and checklist was to provide a framework for critical appraisal by the various parties involved in the health technology assessment process. First, the guideline and checklist can be used by groups reviewing other analysts' models; secondly, they can be used by analysts as they develop their models, as a check on how they are developing and reporting their analyses. The Expert Advisory Group (EAG) convened to discuss the potential role of the guidance and checklist felt that, in general, they would be a useful tool, although the checklist is not meant to be used exclusively to determine a model's quality and so should not be used as a substitute for critical appraisal.
Conclusions: The review of current guidelines showed that, although authors provide a consistent message on some aspects of modelling, different guidelines present conflicting recommendations in other areas. In general, the checklist appears to perform well in identifying those aspects of a model that should be of particular concern to the reader. The checklist cannot, however, answer questions about the appropriateness of the model structure and structural assumptions; this is a general limitation of generic checklists rather than a shortcoming of the synthesised guidance and checklist developed here. The assessment of the checklist, together with feedback from the EAG, indicated the importance of using it in conjunction with a more general checklist or guidelines on economic evaluation. Further methods research into the following areas would be valuable: the quantification of selection bias in non-controlled studies and in controlled observational studies; the level of bias in different non-RCT study designs; a comparison of results from RCTs with those from other non-randomised studies; assessment of the strengths and weaknesses of alternative ways to adjust for bias in a decision model; and how to prioritise searching for parameter estimates.