Despite the growing use of decision analytic modelling in cost-effectiveness analysis, there is a relatively small literature on what constitutes good practice in decision analysis. The aim of this paper is to consider the concepts of 'validity' and 'quality' in this area of evaluation, and to suggest a framework by which quality can be demonstrated by the analyst and assessed by the reviewer and user. The paper begins by considering the purpose of cost-effectiveness models and argues that their role is to identify optimal treatment decisions in the context of uncertainty about future states of the world. The issue of whether such models can be defined as 'scientific' is considered. The notion that decision analysis undertaken at time t can only be considered scientific if its outputs closely predict the results of a trial undertaken at time t + 1 is rejected, as this ignores the need to make decisions on the basis of currently available evidence. Rather, the scientific character of decision models rests on the fact that, in principle at least, such analyses can be falsified by comparing two states of the world: one where resource allocation decisions are based on formal decision analysis and another where such decisions are not. This section of the paper also rejects the idea of exact codification of scientific method in general, and of decision analysis in particular, as this risks rejecting potentially valuable models, may discourage the development of novel methods and can distort research priorities. However, the paper argues that it is both possible and necessary to develop a framework for assessing quality in decision models. Building on earlier work, various dimensions of quality in decision modelling are considered: model structure (disease states, options, time horizon and cycle length); data (identification, incorporation, handling uncertainty); and consistency (internal and external).
Within this taxonomy, a (non-exhaustive) list of quality-related questions is suggested and illustrated by application to a specific published model. The paper argues that such a framework can never be prescriptive about every aspect of decision modelling. Rather, it should encourage the analyst to provide an explicit and comprehensive justification of their methods, and allow the user of the model to make an informed judgment about the relevance, coherence and usefulness of the analysis.
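To make the model-structure dimension above (disease states, time horizon, cycle length) concrete, the following is a minimal sketch of a Markov cohort model of the kind typically appraised under such a framework. All states, transition probabilities, costs, utilities and the discount rate are hypothetical, chosen purely for illustration; they do not come from the paper or any published model.

```python
import numpy as np

# Illustrative three-state Markov cohort model (Well, Sick, Dead).
# Annual transition probability matrix: rows = from-state, cols = to-state.
P = np.array([
    [0.85, 0.10, 0.05],   # Well -> Well, Sick, Dead
    [0.00, 0.70, 0.30],   # Sick -> Well, Sick, Dead
    [0.00, 0.00, 1.00],   # Dead is absorbing
])

annual_cost = np.array([500.0, 3000.0, 0.0])   # hypothetical cost per person-year
utility     = np.array([0.90, 0.60, 0.0])      # hypothetical quality-of-life weights

def run_cohort(P, cost, util, horizon=20, discount=0.035):
    """Trace a cohort through annual cycles; return discounted costs and QALYs."""
    state = np.array([1.0, 0.0, 0.0])   # whole cohort starts in Well
    total_cost = total_qaly = 0.0
    for cycle in range(horizon):
        d = (1 + discount) ** -cycle    # discount factor for this cycle
        total_cost += d * (state @ cost)
        total_qaly += d * (state @ util)
        state = state @ P               # advance the cohort one cycle
    return total_cost, total_qaly

cost, qaly = run_cohort(P, annual_cost, utility)
print(f"Discounted cost: {cost:.0f}, discounted QALYs: {qaly:.2f}")
```

Quality questions of the sort the paper proposes map directly onto choices visible here: whether the three states capture the disease, whether a 20-year horizon and annual cycle are justified, and how uncertainty in the transition and cost parameters is handled (e.g. by replacing the point estimates with sampled distributions in a probabilistic analysis).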