When machine learning algorithms are applied to data collected during the course of clinical care, it is generally accepted that the data have not been collected consistently. The absence of expected data elements is common, and the mechanism by which a data element comes to be missing often reflects the clinical relevance of that element for a specific patient. The absence of data may therefore carry information of its own. In the process of designing an application intended to support a medical problem list, we studied whether the "missingness" of clinical data can provide useful information for building prediction models. In this study, we experimented with four methods of treating missing values in a clinical data set; two of them explicitly model the absence, or "missingness," of data. Each of the resulting data sets was used to build four kinds of Bayesian classifiers: a naive Bayes structure, a human-composed network structure, and two networks based on structural learning algorithms. We compared performance between the groups with and without explicit models of missingness using the area under the ROC curve. In most cases, the classifiers trained with the explicit missing-value treatments performed better, suggesting that information may exist in "missingness" itself. Thus, when designing a decision support system, we suggest explicitly representing the presence or absence of data in the underlying logic.
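The core idea, that an explicit missingness indicator can improve a Bayesian classifier when data are not missing at random, can be illustrated with a small sketch. The code below is not the study's data or method; it uses a hypothetical generative model in which a lab test is ordered more often for positive cases, then compares a naive Bayes classifier that simply imputes missing values against one that adds a presence/absence feature, scoring both by AUC.

```python
import math
import random

def simulate(n, seed=0):
    # Hypothetical generative model (not the paper's data set):
    # the test is ordered more often when the condition is present,
    # so the absence of a result is itself informative.
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        y = rng.random() < 0.5                          # condition present?
        ordered = rng.random() < (0.8 if y else 0.3)    # test ordered?
        value = (rng.random() < (0.7 if y else 0.4)) if ordered else None
        rows.append((y, ordered, value))
    return rows

def nb_scores(train, test, featurize):
    # Bernoulli naive Bayes with Laplace smoothing over binary features;
    # returns (log-odds score, true label) pairs for the test rows.
    X = [featurize(r) for r in train]
    y = [r[0] for r in train]
    k = len(X[0])
    counts = {c: [0] * k for c in (False, True)}  # feature=1 counts per class
    n_c = {False: 0, True: 0}
    for xi, yi in zip(X, y):
        n_c[yi] += 1
        for j, v in enumerate(xi):
            counts[yi][j] += v
    def score(x):
        s = math.log((n_c[True] + 1) / (n_c[False] + 1))
        for j, v in enumerate(x):
            for c, sign in ((True, 1), (False, -1)):
                p = (counts[c][j] + 1) / (n_c[c] + 2)
                s += sign * math.log(p if v else 1 - p)
        return s
    return [(score(featurize(r)), r[0]) for r in test]

def auc(scored):
    # Rank-based AUC: probability a positive case outranks a negative one.
    pos = [s for s, y in scored if y]
    neg = [s for s, y in scored if not y]
    wins = sum(1.0 if p > q else (0.5 if p == q else 0.0)
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

train = simulate(2000, seed=1)
test = simulate(2000, seed=2)

# Treatment A: impute a missing result as a normal (negative) finding.
impute_only = lambda r: [1 if r[2] else 0]
# Treatment B: also represent presence/absence of the result explicitly.
with_missing = lambda r: [1 if r[1] else 0, 1 if r[2] else 0]

auc_a = auc(nb_scores(train, test, impute_only))
auc_b = auc(nb_scores(train, test, with_missing))
print(f"AUC, imputation only:           {auc_a:.3f}")
print(f"AUC, explicit missingness:      {auc_b:.3f}")
```

Under this generative model the indicator feature recovers the signal that imputation erases (a missing test and a negative result become indistinguishable after imputation), so the second classifier's AUC is higher; when data are missing completely at random, the two treatments would be expected to perform similarly.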