Automated Classification of Circulating Tumor Cells and the Impact of Interobserver Variability on Classifier Training and Performance

J Immunol Res. 2015;2015:573165. doi: 10.1155/2015/573165. Epub 2015 Oct 4.


Application of personalized medicine requires the integration of diverse data to determine each patient's unique clinical constitution. The automated analysis of medical data is a growing field in which machine learning techniques are used to minimize the time-consuming task of manual analysis. The evaluation, and often the training, of automated classifiers requires manually labelled data as ground truth. In many cases such labelling is not perfect, either because the data are ambiguous even for a trained expert or because of mistakes. Here we investigated the interobserver variability of image data comprising fluorescently stained circulating tumor cells and its effect on the performance of two automated classifiers, a random forest and a support vector machine. We found that uncertainty in annotation between observers limited the performance of the automated classifiers, especially when it was present in the test set on which performance was measured. The random forest proved resilient to uncertainty in the training data, whereas the support vector machine's performance depended strongly on the amount of uncertainty in the training data. Finally, we introduced the consensus data set as a possible solution for evaluating automated classifiers that minimizes the penalty of interobserver variability.
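The core experiment described above, training a random forest and a support vector machine on imperfectly labelled data and comparing their robustness, can be sketched as follows. This is an illustrative toy reconstruction, not the study's actual pipeline: the synthetic features stand in for the fluorescence-microscopy-derived CTC features, and label flipping stands in for interobserver disagreement. All function names and parameters here are standard scikit-learn API; the noise rates are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for CTC image features (the real study used
# features from fluorescently stained cell images).
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

def flip_labels(labels, rate, rng):
    """Simulate interobserver disagreement by flipping a fraction of labels."""
    noisy = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    noisy[idx] = 1 - noisy[idx]
    return noisy

for rate in (0.0, 0.1, 0.2):
    y_noisy = flip_labels(y_train, rate, rng)
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_noisy)
    svm = SVC(kernel="rbf").fit(X_train, y_noisy)
    # Evaluate both classifiers against the clean ("consensus") test labels.
    print(f"noise={rate:.1f}  RF acc={rf.score(X_test, y_test):.3f}  "
          f"SVM acc={svm.score(X_test, y_test):.3f}")
```

Evaluating against clean test labels mirrors the paper's consensus-set idea: performance is measured only on examples whose labels are not themselves in dispute, so the score reflects the classifier rather than the annotators.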

MeSH terms

  • Algorithms
  • Antigens, Surface / metabolism
  • Bayes Theorem
  • Biomarkers
  • Humans
  • Microscopy, Fluorescence / methods
  • Microscopy, Fluorescence / standards
  • Neoplasms / diagnosis*
  • Neoplasms / metabolism
  • Neoplastic Cells, Circulating / metabolism
  • Neoplastic Cells, Circulating / pathology*
  • Observer Variation
  • Reproducibility of Results
  • Support Vector Machine*


Substances

  • Antigens, Surface
  • Biomarkers