Self-Trained LMT for Semisupervised Learning

Comput Intell Neurosci. 2016;2016:3057481. doi: 10.1155/2016/3057481. Epub 2015 Dec 29.

Abstract

The most important asset of semisupervised classification methods is their use of available unlabeled data together with a considerably smaller set of labeled examples, so as to increase classification accuracy compared with standard supervised methods, which use only the labeled data during the training phase. Both the absence of automated mechanisms that produce labeled data and the high cost of the human effort required for labeling in several scientific domains raise the need for semisupervised methods that counterbalance this phenomenon. In this work, a self-trained Logistic Model Trees (LMT) algorithm is presented, which exploits the characteristics of logistic model trees in settings where labeled data are scarce. We performed an in-depth comparison with other well-known semisupervised classification methods on standard benchmark datasets and found that the presented technique achieved better accuracy in most cases.
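The self-training wrapper underlying this approach is straightforward to sketch: train the base learner on the labeled pool, pseudo-label the unlabeled examples it predicts most confidently, add them to the labeled pool, and repeat. The snippet below is a minimal illustration of that loop using scikit-learn's LogisticRegression as a stand-in base learner (the paper's LMT base classifier is a Weka algorithm and is not reproduced here); the dataset, the 0.95 confidence threshold, and the ten-round cap are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

# Hide most labels to simulate a semisupervised setting (-1 = unlabeled).
hidden = rng.random(len(y)) < 0.9
y_semi = y.copy()
y_semi[hidden] = -1

clf = LogisticRegression(max_iter=1000)   # stand-in for the LMT base learner
threshold = 0.95                          # assumed confidence threshold

for _ in range(10):                       # a few self-training rounds
    labeled = y_semi != -1
    if labeled.all():
        break
    clf.fit(X[labeled], y_semi[labeled])  # train on the current labeled pool

    proba = clf.predict_proba(X[~labeled])
    confident = proba.max(axis=1) >= threshold
    if not confident.any():
        break                             # no confident predictions left

    # Move the most confident unlabeled examples into the labeled pool
    # with their pseudo-labels.
    idx = np.flatnonzero(~labeled)[confident]
    y_semi[idx] = clf.classes_[proba[confident].argmax(axis=1)]

print("labeled after self-training:", int((y_semi != -1).sum()), "of", len(y))
```

In each round the classifier is refit on the enlarged labeled pool, so pseudo-labels accepted early influence later rounds; the confidence threshold controls how aggressively unlabeled data are absorbed.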

MeSH terms

  • Algorithms*
  • Benchmarking / statistics & numerical data
  • Humans
  • Learning / physiology*
  • Logistic Models*
  • Self-Control*
  • Supervised Machine Learning*