Analysis of the IJCNN 2007 agnostic learning vs. prior knowledge challenge

Neural Netw. 2008 Mar-Apr;21(2-3):544-50. doi: 10.1016/j.neunet.2007.12.024. Epub 2007 Dec 27.

Abstract

We organized a challenge for IJCNN 2007 to assess the added value of prior domain knowledge in machine learning. Most commercial data mining programs accept data pre-formatted as a table, with each example encoded as a linear feature vector. Is it worth spending time incorporating domain knowledge in feature construction or algorithm design, or can off-the-shelf programs working directly on simple low-level features do better than skilled data analysts? To answer these questions, we formatted five datasets using two data representations. The participants in the "prior knowledge" track used the raw data, with full knowledge of the meaning of the data representation. Conversely, the participants in the "agnostic learning" track used a pre-formatted data table, with no knowledge of the identity of the features. The results indicate that black-box methods using relatively unsophisticated features work quite well and rapidly approach the best attainable performance. The winners on the prior knowledge track used feature extraction strategies yielding a large number of low-level features. Incorporating prior knowledge in the form of generic coding/smoothing methods to exploit regularities in data is beneficial, but incorporating actual domain knowledge in feature construction is very time consuming and seldom leads to significant improvements. The agnostic learning (AL) vs. prior knowledge (PK) challenge web site remains open for post-challenge submissions: http://www.agnostic.inf.ethz.ch/.
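To make the agnostic-learning setup concrete: participants in that track received only an anonymized numeric feature table, so any black-box learner that consumes feature vectors applies directly. The sketch below is illustrative only (it is not the challenge's actual pipeline or data); a minimal nearest-centroid classifier stands in for an off-the-shelf method, and the dataset is synthetic.

```python
# Illustrative sketch, NOT the challenge's pipeline: in the "agnostic
# learning" track, features have no names or domain meaning, so a black-box
# learner sees only a numeric table X and labels y.
import numpy as np

def fit_centroids(X, y):
    """Compute one centroid per class label from the feature table X."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, X):
    """Assign each row of X to the class with the nearest centroid."""
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[l], axis=1) for l in labels])
    return np.array([labels[i] for i in dists.argmin(axis=0)])

# Synthetic stand-in for an anonymized dataset: two Gaussian clusters in a
# 10-dimensional feature space, with no semantics attached to any column.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1.0, 1.0, size=(50, 10)),
               rng.normal(-1.0, 1.0, size=(50, 10))])
y = np.array([1] * 50 + [0] * 50)

centroids = fit_centroids(X, y)
accuracy = (predict(centroids, X) == y).mean()
```

The point of the sketch is that nothing in it depends on knowing what the features mean, which is exactly the constraint the agnostic track imposed.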

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Artificial Intelligence*
  • Computational Biology
  • Humans
  • Information Storage and Retrieval
  • Knowledge*
  • Learning / physiology*
  • Natural Language Processing
  • Pattern Recognition, Automated
  • ROC Curve