Assigning Factuality Values to Semantic Relations Extracted From Biomedical Research Literature

PLoS One. 2017 Jul 5;12(7):e0179926. doi: 10.1371/journal.pone.0179926. eCollection 2017.


Abstract

Biomedical knowledge claims are often expressed as hypotheses, speculations, or opinions rather than explicit facts (propositions). Much biomedical text mining has focused on extracting propositions from biomedical literature. One such system is SemRep, which extracts propositional content in the form of subject-predicate-object triples called predications. In this study, we investigated the feasibility of assessing the factuality level of SemRep predications to provide more nuanced distinctions between predications for downstream applications. We annotated semantic predications extracted from 500 PubMed abstracts with seven factuality values (fact, probable, possible, doubtful, counterfact, uncommitted, and conditional). We extended a rule-based, compositional approach that uses lexical and syntactic information to predict factuality levels. We compared this approach to a supervised machine learning method that uses a rich feature set based on the annotated corpus. Our results indicate that the compositional approach is more effective than the machine learning method in recognizing the factuality values of predications. The annotated corpus as well as the source code and binaries for factuality assignment are publicly available. We will also incorporate the results of the better-performing compositional approach into SemMedDB, a PubMed-scale repository of semantic predications extracted using SemRep.
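To make the core idea concrete, the sketch below shows a minimal toy version of rule-based factuality assignment: a predication (subject-predicate-object triple) is paired with its source sentence, and simple lexical cues map the sentence to one of the seven factuality values. This is a hypothetical illustration only; the cue list, class names, and matching logic are assumptions for exposition and do not reflect SemRep's actual rules or feature set.

```python
# Toy illustration of assigning factuality values to predications.
# Cue lexicon and logic are hypothetical, not SemRep's implementation.
from dataclasses import dataclass

# The seven factuality values used in the annotated corpus.
FACTUALITY_VALUES = ["fact", "probable", "possible", "doubtful",
                     "counterfact", "uncommitted", "conditional"]

@dataclass
class Predication:
    subject: str
    predicate: str
    object: str
    sentence: str  # source sentence from which the triple was extracted

# Illustrative lexical cues mapped to factuality values (assumed list).
CUES = {
    "may": "possible",
    "might": "possible",
    "likely": "probable",
    "suggests": "probable",
    "unlikely": "doubtful",
    "not": "counterfact",
    "if": "conditional",
    "whether": "uncommitted",
}

def assign_factuality(pred: Predication) -> str:
    """Return a factuality value based on lexical cues in the sentence.

    Defaults to 'fact' when no hedging or negation cue is present."""
    for token in pred.sentence.lower().split():
        if token in CUES:
            return CUES[token]
    return "fact"

p = Predication("Aspirin", "TREATS", "Headache",
                "Aspirin may relieve headache in some patients.")
print(assign_factuality(p))  # -> possible
```

A real compositional approach would additionally use syntactic structure (e.g., whether a cue scopes over the predication in the parse tree) rather than flat token matching, which is what makes it competitive with feature-based machine learning on this task.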

MeSH terms

  • Biomedical Research*
  • Data Mining*
  • Humans
  • Machine Learning
  • Natural Language Processing
  • Publications
  • Semantics

Grant support

This work was supported by the intramural research program at the U.S. National Library of Medicine, National Institutes of Health.