Bioinformatics. 2018 Dec 1;34(23):4087-4094. doi: 10.1093/bioinformatics/bty449.

Transfer learning for biomedical named entity recognition with neural networks

John M Giorgi et al.

Abstract

Motivation: The explosive increase of biomedical literature has made information extraction an increasingly important tool for biomedical research. A fundamental task is the recognition of biomedical named entities in text (BNER) such as genes/proteins, diseases and species. Recently, a domain-independent method based on deep learning and statistical word embeddings, called long short-term memory network-conditional random field (LSTM-CRF), has been shown to outperform state-of-the-art entity-specific BNER tools. However, this method is dependent on gold-standard corpora (GSCs) consisting of hand-labeled entities, which tend to be small but highly reliable. An alternative to GSCs are silver-standard corpora (SSCs), which are generated by harmonizing the annotations made by several automatic annotation systems. SSCs typically contain more noise than GSCs but have the advantage of containing many more training examples. Ideally, these corpora could be combined to achieve the benefits of both, which is an opportunity for transfer learning. In this work, we analyze to what extent transfer learning improves upon state-of-the-art results for BNER.
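The recognition task described above is commonly cast as sequence labeling with BIO tags (B- begins an entity, I- continues it, O is outside). The sketch below illustrates the task; the example sentence, labels and the `extract_entities` helper are invented for illustration and are not the authors' code.

```python
# Tokens of a biomedical sentence paired with BIO labels (invented example).
tokens = ["Mutations", "in", "BRCA1", "cause", "breast", "cancer", "."]
labels = ["O", "O", "B-Gene", "O", "B-Disease", "I-Disease", "O"]

def extract_entities(tokens, labels):
    """Collect (entity_text, entity_class) spans from a BIO-labeled sequence."""
    entities, current, cls = [], [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            if current:
                entities.append((" ".join(current), cls))
            current, cls = [tok], lab[2:]
        elif lab.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                entities.append((" ".join(current), cls))
            current, cls = [], None
    if current:
        entities.append((" ".join(current), cls))
    return entities

print(extract_entities(tokens, labels))
# [('BRCA1', 'Gene'), ('breast cancer', 'Disease')]
```

A model such as the LSTM-CRF predicts the label sequence; the CRF layer scores whole label sequences so that invalid transitions (e.g. O followed by I-Gene) are penalized.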

Results: We demonstrate that transferring a deep neural network (DNN) trained on a large, noisy SSC to a smaller, but more reliable GSC significantly improves upon state-of-the-art results for BNER. Compared to a state-of-the-art baseline evaluated on 23 GSCs covering four different entity classes, transfer learning results in an average reduction in error of approximately 11%. We found transfer learning to be especially beneficial for target datasets with a small number of labels (approximately 6000 or less).
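The "reduction in error" reported above can be computed from F1-scores by treating 1 − F1 as the error. A minimal sketch (the scores below are hypothetical, not taken from the paper):

```python
def relative_error_reduction(f1_baseline, f1_transfer):
    """Relative reduction in error (error = 1 - F1) from baseline to transfer learning."""
    err_base = 1.0 - f1_baseline
    err_tl = 1.0 - f1_transfer
    return (err_base - err_tl) / err_base

# Hypothetical scores: an F1 gain from 0.80 to 0.822 is an ~11% error reduction.
reduction = relative_error_reduction(0.80, 0.822)
```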

Availability and implementation: Source code for the LSTM-CRF is available at https://github.com/Franck-Dernoncourt/NeuroNER/ and links to the corpora are available at https://github.com/BaderLab/Transfer-Learning-BNER-Bioinformatics-2018/.

Supplementary information: Supplementary data are available at Bioinformatics online.


Figures

Fig. 1. Architecture of the hybrid long short-term memory network-conditional random field (LSTM-CRF) model for named entity recognition (NER). Here, xi is the i-th token in the input sequence, xij is the j-th character of the i-th token, ℓ(i) is the number of characters in the i-th token and ei is the character-enhanced token embedding of the i-th token. For transfer learning experiments, we train the parameters of the model on a source dataset and transfer all of the parameters to initialize the model for training on a target dataset.
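The transfer scheme in the caption (train on the source SSC, copy all parameters, continue training on the target GSC) can be sketched in plain Python. The `Model` class and its `train` method are placeholders standing in for the full LSTM-CRF, not the authors' implementation.

```python
import copy

class Model:
    def __init__(self, params=None):
        # `params` stands in for all LSTM-CRF weights (char/token embeddings,
        # LSTM layers, CRF transition scores).
        self.params = params if params is not None else {
            "char_lstm": 0.0, "token_lstm": 0.0, "crf": 0.0}

    def train(self, dataset, steps=1):
        # Placeholder "training": nudge every parameter once per step,
        # scaled by dataset size, to mimic learning.
        for _ in range(steps):
            for k in self.params:
                self.params[k] += 0.1 * len(dataset)
        return self

source_model = Model().train(dataset=["ssc_doc"] * 5, steps=2)  # large, noisy SSC
target_model = Model(copy.deepcopy(source_model.params))        # transfer ALL parameters
target_model.train(dataset=["gsc_doc"] * 2, steps=1)            # fine-tune on small GSC
```

The key design choice, per the caption, is that every parameter is transferred rather than only the lower layers; the target model then continues training from that initialization instead of from scratch.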
Fig. 2. Impact of transfer learning on the F1-scores. Baseline corresponds to training the model only with the target dataset, and transfer learning corresponds to training on the source dataset followed by training on the target dataset. The number of training examples used in the target training set is reported as a percent of the overall GSC size (e.g. for a GSC of 100 documents, a target train set size of 60% corresponds to 60 documents). Error bars represent the standard deviation (SD) for n = 3 trials. If an error bar is not shown, it was smaller than the size of the data point symbol. (a-d) Impact of transfer learning on the F1-scores of four select corpora.
Fig. 3. Box plots representing absolute F1-score improvement over the baseline after transfer learning, grouped by the total number of annotations in the target gold-standard corpora (GSCs). Bin boundaries were generated using the R package binr (Izrailev, 2015). Scores for individual GSCs are plotted, where point shapes indicate statistical significance (P < 0.05).
Fig. 4. Venn diagrams demonstrating the area of overlap among the true-positive (TP), false-negative (FN) and false-positive (FP) sets of the baseline (B) and transfer learning (TL) methods per entity class.
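The TP/FP/FN sets compared in the Venn diagrams determine the F1-scores used throughout the paper via the standard precision/recall definitions. A minimal sketch with hypothetical counts (not taken from the paper):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 from true-positive, false-positive and
    false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 8 entities found correctly, 2 spurious, 2 missed.
p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=2)
```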
