Word2vec convolutional neural networks for classification of news articles and tweets

PLoS One. 2019 Aug 22;14(8):e0220976. doi: 10.1371/journal.pone.0220976. eCollection 2019.

Abstract

Big web data from sources such as online news and Twitter are good resources for studying deep learning. However, collected news articles and tweets almost certainly contain data that are unnecessary for learning and therefore hinder accurate learning. This paper explores the performance of word2vec Convolutional Neural Networks (CNNs) in classifying news articles and tweets as related or unrelated. Using the two word-embedding algorithms of word2vec, Continuous Bag-of-Words (CBOW) and Skip-gram, we constructed a CNN with the CBOW model and a CNN with the Skip-gram model. We then measured the classification accuracy of the CNN with CBOW, the CNN with Skip-gram, and a CNN without word2vec on real news articles and tweets. The experimental results indicate that word2vec significantly improved the accuracy of the classification model. The accuracy of the CBOW model was higher and more stable than that of the Skip-gram model. The CBOW model performed better on news articles, whereas the Skip-gram model performed better on tweets. Overall, the word2vec CNNs were more effective on news articles than on tweets, because news articles are typically more uniform than tweets.
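The pipeline described above, word2vec embeddings fed into a CNN classifier, can be illustrated with a minimal sketch. This is not the authors' code; it assumes gensim (>= 4.0) for word2vec and TensorFlow/Keras for the CNN, and the corpus, labels, and hyperparameters (vector_size=100, 128 filters, kernel size 3, max_len=50) are illustrative placeholders rather than the paper's configuration. The gensim parameter sg selects the embedding algorithm (sg=0 for CBOW, sg=1 for Skip-gram).

    import numpy as np
    from gensim.models import Word2Vec
    import tensorflow as tf

    # Placeholder tokenized corpus and binary labels (1 = related, 0 = unrelated).
    tokenized_docs = [["stock", "prices", "rise"], ["cute", "cat", "video"]]
    labels = np.array([1, 0])

    # Train word2vec embeddings: sg=0 -> CBOW, sg=1 -> Skip-gram.
    w2v = Word2Vec(tokenized_docs, vector_size=100, window=5, min_count=1, sg=0)

    # Map each document to a fixed-length sequence of embedding vectors.
    max_len = 50
    def embed(doc):
        vecs = [w2v.wv[w] for w in doc if w in w2v.wv][:max_len]
        vecs += [np.zeros(100)] * (max_len - len(vecs))  # zero-pad short documents
        return np.stack(vecs)

    X = np.stack([embed(d) for d in tokenized_docs]).astype("float32")

    # A small 1-D CNN over the embedded token sequences for binary classification.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(128, 3, activation="relu", input_shape=(max_len, 100)),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, labels, epochs=3, verbose=0)

Switching sg=0 to sg=1 is the only change needed to compare the CBOW and Skip-gram variants; the "CNN without word2vec" baseline would instead use randomly initialized or one-hot inputs.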

MeSH terms

  • Deep Learning / statistics & numerical data*
  • Humans
  • Information Dissemination
  • Social Media / statistics & numerical data*

Associated data

  • figshare/10.6084/m9.figshare.5183710.v1

Grants and funding

The authors received no specific funding for this work.