Does Twitter language reliably predict heart disease? A commentary on Eichstaedt et al. (2015a)

PeerJ. 2018 Sep 21;6:e5656. doi: 10.7717/peerj.5656. eCollection 2018.

Abstract

We comment on Eichstaedt et al.'s (2015a) claim to have shown that language patterns among Twitter users, aggregated at the level of US counties, predicted county-level mortality rates from atherosclerotic heart disease (AHD), with "negative" language associated with higher rates of death from AHD and "positive" language associated with lower rates. First, we examine some of Eichstaedt et al.'s apparent assumptions about the nature of AHD, as well as some issues related to the secondary analysis of online data and to considering counties as communities. Next, using the data files supplied by Eichstaedt et al., we reproduce their regression- and correlation-based models, substituting mortality from an alternative cause of death, namely suicide, as the outcome variable, and observe that the purported associations between "negative" and "positive" language and mortality are reversed when suicide is used as the outcome variable. We identify numerous other conceptual and methodological limitations that call into question the robustness and generalizability of Eichstaedt et al.'s claims, even when these are based on the results of their ridge regression/machine learning model. We conclude that there is no good evidence that analyzing Twitter data in bulk in this way can add anything useful to our ability to understand geographical variation in AHD mortality rates.
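To make the outcome-substitution step concrete, the following minimal sketch (in Python) illustrates the kind of county-level reanalysis described above. It is not the authors' code: the file name and column names (county_level_data.csv, negative_language, ahd_mortality, suicide_mortality) are hypothetical placeholders, and the sketch assumes a merged county-level table containing an aggregated Twitter language score alongside both mortality rates.

import pandas as pd
from scipy.stats import pearsonr

# Hypothetical merged county-level table (one row per county) containing a
# Twitter "negative" language score and two mortality rates.
counties = pd.read_csv("county_level_data.csv")

# Correlate the "negative" language score with each outcome in turn,
# mirroring the substitution of suicide mortality for AHD mortality.
for outcome in ["ahd_mortality", "suicide_mortality"]:
    r, p = pearsonr(counties["negative_language"], counties[outcome])
    print(f"negative language vs {outcome}: r = {r:.3f}, p = {p:.3g}")

Comparing the sign and magnitude of the two correlations in this way is the simplest version of the check reported above; the published reanalysis additionally reproduces the original regression-based models with the same substitution.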

Keywords: Artifacts; Big data; Emotions; False positives; Heart disease; Language; Risk factors; Social media; Well-being.

Grants and funding

The authors received no funding for this work.