Semantics Derived Automatically From Language Corpora Contain Human-Like Biases

Aylin Caliskan et al. Science. 2017;356(6334):183-186.

Abstract

Machine learning is a means to derive artificial intelligence by discovering patterns in existing data. Here, we show that applying machine learning to ordinary human language results in human-like semantic biases. We replicated a spectrum of known biases, as measured by the Implicit Association Test, using a widely used, purely statistical machine-learning model trained on a standard corpus of text from the World Wide Web. Our results indicate that text corpora contain recoverable and accurate imprints of our historic biases, whether morally neutral as toward insects or flowers, problematic as toward race or gender, or even simply veridical, reflecting the status quo distribution of gender with respect to careers or first names. Our methods hold promise for identifying and addressing sources of bias in culture, including technology.
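
The bias measurements described in the abstract follow the logic of the Implicit Association Test: they compare how strongly two sets of target words (e.g., flowers vs. insects) associate with two sets of attribute words (e.g., pleasant vs. unpleasant) in a word-embedding space. The sketch below is a minimal, hypothetical illustration of that kind of association test, using cosine similarity over hand-made toy vectors; the vectors, word lists, and function names are invented for illustration and are not the paper's data or code.

    import numpy as np

    def cosine(u, v):
        """Cosine similarity between two word vectors."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def association(w, A, B, vec):
        """Mean similarity of word w to attribute set A minus attribute set B."""
        return (np.mean([cosine(vec[w], vec[a]) for a in A])
                - np.mean([cosine(vec[w], vec[b]) for b in B]))

    def effect_size(X, Y, A, B, vec):
        """Standardized difference in association between target sets X and Y
        with respect to attribute sets A and B (a WEAT-style effect size)."""
        assoc_X = [association(x, A, B, vec) for x in X]
        assoc_Y = [association(y, A, B, vec) for y in Y]
        return (np.mean(assoc_X) - np.mean(assoc_Y)) / np.std(assoc_X + assoc_Y, ddof=1)

    # Toy 3-dimensional "embeddings", purely illustrative.
    vec = {
        "rose":   np.array([0.9, 0.1, 0.0]),
        "tulip":  np.array([0.8, 0.2, 0.1]),
        "ant":    np.array([0.1, 0.9, 0.0]),
        "flea":   np.array([0.2, 0.8, 0.1]),
        "love":   np.array([1.0, 0.0, 0.2]),
        "peace":  np.array([0.9, 0.1, 0.3]),
        "hatred": np.array([0.0, 1.0, 0.2]),
        "filth":  np.array([0.1, 0.9, 0.3]),
    }

    flowers, insects = ["rose", "tulip"], ["ant", "flea"]
    pleasant, unpleasant = ["love", "peace"], ["hatred", "filth"]

    print(effect_size(flowers, insects, pleasant, unpleasant, vec))
    # A positive value means the flower words sit closer to the pleasant
    # attribute words than the insect words do.

In the paper, this kind of comparison is run over embeddings trained on a large web corpus and over the full word lists taken from the IAT literature; the toy vectors above only demonstrate the arithmetic of the association measure.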

Comment in

  • An AI stereotype catcher.
    Greenwald AG. Science. 2017 Apr 14;356(6334):133-134. doi: 10.1126/science.aan0649. PMID: 28408558.
