Assessing and overcoming participant dishonesty in online data collection

Behav Res Methods. 2018 Aug;50(4):1563-1567. doi: 10.3758/s13428-017-0984-5.

Abstract

Crowdsourcing services such as MTurk have opened a large pool of participants to researchers. Unfortunately, it can be difficult to confidently acquire a sample that matches a given demographic, psychographic, or behavioral dimension. This problem exists because little is known about individual participants and because some participants are motivated to misrepresent their identity for financial reward. Although online workers do not typically display a greater-than-average level of dishonesty, when researchers overtly request that only a certain population take part in an online study, a nontrivial portion of participants misrepresent their identity. In this study, a proposed system is tested that researchers can use to quickly, fairly, and easily screen participants on any dimension. In contrast to an overt request, the reported system results in significantly fewer (near-zero) instances of participant misrepresentation. Tests for misrepresentation were conducted using a large database of past participant records (~45,000 unique workers). This research presents and tests an important tool for the increasingly prevalent practice of online data collection.
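The screening logic the abstract describes can be sketched in minimal form: rather than announcing the target population in the study listing, eligibility is decided from answers workers gave in an earlier, unrelated survey. All names, the record schema, and the `eligible_workers` helper below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of covert, database-based screening (assumed schema;
# not the authors' code). Eligibility comes from previously recorded
# answers, so the current study's target dimension is never announced.

# Prior-survey records: worker_id -> responses from an earlier generic survey
PRIOR_RECORDS = {
    "W001": {"gender": "female", "smoker": True},
    "W002": {"gender": "male", "smoker": False},
    "W003": {"gender": "female", "smoker": False},
}

def eligible_workers(records, dimension, value):
    """Return IDs of workers whose previously recorded answer matches.

    Workers with no record on the dimension are simply excluded rather
    than asked, so no one learns which answer unlocks the study.
    """
    return sorted(
        wid for wid, responses in records.items()
        if responses.get(dimension) == value
    )

# Only matching workers would be granted access (e.g., via an MTurk
# qualification); the posted task itself states no demographic requirement.
invited = eligible_workers(PRIOR_RECORDS, "gender", "female")
```

In practice the eligible IDs would be attached to a platform-level qualification so that non-matching workers never see the study, removing the financial incentive to misrepresent.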

Keywords: MTurk; Online participants; Participant honesty; Qualification; Sampling.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Crowdsourcing / methods*
  • Data Collection / methods*
  • Databases as Topic
  • Disclosure / standards*
  • Female
  • Humans
  • Internet* / statistics & numerical data
  • Male
  • Research Design*