Background: Internet survey modalities often compare unfavorably with traditional survey modalities, particularly with respect to response rates. Response to Internet surveys can be affected by the distribution options and response/collection features employed, as well as by automated (out-of-office) replies, automated forwarding, server rejections, and organizational or personal spam filters. However, Internet surveys also provide unparalleled opportunities to track study subjects and to examine many of the factors that influence response rates. The tracking data available for Internet surveys provide detailed information and immediate feedback on a significant component of response that other survey modalities cannot match. This paper presents a response audit of a large Internet survey of more than 5000 cancer care providers and administrators in Ontario, Canada.
Objective: Building upon the CHEcklist for Reporting Results of Internet E-Surveys (CHERRIES), the main objectives of the paper are to (a) assess the impact of a range of factors on the determination of response rates for Internet surveys and (b) recommend steps for improving published descriptions of Internet survey methods.
Methods: We audited the survey response data, analyzing the factors that affected the numerator and the denominator in the final determination of the response rate. We also conducted a sensitivity analysis to account for the inherent uncertainty in how some of these factors affect the response rate.
Results: The survey was initially sent to 5636 health care providers and administrators. The numerator was influenced by duplicate/unattached responses and by response completeness; it varied from a maximum of 2031 crude (unadjusted) responses to 1849 unique views, 1769 participants, and 1616 complete responses. The denominator was influenced by forwarding of the invitation email to unknown individuals, server rejections, automated replies, spam filters, and 'opt out' options; it varied from a minimum of 5106 to a maximum of 5922. Under the different assumptions for the numerator and the denominator, the sensitivity analysis yielded a 12.5 percentage point variation in the response rate (from a minimum of 27.3% to a maximum of 39.8%), with a best estimate of 32.8%.
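The reported bounds follow directly from these counts, with the extremes obtained by pairing the largest numerator with the smallest denominator and the smallest numerator with the largest denominator (a worked check of the figures above; the best estimate of 32.8% lies between these bounds):

$$\mathrm{RR}_{\max} = \frac{2031}{5106} \approx 39.8\%, \qquad \mathrm{RR}_{\min} = \frac{1616}{5922} \approx 27.3\%$$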
Conclusions: Depending on how the numerator and denominator are chosen, the resulting response rates can vary widely. The CHERRIES statement was an important advance in identifying key characteristics of Internet surveys that can influence response rates. This response audit suggests the need to further clarify some of these factors when reporting on Internet surveys for health care providers and administrators, particularly when using commercially available Internet survey packages for specified, rather than convenience, samples.