Background: Institutional review boards (IRBs) are charged with safeguarding potential research subjects with limited literacy but may have an inadvertent role in promulgating unreadable consent forms. We hypothesized that text provided by IRBs in informed-consent forms falls short of the IRBs' own readability standards and that readability is influenced by the level of research activity, local literacy rates, and federal oversight.
Methods: To test these hypotheses, we conducted a cross-sectional study linking data from several public-use sources. A total of 114 Web sites of U.S. medical schools were surveyed for IRB readability standards and informed-consent-form templates. Actual readability was measured with the Flesch-Kincaid scale, which assigns a score on the basis of the minimal grade level required to read and understand English text (range, 0 to 12). Data on the level of research activity, local literacy rates, and federal oversight were obtained from organizational Web sites.
Results: The average readability score for text provided by IRBs was 10.6 (95 percent confidence interval, 10.3 to 10.8) on the Flesch-Kincaid scale. Specific readability standards, found on 61 Web sites (54 percent), ranged from a 5th-grade reading level to a 10th-grade reading level. The mean Flesch-Kincaid scores for the readability of sample text provided by IRBs exceeded the stated standard by 2.8 grade levels (95 percent confidence interval, 2.4 to 3.2; P<0.001). Readability was not associated with either the level of research funding (P=0.89) or local rates of literacy (P=0.92). However, the 52 schools that had been subject to oversight by the Office for Human Research Protections (46 percent) had lower Flesch-Kincaid scores than the other schools (10.2 vs. 10.9, P=0.005).
Conclusions: IRBs commonly provide text for informed-consent forms that falls short of their own readability standards. Federal oversight is associated with better readability.
Copyright 2003 Massachusetts Medical Society