Development and evaluation of a quality score for abstracts

BMC Med Res Methodol. 2003 Feb 11;3:2. doi: 10.1186/1471-2288-3-2. Epub 2003 Feb 11.


Background: The evaluation of abstracts for scientific meetings has been shown to suffer from poor interobserver reliability. A measure was developed to assess the formal quality of abstract submissions in a standardized way.

Methods: Item selection was based on scoring systems for full reports, taking into account published guidelines for structured abstracts. Interrater agreement was examined using a random sample of submissions to the American Gastroenterological Association, stratified for research type (n = 100, 1992-1995). For construct validity, the association of formal quality with acceptance for presentation was examined. A questionnaire to expert reviewers evaluated sensibility items, such as ease of use and comprehensiveness.

Results: The index comprised 19 items. The summary quality scores showed good interrater agreement (intraclass correlation coefficient 0.60-0.81). Good abstract quality was associated with acceptance for presentation at the meeting. Expert reviewers found the instrument acceptable.
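The interrater agreement reported above is an intraclass correlation coefficient. As a minimal illustration (not the authors' code; the one-way random-effects ICC(1,1) model, function name, and data are assumptions for the sketch), such a coefficient can be computed from a subjects-by-raters score matrix:

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for a list of rows,
    one row per subject, one column per rater."""
    n = len(ratings)          # number of subjects (abstracts)
    k = len(ratings[0])       # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    subj_means = [sum(row) / k for row in ratings]
    # Between-subjects and within-subjects mean squares
    ms_between = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    ms_within = sum(
        (x - m) ** 2 for row, m in zip(ratings, subj_means) for x in row
    ) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Perfect agreement between two raters yields an ICC of 1.0
print(icc_oneway([[1, 1], [2, 2], [3, 3]]))
```

Values near 1 indicate that most score variance lies between abstracts rather than between raters, which is why the reported range of 0.60-0.81 is read as good agreement.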

Conclusion: A quality index for evaluating scientific meeting abstracts was developed and shown to be reliable, valid, and useful.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Congresses as Topic*
  • Humans
  • Manuscripts as Topic*
  • Peer Review, Research / methods*
  • Publishing / standards*
  • Reproducibility of Results
  • Sensitivity and Specificity
  • Total Quality Management / methods