Comparison of two computer algorithms to identify surgical site infections

Surg Infect (Larchmt). 2011 Dec;12(6):459-64. doi: 10.1089/sur.2010.109. Epub 2011 Dec 2.


Background: Surgical site infections (SSIs), the second most common healthcare-associated infections, significantly prolong hospital stays and increase healthcare costs. Traditional surveillance of SSIs is labor-intensive. Mandatory reporting and new non-payment policies for some SSIs increase the need for efficient, standardized surveillance methods. Computer algorithms using routinely collected administrative, clinical, and laboratory data have shown promise for complementing traditional surveillance.

Methods: Two computer algorithms were created to identify SSIs in inpatient admissions to an urban, academic tertiary-care hospital in 2007, using International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis codes (Rule A) and laboratory culture data (Rule B). We calculated the number of SSIs identified by each rule and by both rules combined, as well as the percent agreement between the rules. In a subset analysis, the results of the rules were compared with those of traditional surveillance in patients who had undergone coronary artery bypass graft surgery (CABG).

Results: Of the 28,956 index hospital admissions, 5,918 patients (20.4%) had at least one major surgical procedure. Among those admissions and readmissions within 30 days, the ICD-9-CM-only rule identified 235 SSIs and the culture-only rule identified 287 SSIs; combined, the rules identified 426 SSIs, of which 96 were identified by both rules. Positive and negative agreement between the rules was 36.8% and 97.1%, respectively, with a kappa of 0.34 (95% confidence interval [CI] 0.27-0.41). In the subset analysis of patients who underwent CABG, of the 22 SSIs identified by traditional surveillance, Rule A identified 19 (86.4%) and Rule B identified 13 (59.1%). Positive and negative agreement between Rules A and B within these "positive controls" was 81.3% and 50.0%, with a kappa of 0.37 (95% CI 0.04-0.70).
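The agreement statistics above can be reconstructed from the reported counts. A minimal sketch, assuming the 5,918 surgical admissions serve as the denominator of the 2×2 table (an assumption; the paper's exact analytic cohort may differ slightly):

```python
def agreement_stats(a, b, c, d):
    """Agreement statistics for a 2x2 table:
    a = both rules positive, b = Rule A only,
    c = Rule B only, d = both rules negative."""
    n = a + b + c + d
    pos_agreement = 2 * a / (2 * a + b + c)  # specific positive agreement
    neg_agreement = 2 * d / (2 * d + b + c)  # specific negative agreement
    p_o = (a + d) / n                        # observed agreement
    # Chance-expected agreement from the marginal totals
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (p_o - p_e) / (1 - p_e)          # Cohen's kappa
    return pos_agreement, neg_agreement, kappa

# Counts from the abstract: 96 SSIs flagged by both rules,
# 235 by Rule A (ICD-9-CM), 287 by Rule B (cultures),
# 426 by either, out of an assumed 5,918 admissions.
pos, neg, kappa = agreement_stats(a=96, b=235 - 96, c=287 - 96, d=5918 - 426)
print(f"positive agreement {pos:.1%}, negative agreement {neg:.1%}, kappa {kappa:.2f}")
# → positive agreement 36.8%, negative agreement 97.1%, kappa 0.34
```

With that denominator, the computed values match the reported 36.8%, 97.1%, and kappa of 0.34, supporting the reading that the full surgical cohort was the unit of analysis.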

Conclusion: Differences in the rates of SSI identified by computer algorithms depend on the sources of electronic data and their inherent biases. Different algorithms may be appropriate depending on the purpose of case identification. Further research is suggested on the reliability and validity of these algorithms and on the impact of changes in reimbursement on clinician practices and electronic reporting.

Publication types

  • Comparative Study
  • Research Support, N.I.H., Extramural

MeSH terms

  • Algorithms*
  • Data Collection / methods*
  • Diagnosis, Computer-Assisted*
  • Humans
  • International Classification of Diseases
  • Length of Stay / statistics & numerical data
  • New York City
  • Patient Readmission / statistics & numerical data
  • Reoperation / statistics & numerical data
  • Surgical Wound Infection / epidemiology*