An evaluation of expert human and automated Abbreviated Injury Scale and ICD-9-CM injury coding

J Trauma. 1994 Apr;36(4):499-503. doi: 10.1097/00005373-199404000-00007.


Two hundred ninety-five injury descriptions from 135 consecutive patients treated at a level-I trauma center were coded by three human coders (H1, H2, H3) and by TRI-CODE (T), a PC-based artificial intelligence software program. Two study coders are nationally recognized experts who teach AIS coding for its developers (the Association for the Advancement of Automotive Medicine); the third has 5 years' experience in ICD and AIS coding. A "correct coding" (CC) was established for the study injury descriptions, and coding results were obtained for each coder relative to the CC. The correct ICD codes were selected in 96% of cases for H2, 92% for H1, 91% for T, and 86% for H3. The three human coders agreed on 222 (75%) injuries. The correct 7-digit AIS codes (six identifying digits and the severity digit) were selected in 93% of cases for H2, 87% for T, 77% for H3, and 73% for H1. The correct AIS severity codes (seventh digit only) were selected in 98.3% of cases for H2, 96.3% for T, 93.9% for H3, and 90.8% for H1. On the basis of the weighted kappa statistic, TRI-CODE had excellent agreement with the correct coding (CC) of AIS severities. Each human coder had excellent agreement with CC and with TRI-CODE. Coders H1 and H2 were in excellent agreement, and coder H3 was in good agreement with H1 and H2. However, errors among the human coders often occurred for different codes, accentuating the variability. (ABSTRACT TRUNCATED AT 250 WORDS)
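The agreement figures above rest on the weighted kappa statistic, which credits partial agreement between ordinal ratings such as AIS severity digits (1-6). A minimal sketch of linear-weighted Cohen's kappa for two raters follows; the function name and example data are illustrative, not taken from the study, and the paper does not specify which weighting scheme was used:

```python
def weighted_kappa(rater_a, rater_b, categories):
    """Linear-weighted Cohen's kappa for two raters over ordinal categories.

    Disagreement is penalized in proportion to the distance between the
    two assigned categories, so near-misses count less than gross errors.
    """
    n = len(rater_a)
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}

    # Observed joint distribution (confusion matrix as proportions).
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[idx[a]][idx[b]] += 1.0 / n

    # Marginal distributions for each rater.
    pa = [sum(row) for row in obs]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]

    # Linear disagreement weights: w[i][j] = |i - j| / (k - 1).
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]

    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp


# Hypothetical AIS severity digits (1-6) from two coders, for illustration only.
coder_1 = [1, 2, 2, 3, 4, 5, 3, 2]
coder_2 = [1, 2, 3, 3, 4, 5, 3, 1]
print(weighted_kappa(coder_1, coder_2, [1, 2, 3, 4, 5, 6]))
```

Identical rating vectors yield a kappa of 1.0, while agreement no better than chance yields 0; published benchmarks such as Landis and Koch label values above roughly 0.8 "excellent," which is the sense in which the abstract uses the term.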

Publication types

  • Comparative Study

MeSH terms

  • Abbreviated Injury Scale*
  • Artificial Intelligence*
  • Classification
  • Humans
  • Observer Variation
  • Wounds and Injuries / classification*