Objective: The aims of this research were (a) to study the interrater reliability of a posture observation method, (b) to test the impact of different posture categorization systems on interrater reliability, and (c) to provide guidelines for improving interrater reliability.
Background: Estimation of posture through observation is challenging. Previous studies have shown varying degrees of validity and reliability, providing little information about conditions necessary to achieve acceptable reliability.
Method: Seven raters estimated posture angles from video recordings. Different measures of interrater reliability, including percentage agreement, precision (expressed as the interrater standard deviation), and intraclass correlation coefficients (ICCs), were computed.
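As an illustration of one of the measures named above, the single-rater, absolute-agreement intraclass correlation ICC(2,1) can be computed from a subjects-by-raters matrix of angle estimates. This is a minimal sketch of the standard two-way random-effects formula, not the study's actual analysis code; the function name and data layout are assumptions.

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: (n_subjects, k_raters) array of angle estimates in degrees."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    # Partition the total sum of squares into subject, rater, and error parts.
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)            # mean square for subjects
    msc = ss_cols / (k - 1)            # mean square for raters
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: three subjects, two raters with a constant 1-degree offset.
print(icc2_1([[1, 2], [2, 3], [3, 4]]))
```

Raters who agree perfectly give an ICC of 1.0; a systematic offset between raters lowers the absolute-agreement ICC even though the ratings are perfectly correlated.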
Results: Some posture parameters, such as upper-arm flexion and extension, had ICCs of 0.50 or higher. Most posture parameters had a precision of approximately 10 degrees. The predefined categorization and the 30 degrees categorization strategies showed substantially better agreement among the raters than did the 10 degrees strategy.
Conclusions: The different interrater reliability measures described different aspects of agreement for the posture observation tool, and the level of agreement differed substantially between them. Observation of large body parts generally resulted in better reliability. Wider angle intervals resulted in better percentage agreement than narrower intervals; for most postures, 30-degree angle intervals are appropriate. Training in the use of a properly designed data entry system, together with clear posture definitions and relevant examples, including definitions of the neutral positions of the various body parts, will help improve interrater reliability.
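The effect of interval width on percentage agreement can be made concrete: bin each rater's angle estimate into fixed-width categories and count the subjects on which all raters land in the same category. This is a hypothetical sketch of that calculation, not the study's actual scoring procedure.

```python
import numpy as np

def percent_agreement(angles, width=30):
    """Percentage of subjects for which all raters assign the same
    angle category of the given interval width (in degrees).
    angles: (n_subjects, k_raters) array of observed angles."""
    cats = np.floor_divide(np.asarray(angles, dtype=float), width)
    # A subject counts as agreement only if every rater's category
    # matches the first rater's category.
    same = (cats == cats[:, [0]]).all(axis=1)
    return 100.0 * same.mean()

# Hypothetical data: three subjects, two raters.
obs = [[5, 25], [40, 70], [95, 100]]
print(percent_agreement(obs, width=30))  # wider bins absorb rater disagreement
print(percent_agreement(obs, width=10))  # narrower bins split raters apart
```

With 30-degree bins the raters agree on two of the three subjects, while 10-degree bins separate them on all three, mirroring the conclusion that wider intervals yield better percentage agreement.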
Application: The results provide ergonomics practitioners with information about the interrater reliability of a postural observation method and guidelines for improving interrater reliability for video-recorded field data.