Do you see what I see? Can non-experts with minimal training reproduce expert ratings in behavioral assessments of working dogs?

Behav Processes. 2015 Jan;110:105-16. doi: 10.1016/j.beproc.2014.09.028. Epub 2014 Sep 28.

Abstract

Working-dog organizations often use behavioral ratings by experts to evaluate a dog's likelihood of success. However, these experts are frequently under severe time constraints. One way to alleviate the pressure on limited organizational resources would be to use non-experts to assess dog behavior. Here, in populations of military working dogs (Study 1) and explosive-detection dogs (Study 2), we evaluated the reliability and validity of behavioral ratings made from videotapes by minimally trained non-experts. Analyses yielded evidence for generally good levels of inter-observer reliability and criterion validity (indexed by convergence between the non-expert ratings and ratings made previously by experts). We found some variation across items in Study 2: reliability and validity were significantly lower for three of the 18 items, and one item had reliability and validity estimates that were heavily influenced by the behavioral test environment. There were no differences in reliability or validity based on the age of the dog. Overall, the results suggest that for most items, ratings made by minimally trained non-experts can serve as a viable alternative to expert ratings, freeing the limited resources of highly trained staff. This article is part of a Special Issue entitled: Canine Behavior.
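The two quantities the abstract reports, inter-observer reliability and criterion validity, are commonly operationalized as an intraclass correlation among raters and a rank correlation against the criterion ratings, respectively. The sketch below illustrates that kind of computation; it is not the authors' exact analysis, and the data, the choice of ICC(2,1), and the use of Spearman's rho are illustrative assumptions.

```python
# Minimal sketch (assumed analysis, not the paper's exact pipeline):
# quantify inter-observer reliability with ICC(2,1) and criterion
# validity with Spearman's rho against prior expert ratings.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical ratings: rows = dogs, columns = minimally trained non-experts.
nonexpert = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [1, 2, 1],
], dtype=float)

# Hypothetical prior ratings of the same dogs by an expert (the criterion).
expert = np.array([4, 2, 5, 3, 1], dtype=float)

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between dogs
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Inter-observer reliability among the non-experts.
print(f"ICC(2,1) = {icc_2_1(nonexpert):.2f}")

# Criterion validity: convergence of the mean non-expert rating
# with the expert rating for the same dogs.
rho, p = spearmanr(nonexpert.mean(axis=1), expert)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```

In practice these statistics would be computed per behavioral item, which is how item-level differences like those reported for Study 2 become visible.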

Keywords: Behavior assessment; Dog; Dog experience; Reliability; Validity.

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Animals
  • Aptitude*
  • Behavior, Animal / physiology*
  • Dogs
  • Humans
  • Observer Variation
  • Reproducibility of Results
  • Work*