Rates of False-Positive Classification Resulting From the Analysis of Additional Embedded Performance Validity Measures

Appl Neuropsychol Adult. 2015;22(5):335-47. doi: 10.1080/23279095.2014.938809. Epub 2015 Jan 13.

Abstract

Several studies have documented improvements in the classification accuracy of performance validity tests (PVTs) when they are combined to form aggregated models. Fewer studies have evaluated the impact of aggregating additional PVTs and changing the classification threshold within these models. A recent Monte Carlo simulation demonstrated that to maintain a false-positive rate (FPR) of ≤.10, only 1, 4, 8, 10, and 15 PVTs should be analyzed at classification thresholds of failing at least 1, at least 2, at least 3, at least 4, and at least 5 PVTs, respectively. The current study sought to evaluate these findings with embedded PVTs in a sample of real-life litigants and to highlight a potential danger in analytic flexibility with embedded PVTs. Results demonstrated that to maintain an FPR of ≤.10, only 3, 7, 10, 14, and 15 PVTs should be analyzed at classification thresholds of failing at least 1, at least 2, at least 3, at least 4, and at least 5 PVTs, respectively. Analyzing more than these numbers of PVTs resulted in a dramatic increase in the FPR. In addition, in the most extreme case, flexibility in analyzing and reporting embedded PVTs increased the FPR by 67%. Given these findings, a more objective approach to analyzing and reporting embedded PVTs should be introduced.
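The aggregation logic the abstract describes can be illustrated with a simplified probability sketch. Assuming each PVT is independent with a fixed per-test false-positive rate of .10 (both assumptions for illustration only — the Monte Carlo simulation cited above modeled intercorrelated PVTs, which is why its published limits are stricter than the independence figures below), the chance of a valid performer failing at least k of n PVTs follows a binomial tail, and the largest n keeping that tail at or below .10 can be found directly. The function names `fpr` and `max_pvts` are hypothetical, not from the study:

```python
from math import comb

def fpr(n: int, k: int, p: float = 0.10) -> float:
    """P(failing at least k of n independent PVTs), each with per-test FPR p.

    Independence and a uniform per-test FPR are simplifying assumptions;
    real embedded PVTs are intercorrelated, which inflates this tail.
    """
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k, n + 1))

def max_pvts(k: int, ceiling: float = 0.10, n_max: int = 40) -> int:
    """Largest number of PVTs keeping the aggregate FPR <= ceiling at threshold k.

    The tail probability grows monotonically in n for fixed k, so the last
    n that satisfies the ceiling is the answer.
    """
    best = 0
    for n in range(k, n_max + 1):
        if fpr(n, k, ceiling and 0.10) <= ceiling:
            best = n
    return best

# Under independence: at a threshold of failing >= 1 PVT, only 1 test keeps
# the FPR at .10 (analyzing 2 raises it to .19); at >= 2, five tests keep it
# at about .081 while six push it to about .114.
print(max_pvts(1))       # -> 1
print(round(fpr(5, 2), 5))  # -> 0.08146
```

Because correlated failures co-occur more often than independence predicts, the simulation's limits (1, 4, 8, ... PVTs) fall below these independence-based ones — the direction of the discrepancy is itself instructive about why empirical or simulated intercorrelations matter.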

Keywords: embedded measures; neuropsychology; symptom validity testing.

MeSH terms

  • Adult
  • Data Interpretation, Statistical*
  • Female
  • Humans
  • Male
  • Malingering / diagnosis*
  • Middle Aged
  • Monte Carlo Method
  • Neuropsychological Tests / statistics & numerical data*
  • Task Performance and Analysis*