The present study examined issues related to the structural modeling of abilities using both simulated data and the standardization data from the Woodcock-Johnson-III. In both cases, results were evaluated with cross-validation. Simulation results showed that cross-validation with an independent data set identified the model used to generate test scores more successfully than did several fit indices. Analysis of the Woodcock-Johnson-III standardization data showed that bifactor models fit better than hierarchical or correlated-factor models, whether evaluated by fit indices or by cross-validation. When the bifactor models were used to partition variance, general and specific factors shared a considerable amount of variance. These results suggest some ambiguity in determining exactly how much covariance in test performance is accounted for by general versus specific factors, which calls into question the practice of adjusting or controlling for general abilities when evaluating measures of specific abilities. Evidence for the validity of a construct should not be limited to factor analysis of tests purported to measure that construct.
Keywords: WJ-III; bifactor models; cross-validation.
© The Author(s) 2015.