Computation for Latent Variable Model Estimation: A Unified Stochastic Proximal Framework

Psychometrika. 2022 Dec;87(4):1473-1502. doi: 10.1007/s11336-022-09863-9. Epub 2022 May 7.

Abstract

Latent variable models play a central role in psychometrics and related fields. In many modern applications, inference based on latent variable models involves one or more of the following features: (1) the presence of many latent variables, (2) observed and latent variables that are continuous, discrete, or a combination of both, (3) constraints on parameters, and (4) penalties on parameters to impose model parsimony. Estimation typically requires maximizing an objective function based on a marginal likelihood or pseudo-likelihood, possibly with constraints and/or penalties on the parameters. Solving this optimization problem is highly non-trivial because of the complexities introduced by the features above. Although several efficient algorithms have been proposed, a unified computational framework that accounts for all of these features is still lacking. This paper fills that gap. Specifically, we provide a unified formulation of the optimization problem and propose a quasi-Newton stochastic proximal algorithm. Theoretical properties of the proposed algorithm are established. Its computational efficiency and robustness are demonstrated through simulation studies under a variety of latent variable model estimation settings.
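To make the abstract's estimation problem concrete, the sketch below illustrates the general recipe it describes: a stochastic proximal update for a penalized objective, combined with Polyak–Ruppert averaging. This is a minimal illustrative sketch, not the authors' algorithm: it uses a plain (non-quasi-Newton) stochastic proximal gradient step with an assumed L1 penalty, and the names grad_sample, lam, step0, and n_iters are hypothetical placeholders. The caller is assumed to supply grad_sample as an unbiased stochastic gradient of the smooth part of the objective, e.g., a Monte Carlo estimate obtained by sampling the latent variables.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def stochastic_proximal_estimate(grad_sample, theta0, lam,
                                 n_iters=1000, step0=1.0, seed=0):
    """Illustrative stochastic proximal gradient loop with Polyak-Ruppert averaging.

    grad_sample(theta, rng): unbiased stochastic gradient of the smooth part of
    the objective (e.g., a negative marginal log-likelihood estimated by
    sampling the latent variables). lam is the L1 penalty weight (assumption
    for illustration; other penalties would swap in a different prox operator).
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    theta_bar = theta.copy()
    for t in range(1, n_iters + 1):
        step = step0 / t ** 0.75                  # slowly decaying step size
        g = grad_sample(theta, rng)               # stochastic gradient of smooth part
        theta = soft_threshold(theta - step * g,  # proximal step handles the penalty
                               step * lam)
        theta_bar += (theta - theta_bar) / t      # Polyak-Ruppert running average
    return theta_bar                              # averaged iterate as the estimate
```

The averaged iterate, rather than the last iterate, is returned because Polyak–Ruppert averaging is what stabilizes stochastic approximation schemes of this kind; the paper's quasi-Newton variant additionally rescales the gradient step, which is omitted here.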

Keywords: Polyak–Ruppert averaging; latent variable models; penalized estimator; proximal algorithm; quasi-Newton methods; stochastic approximation.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms*
  • Computer Simulation
  • Likelihood Functions
  • Models, Theoretical*
  • Psychometrics