Predictive toxicogenomics in preclinical discovery

Methods Mol Biol. 2008;460:89-112. doi: 10.1007/978-1-60327-048-9_5.

Abstract

The failure of drug candidates during clinical trials due to toxicity, especially hepatotoxicity, is an important and continuing problem in the pharmaceutical industry. This chapter explores new predictive toxicogenomics approaches to better understand the hepatotoxic potential of human drug candidates and to assess their toxicity earlier in the drug development process. The underlying data consisted of two commercial knowledgebases that employed a hybrid experimental design in which human drug-toxicity information was extracted from the literature, dichotomized, and merged with rat-based gene expression measures (primary isolated hepatocytes and whole liver). Toxicity classification rules were built using a stochastic gradient boosting machine learner, with classification error estimated using a modified bootstrap estimate of true error. Several clustering methods were also applied to sets of compounds and genes. Robust classification rules were constructed for both in vitro (hepatocytes) and in vivo (liver) data, based on a high-dose, 24-h design. There appeared to be little overlap between the two classifiers, at least in terms of their gene lists. Robust classifiers could not be fitted when earlier time points and/or low-dose data were included, indicating that experimental design is important for these systems. Our results suggest that development of a compound screening assay based on these toxicity classifiers is feasible, with classifier operating characteristics used to tune the screen for a specific implementation within preclinical testing paradigms.
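The abstract describes a gradient-boosted classifier trained on gene expression features with error assessed by a modified bootstrap. The sketch below is an illustration of that general workflow, not the chapter's actual pipeline: it uses scikit-learn's GradientBoostingClassifier on a hypothetical compound-by-gene expression matrix with dichotomized hepatotoxicity labels, and estimates true error with Efron's .632 bootstrap as one plausible "modified bootstrap" variant; the data dimensions, random features, and the specific bootstrap modification are assumptions.

```python
# Illustrative sketch only: gradient-boosted toxicity classification on a
# gene-expression matrix, with a .632 bootstrap estimate of true error.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical data: rows = compounds (high-dose, 24-h design),
# columns = hepatocyte gene-expression measures,
# y = dichotomized human hepatotoxicity label from the literature.
n_compounds, n_genes = 120, 500
X = rng.normal(size=(n_compounds, n_genes))
y = rng.integers(0, 2, size=n_compounds)

def bootstrap_632_error(X, y, n_boot=50):
    """Efron's .632 bootstrap estimate of true classification error."""
    n = len(y)
    clf = GradientBoostingClassifier()
    # Apparent (resubstitution) error on the full data set.
    apparent_err = 1.0 - clf.fit(X, y).score(X, y)
    oob_errs = []
    for _ in range(n_boot):
        boot_idx = rng.integers(0, n, size=n)           # sample with replacement
        oob_idx = np.setdiff1d(np.arange(n), boot_idx)  # out-of-bag compounds
        if oob_idx.size == 0 or len(np.unique(y[boot_idx])) < 2:
            continue
        clf.fit(X[boot_idx], y[boot_idx])
        oob_errs.append(1.0 - clf.score(X[oob_idx], y[oob_idx]))
    eps0 = float(np.mean(oob_errs))                     # out-of-bag error
    return 0.368 * apparent_err + 0.632 * eps0

print(f"Estimated true error: {bootstrap_632_error(X, y):.3f}")
```

In a screening setting, the decision threshold on the classifier's predicted probabilities could then be tuned from its operating characteristics, trading sensitivity against specificity to suit where the assay sits in the preclinical testing paradigm.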

MeSH terms

  • Drug Design*
  • Drug Evaluation, Preclinical
  • Genomics*
  • Stochastic Processes
  • Toxicology*