A multi-modal transformer for cell type-agnostic regulatory predictions

Cell Genom. 2025 Feb 12;5(2):100762. doi: 10.1016/j.xgen.2025.100762. Epub 2025 Jan 29.

Abstract

Sequence-based deep learning models have emerged as powerful tools for deciphering the cis-regulatory grammar of the human genome but cannot generalize to unobserved cellular contexts. Here, we present EpiBERT, a multi-modal transformer that learns generalizable representations of genomic sequence and cell type-specific chromatin accessibility through a masked accessibility-based pre-training objective. Following pre-training, EpiBERT can be fine-tuned for gene expression prediction, achieving accuracy comparable to the sequence-only Enformer model, while also being able to generalize to unobserved cell states. The learned representations are interpretable and useful for predicting chromatin accessibility quantitative trait loci (caQTLs), regulatory motifs, and enhancer-gene links. Our work represents a step toward improving the generalization of sequence-based deep neural networks in regulatory genomics.
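The abstract describes a masked accessibility-based pre-training objective over paired genomic sequence and cell type-specific chromatin accessibility. As an illustration only, the sketch below shows one way such an objective can be set up in PyTorch: accessibility values at randomly masked genomic bins are zeroed out, and a transformer is trained to reconstruct them from the DNA sequence plus the surrounding accessibility context. The model class, dimensions, masking rate, and MSE loss here are assumptions for illustration and are not EpiBERT's published architecture or training code.

```python
# Hypothetical sketch of a masked accessibility pre-training step, in the
# spirit of the objective described in the abstract. All names, dimensions,
# and the masking/loss scheme are illustrative assumptions, not EpiBERT's
# actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedAccessibilityModel(nn.Module):
    def __init__(self, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # Separate embeddings for the two modalities: one-hot DNA bases and
        # a scalar accessibility signal per genomic bin (assumed layout).
        self.seq_embed = nn.Linear(4, d_model)
        self.acc_embed = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.acc_head = nn.Linear(d_model, 1)  # reconstruct accessibility

    def forward(self, seq_onehot, accessibility, mask):
        # Zero out the accessibility signal at masked bins so the model must
        # infer it from sequence plus the unmasked accessibility context.
        acc_in = accessibility.masked_fill(mask.unsqueeze(-1), 0.0)
        x = self.seq_embed(seq_onehot) + self.acc_embed(acc_in)
        h = self.encoder(x)
        return self.acc_head(h)


# Toy pre-training step on random data (batch of 2 regions, 512 bins).
model = MaskedAccessibilityModel()
seq = F.one_hot(torch.randint(0, 4, (2, 512)), num_classes=4).float()
acc = torch.rand(2, 512, 1)               # normalized accessibility signal
mask = torch.rand(2, 512) < 0.15          # mask ~15% of bins

pred = model(seq, acc, mask)
loss = F.mse_loss(pred[mask], acc[mask])  # loss only on masked bins
loss.backward()
print(f"masked reconstruction loss: {loss.item():.4f}")
```

Computing the loss only at masked positions is one plausible way to force the model to relate accessibility to the underlying sequence rather than copy the input signal; whether this matches the paper's exact objective is not stated in the abstract.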

Keywords: chromatin accessibility; deep learning; gene regulation; genomics; sequence code; transformer.

MeSH terms

  • Chromatin* / genetics
  • Chromatin* / metabolism
  • Deep Learning*
  • Genome, Human
  • Genomics
  • Humans
  • Neural Networks, Computer
  • Quantitative Trait Loci*

Substances

  • Chromatin