Advancements in imaging and molecular techniques enable the collection of subcellular-scale data. Diversity in measured features, resolution, and physical scope of capture across technologies and experimental protocols poses numerous challenges to integrating data with reference coordinate systems and across scales. This paper describes a collection of technologies that we have developed for mapping data across scales and modalities, such as genes to tissues, specifically in a 3D setting. Our collection of technologies includes (i) an explicit censored-data representation for the partial matching problem of mapping whole brains to subsampled subvolumes, (ii) a multi scale-space optimization technology for generating resampling grids optimized to represent spatial geometry at fixed complexities, and (iii) mutual-information based functional feature selection. We integrate these technologies with our cross-modality mapping algorithm through the use of image-varifold measure norms to represent data universally across scales and imaging modalities. Collectively, these methods afford efficient representations of peta-scale imagery, providing the algorithms for mapping from the nanometer to the millimeter scale, which we term cross-modality image-varifold LDDMM (xIV-LDDMM).
Keywords: cross-modality mapping; image varifold; multi-scale; omics data.
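The mutual-information based feature selection mentioned in contribution (iii) can be illustrated with a minimal sketch. This is not the paper's implementation; the histogram-based estimator, bin count, and variable names below are illustrative assumptions, showing only the general idea of ranking features by their estimated mutual information with a target signal.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    # Estimate mutual information (in nats) between two 1-D arrays
    # from their joint histogram; bins=8 is an arbitrary choice.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)            # marginal of x
    py = pxy.sum(axis=0, keepdims=True)            # marginal of y
    nz = pxy > 0                                   # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Synthetic example: one feature tied to the target, one pure noise.
rng = np.random.default_rng(0)
n = 5000
target = rng.normal(size=n)
informative = target + 0.1 * rng.normal(size=n)    # strongly coupled to target
noise = rng.normal(size=n)                          # independent of target

scores = {
    "informative": mutual_information(informative, target),
    "noise": mutual_information(noise, target),
}
# A selection step would keep the highest-scoring features.
```

In a real pipeline the features would be measured channels (e.g. gene-expression levels) rather than synthetic draws, and the informative feature would receive a markedly higher score than the noise feature.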