PAM: a propagation-based model for segmenting any 3D objects across multi-modal medical images

NPJ Digit Med. 2025 Dec 2;8(1):753. doi: 10.1038/s41746-025-02087-y.

Abstract

Volumetric segmentation is a major challenge in medical imaging: current methods require extensive annotations and retraining, which limits their transferability across objects. We present PAM, a propagation-based framework that generates 3D segmentations from a minimal 2D prompt. PAM integrates a CNN-based UNet for intra-slice features with Transformer attention for inter-slice propagation, capturing structural and semantic continuity to enable robust cross-object generalization. Across 44 diverse datasets, PAM outperformed MedSAM and SegVol, improving the average Dice similarity coefficient (DSC) by 19.3%. It maintained stable performance under variations in prompts (P ≥ 0.5985) and propagation settings (P ≥ 0.6131), while achieving faster inference (P < 0.001) and reducing user interaction time by 63.6%. Gains were strongest for irregular objects, with improvements negatively correlated with object regularity (r < -0.1249). By delivering accurate 3D segmentations from minimal input, PAM lowers reliance on manual annotation and task-specific training, providing an efficient and generalizable tool for automated clinical imaging.
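To make the propagation idea concrete, here is a deliberately simplified sketch, not the paper's implementation: real intra-slice features would come from the CNN-UNet and inter-slice propagation from Transformer attention, whereas this toy reduces each slice to a hand-made feature vector and the attention mechanism to a single similarity gate between adjacent slices. All function names (`propagate`, `sigmoid`, `dot`) are hypothetical.

```python
import math

def sigmoid(x):
    # Squash a similarity score into (0, 1) to act as a propagation gate.
    return 1.0 / (1.0 + math.exp(-x))

def dot(a, b):
    # Toy stand-in for attention: similarity between two slice feature vectors.
    return sum(x * y for x, y in zip(a, b))

def propagate(slice_feats, prompt_idx):
    """Bidirectional slice-to-slice propagation of a segmentation score.

    slice_feats: one feature vector per 2D slice (a stand-in for the
    CNN-UNet intra-slice features). Starting from the slice the user
    prompted, a confidence score spreads to neighboring slices, gated
    by how similar adjacent slices are, so the score decays when the
    object's appearance changes.
    """
    n = len(slice_feats)
    scores = [0.0] * n
    scores[prompt_idx] = 1.0  # the user's 2D prompt anchors this slice
    for i in range(prompt_idx + 1, n):        # propagate forward
        gate = sigmoid(dot(slice_feats[i], slice_feats[i - 1]))
        scores[i] = scores[i - 1] * gate
    for i in range(prompt_idx - 1, -1, -1):   # propagate backward
        gate = sigmoid(dot(slice_feats[i], slice_feats[i + 1]))
        scores[i] = scores[i + 1] * gate
    return scores

# Three slices: the middle one resembles the prompted slice, the last does not.
feats = [[1.0, 0.0], [0.9, 0.1], [0.2, 0.8]]
scores = propagate(feats, prompt_idx=0)
```

Because every gate is strictly below 1, confidence decays monotonically with distance from the prompt slice, which mirrors why a single well-placed 2D prompt can seed a full 3D segmentation.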