In animation generation that combines paintings with live-action footage, the core challenge of cross-modal visual content creation lies in reconciling feature discrepancies across multimodal data while adaptively enhancing key information. This study proposes a deep learning model based on the channel attention mechanism (CAM) that systematically addresses the difficulty of fusing artistic semantic features with live-action visual information through a dual-path feature fusion framework and a dynamic weight allocation strategy. The model adopts an encoder-decoder architecture in which Residual Network-50 (ResNet-50) extracts multimodal features, channel attention modules weight feature channels by importance, and a Transformer decoder generates the animation sequences. The framework incorporates a feature alignment loss and dynamic weight ablation experiments to strengthen cross-modal feature integration. Experimental results show that the model achieves an 11.0% improvement in peak signal-to-noise ratio (PSNR) and an 8.8% improvement in structural similarity index (SSIM) on a custom test set, confirming high reconstruction accuracy at both the pixel and structural levels. The Fréchet Inception Distance (FID) decreases by 23.4%, while the multimodal fusion degree and the cross-modal feature dynamic coupling index increase by 21.9% and 42.3%, respectively, significantly improving feature distribution consistency and modal interaction efficiency. Ablation studies show that the CAM improves model performance by approximately 15%, and that the dynamic weight strategy effectively enhances robustness against parameter perturbations. This study provides both theoretical foundations and technical solutions for virtual-real fusion in digital art creation, advancing animation generation toward greater intelligence and artistry.
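The abstract only names the components, so the following is a minimal PyTorch sketch of how per-channel importance weighting and dual-path fusion with a dynamic weight could be wired together. It assumes a squeeze-and-excitation style attention block and a single learnable scalar fusion weight; the names `ChannelAttention`, `DualPathFusion`, and the `reduction` parameter are hypothetical illustrations, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global average
    pooling summarizes each channel, and a small bottleneck MLP produces
    per-channel importance weights that rescale the feature map."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.mlp(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # importance-weighted channels


class DualPathFusion(nn.Module):
    """Fuses a painting-feature path and a live-action-feature path.
    Each path is re-weighted by channel attention, then the two are
    mixed by a learnable scalar (a hypothetical stand-in for the
    paper's dynamic weight allocation strategy)."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn_paint = ChannelAttention(channels)
        self.attn_live = ChannelAttention(channels)
        self.alpha = nn.Parameter(torch.tensor(0.5))  # dynamic fusion weight

    def forward(self, f_paint: torch.Tensor, f_live: torch.Tensor) -> torch.Tensor:
        a = torch.sigmoid(self.alpha)  # keep the mixing weight in (0, 1)
        return a * self.attn_paint(f_paint) + (1 - a) * self.attn_live(f_live)


if __name__ == "__main__":
    # Toy check with ResNet-50-sized feature maps (2048 channels at stride 32).
    f_paint = torch.randn(2, 2048, 7, 7)
    f_live = torch.randn(2, 2048, 7, 7)
    fused = DualPathFusion(2048)(f_paint, f_live)
    print(fused.shape)  # torch.Size([2, 2048, 7, 7])
```

The fused features would then feed the Transformer decoder; in the paper's full model the dynamic weights are presumably input-dependent rather than a single global scalar as sketched here.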
Keywords: Animation generation; Attention mechanism; Cross-modal; Live-action footage; Painting.