The machine learning models typically applied in medical fields are so complex that it is very difficult for humans to comprehend how decisions are made, and improving their interpretability remains an ongoing challenge. Although the saliency maps conventionally used for images can visualize regions highly correlated with the discriminative output, they cannot capture the existence or direction of causal relationships. In the field of causal discovery, there have been attempts to extract causal relationships between features and visualize them as a causal graph, but no known method extracts causal relationships from images in a human-interpretable way. To address this problem, this research proposes a deep causal discovery model that visualizes the causal relationships inherent in images. The proposed model uses the L1 norm of the causal matrix as a regularization term in the loss function and attaches spatial information to the patches obtained by dividing each image, so that causal relationships between patches can be visualized in a human-interpretable causal graph. In experiments, the proposed method was applied to a mandibular reconstruction planning database to evaluate its explainability on medical images containing surgical knowledge. The proposed method generated sparse, interpretable causal graphs between spatially related regions and visualized the causal relationships between image features of an individual patient's mandible and the surgical plan produced by surgeons.

Clinical Relevance- In this research, we propose a deep causal discovery model for images and apply it to surgical planning images with the aim of visualizing surgeons' decision-making process and systematizing the diagnosis and treatment process.
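The core idea sketched in the abstract (a causal matrix between image patches, kept sparse by an L1 penalty in the loss) can be illustrated as follows. This is a minimal, hypothetical sketch, not the authors' implementation: the helper names `patchify` and `causal_loss`, the linear reconstruction model `X ≈ W X`, and the penalty weight `lam` are all assumptions made for illustration.

```python
import numpy as np

def patchify(image, patch_size):
    """Split a square image into non-overlapping, flattened patches."""
    h, w = image.shape
    patches = []
    for i in range(0, h, patch_size):
        for j in range(0, w, patch_size):
            patches.append(image[i:i + patch_size, j:j + patch_size].ravel())
    return np.stack(patches)  # shape: (num_patches, patch_size**2)

def causal_loss(X, W, lam=0.1):
    """Reconstruction loss for X ≈ W @ X plus an L1 sparsity penalty on W.

    W is a (num_patches, num_patches) causal matrix; the L1 term pushes
    most entries to zero, yielding a sparse, interpretable causal graph.
    """
    residual = X - W @ X
    return 0.5 * np.mean(residual ** 2) + lam * np.abs(W).sum()

# Toy example: a 8x8 image split into four 4x4 patches.
rng = np.random.default_rng(0)
image = rng.random((8, 8))
X = patchify(image, patch_size=4)        # (4 patches, 16 pixels each)
W = np.zeros((X.shape[0], X.shape[0]))   # empty causal graph
print(causal_loss(X, W, lam=0.1))        # with W = 0, loss is 0.5 * mean(X**2)
```

In the paper's setting, `W` would be learned jointly with a deep feature extractor and the nonzero entries of `W`, together with the patches' spatial positions, would be drawn as edges of the causal graph over the image.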