Automated phenotyping is essential for the creation of large, highly standardized datasets from anatomical imaging data. Such datasets can support large-scale studies of complex traits or clinical studies related to precision medicine or clinical trials. We have developed a method that generates three-dimensional landmark data that meet the requirements of standard geometric morphometric analyses. The method is robust and can be implemented without high-performance computing resources. We validated the method using both direct comparison to manual landmarking on the same individuals and analyses of variation and outlier patterns in a large dataset of automated and manual landmark data. Direct comparison of manual and automated landmarks reveals that automated landmark data are less variable, but more highly integrated and reproducible. Automated data produce covariation structure that closely resembles that of manual landmarks. We further find that while our method does produce some landmarking errors, these tend to be readily detectable and can be corrected by adjusting parameters used in the registration and control-point steps. Data generated using the method described here have been successfully used to study the genomic architecture of facial shape in two genome-wide association studies.
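The direct comparison described above rests on superimposing corresponding landmark configurations, the standard preprocessing step in geometric morphometrics. As an illustration only, the following sketch computes an ordinary Procrustes distance between a "manual" and an "automated" configuration; the coordinates, noise level, and function name are hypothetical, not taken from the study's data or pipeline, and reflections are not handled.

```python
# Hedged sketch: ordinary Procrustes superimposition of two landmark
# configurations (translation, scale, rotation removed), then the RMS
# deviation between corresponding landmarks. Illustrative data only.
import numpy as np

def procrustes_distance(X, Y):
    """Align Y to X and return the root-mean-square landmark deviation."""
    # Center both configurations at the origin.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Scale each to unit centroid size.
    Xc = Xc / np.linalg.norm(Xc)
    Yc = Yc / np.linalg.norm(Yc)
    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(Xc.T @ Yc)
    Y_aligned = Yc @ (U @ Vt).T
    return np.sqrt(np.mean(np.sum((Xc - Y_aligned) ** 2, axis=1)))

# Toy example: a rotated, slightly noisy "automated" copy of ten
# three-dimensional "manual" landmarks.
rng = np.random.default_rng(0)
manual = rng.normal(size=(10, 3))
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
automated = manual @ Rz.T + rng.normal(scale=0.01, size=(10, 3))

print(procrustes_distance(manual, automated))
```

Because the distance is invariant to translation, scale, and rotation, it isolates genuine landmarking disagreement; summed over many individuals, this is the kind of quantity a variability comparison between manual and automated data can be built on.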
Keywords: automated landmarking; automated phenotyping; face; facial imaging; human; morphometrics; phenomics; phenotyping; three-dimensional landmarks.
© 2017 Anatomical Society.