Humans can learn to organize domains of many kinds into categories, including real-world domains such as kinsfolk and synthetic domains such as sets of geometric figures that vary along several dimensions. Psychologists have studied many individual domains in detail, but there have been few attempts to characterize or explore the full space of possibilities. This article provides a formal characterization of that space: it takes objects, features, and relations as primitives and specifies conceptual domains by combining these primitives in different ways. Explaining how humans learn concepts within all of these domains is a challenge for computational models, but I argue that the challenge can be met by models that rely on a compositional representation language such as predicate logic. The article presents such a model and demonstrates that it accounts well for human concept learning across 11 domains.
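To give a brief sense of what such a compositional language affords (a hypothetical illustration, not an example drawn from the article's studies), predicate logic allows complex concepts to be built from primitive relations:

    grandmother(x, y) ↔ ∃z ( mother(x, z) ∧ parent(z, y) )

A learner equipped with a language of this kind can, in principle, assemble candidate concepts in any domain that supplies the underlying objects, features, and relations.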