Artificial and biological agents cannot learn from completely random and unstructured data. The structure of data is encoded in the distance or similarity relationships between data points. In a neural network, the neuronal activity within a layer forms a representation reflecting the transformation that the layer implements on its inputs. To exploit the structure in the data faithfully, such representations should reflect the input distances and thus be continuous and isometric. Supporting this statement, findings in neuroscience suggest that generalization and robustness are tied to neural representations being continuously differentiable. Furthermore, representations of objects can be hierarchical. Together, these two conditions imply that neural networks should both preserve the distances between inputs and be able to apply cuts at different resolutions, corresponding to different levels of a hierarchy. Under cross-entropy classification, the metric and structural properties of network representations are typically broken, both between and within classes. To restore and study these properties, we train neural networks to perform classification while simultaneously maintaining the metric structure within each class, potentially at different levels of a hierarchy, yielding continuous and isometric within-class representations. We show that such representations support accurate and robust inference about the world. We propose a network architecture that facilitates hierarchical manipulation of internal neural representations. We verify that our isometric regularization term improves robustness to adversarial attacks on MNIST and CIFAR10. Finally, using toy datasets, we show that the learned map is isometric everywhere except near decision boundaries.
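The abstract does not spell out the form of the within-class isometry penalty; one plausible minimal sketch, assuming Euclidean distances in both input and representation space and a penalty restricted to same-class pairs (all function names here are illustrative, not the paper's actual implementation):

```python
import numpy as np

def pairwise_dists(X):
    # Euclidean distance matrix between the rows of X.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.sqrt(np.maximum(d2, 0.0))

def isometry_loss(inputs, reps, labels):
    # Penalize the squared mismatch between input distances and
    # representation distances, but only over pairs of points that
    # share a class label (off-diagonal), so that between-class
    # geometry remains free for classification.
    d_in = pairwise_dists(inputs)
    d_out = pairwise_dists(reps)
    same = labels[:, None] == labels[None, :]
    mask = same & ~np.eye(len(labels), dtype=bool)
    if not mask.any():
        return 0.0
    return float(np.mean((d_out[mask] - d_in[mask]) ** 2))
```

In training, a term like this would be added to the cross-entropy loss with some weight; a perfectly isometric within-class map drives it to zero, while a map that stretches within-class distances (e.g. doubling them) incurs a positive penalty.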
© 2025. The Author(s).