Studies of the neural mechanisms of learning and memory have focused on large changes at identified synapses. However, memory in reflexes mediated by distributed processing could instead involve widely distributed engrams characterized by small changes at every synapse in the network. To investigate this possibility, we used a neural network optimization algorithm to construct distributed engrams for nonassociative conditioning in a model of the local bending reflex of the medicinal leech (Hirudo medicinalis). The model comprised 4 sensory neurons, 10 to 40 interneurons, 8 motor neurons, and up to 480 connections. Synaptic connections in the model were first optimized to reproduce the amplitude and time course of motor neuron synaptic potentials recorded during local bending. This network, which represented the naive state before conditioning, was then reoptimized to the habituated or sensitized state. Following reoptimization, the memory for nonassociative learning was encoded by small changes dispersed across the entire network, and each change made only a small contribution to the learning. Moreover, because the changes were small, an experimental resolution of a few tenths of a millivolt (3-5% of an average synaptic potential) would be required to detect the synaptic changes that together account for half of the nonassociative learning. These results show how difficult distributed engrams can be to detect and provide a likely lower bound on the detectability of nonassociative learning in this and related networks.
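The optimize-then-reoptimize procedure described above can be illustrated with a minimal sketch. The code below is not the authors' algorithm (which is not specified here); it is a toy stand-in that uses plain gradient descent on a small feedforward network with illustrative layer sizes matching the model (4 sensory, 10 interneurons, 8 motor neurons), fits it to arbitrary "naive" motor responses, then refits it from the naive weights to "habituated" responses (amplitudes scaled down), and finally reports how the resulting weight changes are spread across every connection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy architecture loosely inspired by the local-bending model:
# 4 sensory -> 10 interneurons -> 8 motor neurons (sizes illustrative).
N_S, N_I, N_M = 4, 10, 8

def optimize(W1, W2, X, Y, lr=0.05, steps=2000):
    """Gradient descent on mean squared error between motor output
    W2 @ tanh(W1 @ X) and target potentials Y (a hypothetical stand-in
    for the paper's optimization algorithm)."""
    W1, W2 = W1.copy(), W2.copy()
    n = X.shape[1]
    for _ in range(steps):
        H = np.tanh(W1 @ X)            # interneuron activity, (N_I, P)
        err = W2 @ H - Y               # motor output error, (N_M, P)
        gW2 = err @ H.T / n
        gW1 = ((1 - H**2) * (W2.T @ err)) @ X.T / n
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

# Arbitrary sensory patterns and target motor potentials (toy units).
P = 6
X = rng.normal(size=(N_S, P))
Y_naive = rng.normal(size=(N_M, P))

W1_0 = rng.normal(scale=0.1, size=(N_I, N_S))
W2_0 = rng.normal(scale=0.1, size=(N_M, N_I))

# Step 1: optimize to the "naive" (pre-conditioning) responses.
W1_n, W2_n = optimize(W1_0, W2_0, X, Y_naive)

# Step 2: reoptimize from the naive weights to a "habituated" state,
# modeled here as motor responses reduced by 30%.
W1_h, W2_h = optimize(W1_n, W2_n, X, 0.7 * Y_naive)

# The "engram": per-connection weight changes between the two states.
dW = np.concatenate([(W1_h - W1_n).ravel(), (W2_h - W2_n).ravel()])
print("connections in model:", dW.size)
print("fraction of connections that changed:", np.mean(np.abs(dW) > 1e-6))
print("largest single change vs. largest weight:",
      np.abs(dW).max(), np.abs(np.concatenate([W1_h.ravel(), W2_h.ravel()])).max())
```

Under these assumptions the learning is absorbed by small adjustments to essentially every connection rather than a large change at any one synapse, which is the qualitative point of the distributed-engram result; the specific numbers depend entirely on the toy targets and learning rate chosen here.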