The paper presents a computational model of language in which linguistic abilities evolve in organisms interacting with an environment. Each individual's behavior is controlled by a neural network, and we study how learning to process different classes of words affects the network's internal functional organization. Agents are selected for reproduction according to their ability to manipulate objects and to understand nouns (objects' names) and verbs (manipulation tasks). The weights of the agents' neural networks are evolved using a genetic algorithm. Synthetic brain imaging techniques are then used to examine the functional organization of the networks. Results show that nouns produce more integrated neural activity in the sensory-processing hidden layer, whereas verbs produce more integrated synaptic activity in the layer where sensory information is combined with proprioceptive input. These findings are qualitatively consistent with human brain imaging data indicating that nouns preferentially activate posterior areas of the brain associated with sensory and associative processing, whereas verbs preferentially activate anterior motor areas.
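To make the evolutionary setup concrete, the sketch below shows weights of a small feedforward network evolved with a simple genetic algorithm (elitist selection plus Gaussian mutation). All of it is illustrative: the network sizes, the mutation parameters, and the toy fitness function (negative mean squared error on an arbitrary task) are assumptions standing in for the paper's object-manipulation and word-understanding tasks, not details taken from it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only (not from the paper): one hidden layer,
# encoded as a single flat weight vector per individual.
N_IN, N_HID, N_OUT = 4, 6, 2
N_WEIGHTS = N_IN * N_HID + N_HID * N_OUT

def forward(weights, x):
    """Run the network encoded by a flat weight vector on input x."""
    w1 = weights[: N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = weights[N_IN * N_HID :].reshape(N_HID, N_OUT)
    hidden = np.tanh(x @ w1)
    return np.tanh(hidden @ w2)

def fitness(weights, inputs, targets):
    """Toy fitness: negative mean squared error on a fixed task
    (a stand-in for the paper's manipulation/word tasks)."""
    outputs = forward(weights, inputs)
    return -np.mean((outputs - targets) ** 2)

def evolve(inputs, targets, pop_size=40, generations=100,
           mut_sigma=0.1, elite=4):
    """Generational GA: rank by fitness, keep an elite unchanged,
    refill the population with mutated copies of the elite."""
    pop = rng.normal(0.0, 1.0, size=(pop_size, N_WEIGHTS))
    for _ in range(generations):
        scores = np.array([fitness(w, inputs, targets) for w in pop])
        order = np.argsort(scores)[::-1]            # best first
        parents = pop[order[:elite]]
        children = parents[rng.integers(0, elite, pop_size - elite)]
        children = children + rng.normal(0.0, mut_sigma, children.shape)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(w, inputs, targets) for w in pop])
    return pop[np.argmax(scores)], scores.max()

# Arbitrary learnable toy task: map 4-valued patterns to two targets.
inputs = rng.choice([-1.0, 1.0], size=(8, N_IN))
targets = np.tanh(inputs[:, :2] * 0.5)
best, best_fit = evolve(inputs, targets)
print(f"best fitness (negative MSE): {best_fit:.4f}")
```

In the paper's actual setup the fitness would instead score how well an agent manipulates objects and responds to nouns and verbs, and the evolved networks would then be probed with synthetic brain imaging; this sketch only shows the weight-evolution mechanism.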