Visual working memory (VWM) refers to the ability to encode, store, and retrieve visual information. The two prevailing theories of VWM assume that information is stored either in discrete slots or within a shared pool of resources. However, the neural mechanisms that would underlie such theories are not yet well understood. To address this gap, we provide a computationally realized neural account that uses a pool of shared neurons to store information about one or more distinct stimuli. The binding pool model is a neural network that is essentially a hybrid of the slot and resource theories. It describes how information can be stored in and retrieved from a pool of shared resources using a type/token architecture (Bowman & Wyble in Psychological Review 114(1), 38-70, 2007; Kanwisher in Cognition 27, 117-143, 1987; Mozer in Journal of Experimental Psychology: Human Perception and Performance 15(2), 287-303, 1989). The model can store multiple distinct objects, each containing binding links to one or more features. The binding links are stored in a pool of shared resources and thus produce mutual interference as memory load increases. Given a cue, the model retrieves a specific object and then reconstructs the other features bound to that object, together with a confidence metric. The model can simulate data from continuous report and change detection paradigms and generates testable predictions about the interaction of report accuracy, confidence, and stimulus similarity. Testing such predictions will help to identify the boundaries of shared resource theories, thereby providing insight into the roles of ensembles and context in explaining our ability to remember visual information.
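The storage and retrieval scheme summarized above can be illustrated with a minimal sketch. This is not the published implementation: it assumes, for illustration only, a discrete feature space, random ±1 feature patterns, random binary token gates over the pool, and a crude normalized-match confidence score. It shows the two qualitative properties the abstract describes: bindings superimposed in one shared pool, and cue-driven retrieval of a bound feature with a confidence value.

```python
import numpy as np

rng = np.random.default_rng(0)

N_POOL = 2000   # shared binding-pool units (assumed size)
N_COLORS = 8    # discrete feature values (a simplification)
GATE_P = 0.3    # fraction of the pool each token connects to (assumed)

# Hypothetical encoding: a random +/-1 pattern for each feature value.
color_pats = rng.choice([-1.0, 1.0], size=(N_COLORS, N_POOL))

def new_token():
    """A token (object) gates a random subset of the shared pool."""
    return (rng.random(N_POOL) < GATE_P).astype(float)

def store(pool, token, color):
    """Superimpose the token-gated feature pattern onto the shared pool.

    Because every binding lands in the same pool, stored items
    interfere with one another as memory load increases.
    """
    return pool + token * color_pats[color]

def retrieve(pool, token):
    """Cue with a token; decode the best-matching feature plus a confidence."""
    trace = pool * token                 # read out this token's portion of the pool
    scores = color_pats @ trace          # match against all candidate feature patterns
    best = int(np.argmax(scores))
    conf = float(scores[best] / np.abs(scores).sum())  # crude confidence metric
    return best, conf

# Store three objects (tokens), each bound to one color feature.
tokens = [new_token() for _ in range(3)]
colors = [2, 5, 7]
pool = np.zeros(N_POOL)
for t, c in zip(tokens, colors):
    pool = store(pool, t, c)

# Cue each token and reconstruct its bound feature.
results = [retrieve(pool, t) for t in tokens]
```

Despite interference from the other two items sharing the pool, each cue recovers its own bound feature here, while the confidence metric degrades as load grows; a real-valued feature space and continuous-report decoding would follow the same pattern.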