Neuronal representations of external events are often distributed across large populations of cells. We study the effect of correlated noise on the accuracy of these neuronal population codes. Our main question is whether the inherent error in the population code can be suppressed by increasing the size of the population N in the presence of correlated noise. We address this issue using a model of a population of neurons that are broadly tuned to an angular variable in two dimensions. The fluctuations in the neuronal activities are modeled as Gaussian noise with pairwise correlations that decay exponentially with the difference between the preferred angles of the correlated cells. We assume that the system is broadly tuned, meaning that both the correlation length and the width of the tuning curves of the mean responses span a substantial fraction of the entire system length. The performance of the system is measured by the Fisher information (FI), which bounds its estimation error. By calculating the FI in the limit of large N, we show that positive correlations decrease the estimation capability of the network relative to the uncorrelated population. The information capacity saturates to a finite value as the number of cells in the population grows. In contrast, negative correlations substantially increase the information capacity of the neuronal population. We supplement these results by analyzing the effect of correlations on the mutual information of the system. Our analysis provides an estimate of the effective number of statistically independent degrees of freedom, denoted N_eff, that a large correlated system can have. According to our theory, N_eff remains finite in the limit of large N.
Estimating the parameters of the correlations and tuning curves from experimental data in some cortical areas that code for angles, we predict that the number of effective degrees of freedom embedded in localized populations in these areas is less than or of the order of 10^2.