Large research projects offer significant advantages for research, but they pose special data quality problems. Data gathered in such projects may contain a greater absolute number of mistakes because of the number of people collecting data, the complexity of data processing, and the collation required. We set out to characterize the types and frequencies of errors entering the data in a multicenter field trial, and to assess the effects those errors would have had if they had gone undetected. We used extensive error trapping while processing 688 forms from seven sites in the field trial, taking snapshots of the dataset at several points in the process, both before and after checking and correction. We found that 2.4% of the received data were mistaken. Had they passed through, these errors would have affected the data's reliability, decisions based on the study, and possibly the choice of analysis. Almost all of the mistakes were made at the time of measurement and may be related to the raters' perceived importance of the variables. We found that communication and education effectively reduced both the number of mistakes and their impact on the study over the course of the field trial. Although an estimate of the overall error rate is important, the raw number of mistakes is only imperfectly related to the errors' effects on a study's results. Our results also suggest that statistical models that treat mistakes as simple independent events can be misleading.
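The closing point about independence can be illustrated with a small simulation. The sketch below is purely hypothetical and not from the study: it assumes the observed 2.4% error rate, the 688 forms, and the seven sites, and invents per-site error rates to model errors that cluster by site (for example, because one rater misunderstands a variable). It compares the spread of total error counts under an independent-errors model against the clustered model with the same approximate marginal rate; the clustered model shows much larger variance, which is why a simple independent-errors model can understate uncertainty.

```python
import random

random.seed(42)

N_FORMS = 688       # forms processed in the trial
N_SITES = 7         # sites in the trial
P_ERROR = 0.024     # overall error rate observed in the trial
N_REPS = 2000       # Monte Carlo replicates (assumed, for illustration)

def independent_errors():
    # Model 1: every form errs independently with probability P_ERROR.
    return sum(random.random() < P_ERROR for _ in range(N_FORMS))

def clustered_errors():
    # Model 2 (hypothetical): each site has its own error rate drawn
    # around P_ERROR, so errors within a site are correlated.
    forms_per_site = N_FORMS // N_SITES
    total = 0
    for _ in range(N_SITES):
        site_rate = max(0.0, random.gauss(P_ERROR, 0.02))
        total += sum(random.random() < site_rate for _ in range(forms_per_site))
    return total

def mean_var(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return m, v

ind = [independent_errors() for _ in range(N_REPS)]
clu = [clustered_errors() for _ in range(N_REPS)]

m_i, v_i = mean_var(ind)
m_c, v_c = mean_var(clu)
print(f"independent model: mean errors {m_i:.1f}, variance {v_i:.1f}")
print(f"clustered model:   mean errors {m_c:.1f}, variance {v_c:.1f}")
```

Both models produce roughly the same expected number of errors, but the clustered model's variance is substantially larger, so confidence intervals computed under the independence assumption would be too narrow.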