Abnormal placental grading is associated with poor pregnancy outcome. The aim of this study was to measure intra- and interobserver variability in placental grading. Five expert sonographers independently graded 90 images on two occasions, with the viewings separated by 1 week. A number of measures were employed to standardise assessment and minimise potential for variation: prior agreement was established between observers on the classifications for placental grading; a controlled viewing laboratory was used for all viewings; ambient lighting was optimal; and monitors were calibrated to the GSDF standard. Kappa (κ) analysis was used to measure observer agreement. Substantial variation between individual observers' scores was observed. A mean κ-value of 0.34 (range 0.19 to 0.50) indicated fair interobserver agreement over the two occasions, and only nine of the 90 images were graded identically by all five observers. Intraobserver agreement was moderate, with a mean κ-value of 0.52 and individual comparisons ranging from 0.45 to 0.66. This study demonstrates that, despite standardised viewing conditions, Grannum grading of the placenta is not a reliable technique, even among expert observers. New methods of assessing placental health are therefore needed, and work is ongoing to develop 2D and 3D software-based alternatives.
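For readers unfamiliar with the κ statistic, the sketch below shows how Cohen's kappa is computed for a pair of raters: observed agreement corrected for the agreement expected by chance from each rater's marginal grade frequencies. This is an illustrative example only; the grade sequences are hypothetical, and the study's own analysis (e.g. pairwise versus multi-rater kappa, or weighting of the Grannum scale) is not specified here.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length sequences of nominal labels."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of images given the same grade.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the raters' marginal proportions,
    # summed over all grades either rater used.
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum((ca[g] / n) * (cb[g] / n) for g in set(ca) | set(cb))
    return (po - pe) / (1 - pe)

# Hypothetical Grannum grades (0-3) assigned by two raters to 8 images.
a = [0, 1, 2, 3, 0, 1, 2, 3]
b = [0, 1, 2, 3, 3, 2, 1, 0]
print(round(cohens_kappa(a, b), 3))  # agreement on 4 of 8 images
```

By the commonly used Landis and Koch benchmarks, values of 0.21 to 0.40 are described as "fair" and 0.41 to 0.60 as "moderate", which is the interpretation applied to the mean interobserver (0.34) and intraobserver (0.52) values above.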