The objective structured clinical examination (OSCE) now has an established place in the assessment of the medical undergraduate. While much has been written about the reliability of the OSCE, empirical work on determining the passing score that represents competence on the OSCE is rarely encountered. If the OSCE is to play its role in the 'high stakes' testing of clinical competence, it is important that this passing score be set reliably and defensibly. This article illustrates how a two-session modified Angoff standard-setting procedure was used to set the passing score on a 14-station Obstetrics and Gynaecology OSCE used to assess final year students at The Queen's University of Belfast. The Angoff methodology harnesses the professional judgement of expert judges to establish defensible standards. Four university teachers, five non-academic consultants and six junior clinical staff took part in a two-session Angoff standard-setting procedure. In the first session, the judges (individually and in silence) used their professional judgement to estimate the score which a minimally competent final year obstetrics and gynaecology student should achieve on each tested element of the OSCE. In the second session they revised their session 1 judgements in the light of the OSCE scores of real students and the opportunity for structured discussion. The passing score for the OSCE is reported together with the statistical measures that support its reliability.
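The arithmetic underlying an Angoff procedure of the kind described can be sketched briefly. In this illustrative sketch (the judge estimates and station count are invented, not taken from the study), each judge supplies one estimate per tested element of the score a minimally competent student should achieve, and the examination passing score is taken as the mean of the judges' summed estimates:

```python
# Illustrative sketch of an Angoff-style passing-score calculation.
# The numbers below are hypothetical; they do not reproduce the study's data.

def angoff_passing_score(estimates):
    """estimates: one list per judge, holding that judge's estimated mark
    for a minimally competent candidate on each station/element.
    Returns the passing score: the mean of the judges' summed estimates."""
    totals = [sum(judge) for judge in estimates]  # each judge's implied total
    return sum(totals) / len(totals)              # average across judges

# Three hypothetical judges, four hypothetical stations marked out of 10:
judges = [
    [6.0, 5.5, 7.0, 6.0],
    [5.0, 6.0, 6.5, 5.5],
    [6.5, 5.0, 7.5, 6.0],
]
print(round(angoff_passing_score(judges), 2))  # prints 24.17
```

In a two-session design such as the one described, the same calculation would simply be repeated on the judges' revised session 2 estimates, and the spread of the per-judge totals gives a rough check on inter-judge agreement.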