The cut point and detection limit of any immunogenicity assay are two of the most important quantities that define the adequacy of the assay for detecting anti-drug antibodies against therapeutic proteins. To date in the immunogenicity testing literature, only the type I (alpha, false positive) error rate of the assay has been considered in establishing cut points. The "sensitivity" of an immunogenicity assay is usually reported as the concentration of a monoclonal or polyclonal anti-drug antibody standard corresponding to the signal at the cut point. We propose that a more traditional and rigorous analytical chemistry definition of detection capability be adopted, in which both the type I and the type II (beta, false negative) error rates are considered. Specifically, the Hubaux-Vos technique of calculating cut points and limits of detection from prediction intervals on calibration curves is recommended as a statistically rigorous approach. The utility of receiver operating characteristic curves for managing the type I and II error rates of an immunogenicity assay is also presented. In addition, we illustrate how a soluble receptor, sMUC18, for the therapeutic mAb ABX-MA1 can produce false positives by Biacore methodology. This result suggests that immunogenicity confirmatory experiments must be carefully designed, preferably with smaller type I and II error rates than in the primary screening assay, provided an acceptable limit of detection can be maintained.
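To make the recommended approach concrete, the following is a minimal illustrative sketch (not taken from the paper) of the Hubaux-Vos calculation: the cut point is taken as the upper one-sided prediction limit of a linear calibration curve at zero analyte concentration (controlling alpha), and the detection limit as the lowest concentration whose lower one-sided prediction limit still exceeds that cut point (controlling beta). The calibration data, function name, and default error rates are assumptions for demonstration only.

    # Hypothetical illustration of the Hubaux-Vos prediction-interval approach.
    import numpy as np
    from scipy import stats, optimize

    def hubaux_vos(conc, signal, alpha=0.05, beta=0.05):
        conc, signal = np.asarray(conc, float), np.asarray(signal, float)
        n = len(conc)
        xbar, ybar = conc.mean(), signal.mean()
        sxx = np.sum((conc - xbar) ** 2)
        slope = np.sum((conc - xbar) * (signal - ybar)) / sxx
        intercept = ybar - slope * xbar
        resid = signal - (intercept + slope * conc)
        s = np.sqrt(np.sum(resid ** 2) / (n - 2))   # residual SD, n - 2 df

        def pred_se(x0):
            # Standard error of a single predicted response at concentration x0
            return s * np.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)

        t_a = stats.t.ppf(1 - alpha, n - 2)
        t_b = stats.t.ppf(1 - beta, n - 2)

        # Cut point: upper one-sided prediction limit of the response at zero dose
        y_c = intercept + t_a * pred_se(0.0)

        # Detection limit: lowest concentration whose lower one-sided prediction
        # limit still exceeds the cut point (false-negative rate held at beta)
        def gap(x):
            return (intercept + slope * x - t_b * pred_se(x)) - y_c

        x_d = optimize.brentq(gap, 0.0, conc.max())
        return y_c, x_d

    # Illustrative calibration data (arbitrary units, for demonstration only)
    conc = np.array([0, 0, 5, 5, 10, 10, 25, 25, 50, 50], float)
    signal = np.array([1.0, 1.2, 2.1, 2.4, 3.3, 3.1, 6.0, 6.4, 11.2, 10.8])
    cut_point, lod = hubaux_vos(conc, signal)
    print(f"cut point = {cut_point:.2f}, detection limit = {lod:.1f}")

Under this sketch, tightening alpha or beta raises the cut point or pushes the detection limit upward, which is the trade-off the abstract notes for confirmatory experiments: smaller error rates are preferable only if an acceptable limit of detection can still be maintained.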