The estimation of HIV incidence from cross-sectional surveys using tests for recent infection has attracted much interest. It is increasingly recognized that the lack of high-performance tests for recent infection is hindering the implementation of this surveillance approach. With growing funding opportunities, test developers are currently trying to fill this gap. However, there is a lack of consensus and clear guidance for developers on the evaluation and optimization of candidate tests. A fundamental shift from conventional thinking about test performance is needed: away from metrics relevant in typical public health settings, where the detection of a condition in individuals is of primary interest (sensitivity, specificity, and predictive values), and toward metrics that are appropriate when estimating a population-level parameter such as incidence (accuracy and precision). The inappropriate use of individual-level diagnostic performance measures could lead to spurious assessments and suboptimal designs of tests for incidence estimation. In some contexts, such as population-level application to HIV incidence, bias of estimates is essentially negligible, and all that remains is the maximization of precision. Maximizing the precision of incidence estimates therefore provides a completely general criterion by which test developers can assess and optimize test designs. When test dynamics are summarized into the properties relevant for incidence estimation, high-precision estimates are obtained when (1) the mean duration of recent infection is large and (2) the false-recent rate is small. The optimal trade-off between these two test properties yields the highest precision, and therefore the most epidemiologically useful incidence estimates.
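The effect of the two test properties on precision can be illustrated with a small Monte Carlo sketch. It assumes the adjusted recency-based estimator commonly used in this literature, lambda = (P_R - beta*P_pos) / (P_neg * (Omega - beta*T)), where Omega is the mean duration of recent infection (MDRI), beta the false-recent rate (FRR), and T the post-infection time cut-off; the estimator form, the function name, and all parameter values below are illustrative assumptions, not taken from this paper.

```python
import numpy as np

def cv_of_incidence_estimate(mdri, frr, t_cut=2.0, n=100_000,
                             incidence=0.02, prevalence=0.15,
                             reps=2000, seed=0):
    """Monte Carlo coefficient of variation (relative precision) of a
    recency-based incidence estimate from a simulated cross-sectional
    survey of n individuals.  Times are in years; incidence is per
    person-year.  Estimator form and parameter values are assumptions.
    """
    rng = np.random.default_rng(seed)
    p_neg = 1.0 - prevalence
    # Probability a surveyed individual tests HIV-positive *and* "recent":
    # truly recent infections (incidence flowing through the MDRI window)
    # plus long-infected (> t_cut) individuals misclassified at rate frr.
    p_recent = (incidence * p_neg * mdri
                + frr * (prevalence - incidence * p_neg * t_cut))
    n_pos = rng.binomial(n, prevalence, size=reps)          # HIV-positive counts
    n_rec = rng.binomial(n_pos, p_recent / prevalence)      # "recent" among positives
    p_hat_pos = n_pos / n
    p_hat_rec = n_rec / n
    # Adjusted estimator: lambda = (P_R - beta*P_pos) / (P_neg*(Omega - beta*T))
    lam = (p_hat_rec - frr * p_hat_pos) / ((1 - p_hat_pos)
                                           * (mdri - frr * t_cut))
    return lam.std() / lam.mean()

# A longer MDRI and a lower FRR both tighten the incidence estimate:
cv_base = cv_of_incidence_estimate(mdri=0.5, frr=0.0)
cv_short = cv_of_incidence_estimate(mdri=0.25, frr=0.0)   # shorter MDRI
cv_leaky = cv_of_incidence_estimate(mdri=0.5, frr=0.02)   # higher FRR
print(f"CV: base {cv_base:.3f}, short MDRI {cv_short:.3f}, "
      f"high FRR {cv_leaky:.3f}")
```

Under these assumed survey parameters, halving the MDRI or raising the FRR from 0 to 2% visibly inflates the coefficient of variation of the incidence estimate, which is the precision trade-off the abstract describes.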