Estimating the genetic variance available for traits informs us about a population's ability to evolve in response to novel selective challenges. In selfing species, theory predicts a loss of genetic diversity that could lead to an evolutionary dead-end, but empirical support remains scarce. Genetic variability in a trait is estimated by correlating phenotypic resemblance with the proportion of the genome that two relatives share identical by descent ('realized relatedness'). The latter is traditionally predicted from pedigrees (Φ_A: its expected value) but can also be estimated from molecular markers (as the average number of alleles shared). Nevertheless, evolutionary biologists, unlike animal breeders, remain cautious about using marker-based relatedness coefficients to study complex phenotypic traits in populations. In this paper, we review published results comparing five different pedigree-free methods and use simulations to test individual-based models (hereafter called animal models) that use marker-based relatedness coefficients, with a special focus on the influence of mating systems. Our literature review confirms that Ritland's regression method is unreliable, but suggests that animal models with marker-based estimates of relatedness, as well as genomic selection, are promising and that more testing is required. Our simulations show that substituting molecular markers for pedigrees in animal models substantially degrades heritability estimation in outcrossing populations, unless a very large number of loci is available. In selfing populations the results are less biased. More generally, populations with high identity disequilibrium (consanguineous or bottlenecked populations) may be well suited to marker-based animal models, but are also more likely to deviate from the standard assumptions of quantitative genetics models (e.g., non-additive variance).
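To make the core idea concrete, here is a minimal sketch (not the paper's own method) of how marker-based relatedness can replace a pedigree when estimating heritability. It builds a VanRaden-style genomic relationship matrix from simulated biallelic genotypes and then regresses pairwise phenotypic cross-products on the off-diagonal relatedness values, a Haseman–Elston-style regression. All sample sizes, locus counts, and parameter values are illustrative assumptions.

```python
# Illustrative sketch: marker-based relatedness and a Haseman-Elston-style
# regression of phenotypic similarity on realized relatedness.
# All numbers (n, m, h2) are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)

n, m = 200, 500                          # individuals, biallelic loci
p = rng.uniform(0.1, 0.9, m)             # allele frequencies per locus
G = rng.binomial(2, p, size=(n, m)).astype(float)  # genotypes coded 0/1/2

# Marker-based relatedness: VanRaden-style genomic relationship matrix
Z = G - 2 * p                            # center genotypes by 2p
A = Z @ Z.T / (2 * np.sum(p * (1 - p)))

# Simulate a purely additive trait with heritability h2
h2 = 0.5
g = Z @ rng.normal(0, 1, m)              # additive genetic values
g = (g - g.mean()) / g.std()
y = np.sqrt(h2) * g + np.sqrt(1 - h2) * rng.normal(0, 1, n)

# Regress cross-products of standardized phenotypes on off-diagonal
# relatedness; the slope is a crude estimate of heritability
ys = (y - y.mean()) / y.std()
iu = np.triu_indices(n, k=1)
cp = np.outer(ys, ys)[iu]                # pairwise phenotypic similarity
rel = A[iu]                              # pairwise marker relatedness
slope = np.cov(rel, cp)[0, 1] / np.var(rel, ddof=1)
print(round(slope, 2))
```

A restricted-maximum-likelihood animal model would fit the same relationship matrix A as the covariance structure of random genetic effects rather than using this moment-based regression; the sketch only illustrates why relatedness estimated from few loci is noisy, which is one source of the bias discussed above.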