Quantitative MR imaging with self-supervised deep learning promises fast and robust parameter estimation without the need for training labels. However, previous studies have reported significant bias in self-supervised parameter estimates as the signal-to-noise ratio (SNR) decreases. A possible source of this bias is the choice of the mean squared error (MSE) loss function for network training, which is incompatible with the distribution of MR magnitude signals. To address this, we introduce the Rician likelihood loss for self-supervised learning, which explicitly accounts for the distribution of MR magnitude signals during training. We develop a stable and accurate numerical approximation of the negative log Rician (NLR) likelihood loss and compare its performance against the MSE loss using the intravoxel incoherent motion (IVIM) model as an exemplar. Parameter estimation performance was evaluated in simulated and real data in terms of accuracy, precision and overall error, by quantifying the bias, standard deviation and root mean squared error of network predictions against ground truth (or gold standard) values over a range of SNRs. Results show that self-supervised networks trained with the NLR loss estimate the IVIM diffusion coefficient with increased accuracy (reduced bias) at low SNR, at the cost of reduced precision. As SNR increases, the performance of the NLR and MSE losses converges, yielding estimates with higher accuracy, higher precision and lower total error. The NLR loss has potential for broad application in quantitative MR imaging by enabling more accurate parameter estimation from noisy data. It is available as a Python package: https://pypi.org/project/RicianLoss.
Keywords: Rician; deep learning; diffusion MRI; intravoxel incoherent motion; likelihood; mean squared error; quantitative MRI; self-supervised.
© 2025 John Wiley & Sons Ltd.
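To illustrate the idea behind the NLR loss, the following is a minimal NumPy/SciPy sketch of a negative log Rician likelihood, not the released RicianLoss package API; the function name and signature are hypothetical. It assumes a known noise level sigma and obtains a numerically stable log I0 from SciPy's exponentially scaled Bessel function i0e, using log I0(x) = log i0e(x) + x for x ≥ 0:

```python
import numpy as np
from scipy.special import i0e  # exponentially scaled modified Bessel function I0


def neg_log_rician(measured, predicted, sigma):
    """Mean negative log Rician likelihood of magnitude data `measured`
    given model-predicted noise-free signals `predicted` and noise level
    `sigma` (illustrative sketch, not the RicianLoss package API).

    Rician pdf: p(M|A, sigma) = (M/sigma^2) exp(-(M^2+A^2)/(2 sigma^2)) I0(M A / sigma^2)
    """
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    z = measured * predicted / sigma**2
    # Stable log I0: i0e(z) = exp(-z) * I0(z) for z >= 0, so log I0(z) = log i0e(z) + z.
    log_i0 = np.log(i0e(z)) + z
    nll = (
        -np.log(measured)
        + 2.0 * np.log(sigma)
        + (measured**2 + predicted**2) / (2.0 * sigma**2)
        - log_i0
    )
    return np.mean(nll)
```

In a self-supervised setting, `predicted` would come from the network's forward model (e.g. the IVIM signal equation evaluated at the predicted parameters), and this scalar would replace the MSE term in the training loop; a naive `np.i0` evaluation overflows at the large Bessel arguments that arise at high SNR, which is why the scaled form is used.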