Background: The relationships between scan duration, signal-to-noise ratio (SNR), and sample size must be understood in order to design optimal GABA-edited magnetic resonance spectroscopy (MRS) studies.
New method: Simulations investigated the effects of signal averaging on SNR, measurement error and group-level variance against a known ground truth. Relative root mean square errors (measurement error) and coefficients of variation (group-level variance) were calculated. GABA-edited data acquired from five voxels in 18 participants were used to examine the relationships between scan duration, SNR and quantitative outcomes in vivo. These relationships were then used to determine the sample sizes required to observe effects of different sizes.
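As a generic illustration of the sample-size step (not the paper's exact calculation), the number of participants per group needed to detect a given standardized effect size can be sketched with the standard normal-approximation formula for a two-sample t-test; the significance level and power defaults below are conventional assumptions:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size_d, alpha=0.05, power=0.8):
    """Normal-approximation sample size per group for a two-sample t-test.

    A generic textbook sketch, not the study's specific method:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided alpha
    z_b = NormalDist().inv_cdf(power)          # desired power
    return math.ceil(2 * (z_a + z_b) ** 2 / effect_size_d ** 2)

print(n_per_group(0.5))  # medium effect (Cohen's d = 0.5) -> 63 per group
```

Smaller expected effects (or noisier voxels, which inflate group-level variance and thus shrink the effective d) drive the required sample size up quadratically.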
Results: In both simulated and in vivo data, SNR increased with the square root of the number of averages. Both measurement error and group-level variance were shown to follow an inverse-square-root function, indicating no significant impact of cumulative artifacts. Comparisons between the first two-thirds of the data and the full dataset showed no statistical difference in group-level variance. There was, however, some variability across the five voxels depending on SNR, which impacted the sample sizes needed to detect group differences in specific brain regions.
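The √N scaling reported above can be reproduced with a minimal simulation, assuming (hypothetically) a fixed signal corrupted by independent additive Gaussian noise on each transient; the signal amplitude and noise level below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_SIGNAL = 1.0  # arbitrary ground-truth amplitude
NOISE_SD = 0.5     # arbitrary per-transient noise level

def snr_after_averaging(n_avg, n_trials=4000):
    """Estimate SNR after averaging n_avg noisy transients.

    Each trial averages n_avg independent noisy transients; SNR is the
    true amplitude divided by the spread of the averaged estimates.
    """
    noise = rng.normal(0.0, NOISE_SD, size=(n_trials, n_avg))
    estimates = TRUE_SIGNAL + noise.mean(axis=1)
    return TRUE_SIGNAL / estimates.std(ddof=1)

# Quadrupling the number of averages roughly doubles SNR (ratio ~ 2),
# i.e. measurement error shrinks as 1/sqrt(N).
print(snr_after_averaging(64) / snr_after_averaging(16))
```

Under this independence assumption there are no cumulative artifacts, so the inverse-square-root decay of measurement error holds exactly, which is the behavior the in vivo comparisons were testing against.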
Comparison with existing methods: Typical scan durations can be reduced by taking into account a statistically acceptable amount of variance and the magnitudes of predicted effects.
Conclusions: While scan duration in GABA-edited MRS has typically been considered in terms of SNR, it is more appropriate to think in terms of the amount of measurement error and group-level variance that provides sufficient statistical power.
Keywords: Edited MRS; GABA; Quantification; Sample size; Scan duration; Signal-to-noise ratio.
Copyright © 2018 Elsevier B.V. All rights reserved.