A simplified model relating signal intensity in an MR image to spin-lattice relaxation time (T1), repetition time (TR), number of signal averages, and the average tip angle (ᾱ) of the protons within the slice has been developed. This model has been used to select the optimal repetition times of two spin-echo images, for a fixed total imaging time, to maximize the signal-to-noise ratio in calculated T1 images. Theoretical predictions of T1 are virtually identical to spectroscopically measured values, and the noise (ΔT1) measured in T1 images calculated from two spin-echo images agrees well with the theoretically predicted values of ΔT1/T1. The model predicts that: (a) for a T1 of approximately 500 ms, the least T1 image noise is obtained when one of the spin-echo images is collected with a TR of 400-500 ms; the longer the TR of the other spin-echo image, the lower the T1 image noise, but beyond a TR of approximately 1400 ms, T1 image signal-to-noise for the same total imaging time is optimized by increasing the number of averages in the shorter-TR spin-echo image rather than by further lengthening the TR of the second image; (b) both the error and the optimum TR1 decrease as ᾱ is increased from 63° to 90°; and (c) for a range of T1, selecting TR1 and TR2 optimally for an intermediate value of T1 results in relatively little increase in ΔT1/T1 over the optimal values across the entire T1 range.
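The two-point T1 calculation and noise propagation described above can be sketched numerically. The following is a minimal illustration, assuming a saturation-recovery signal model S(TR) = S0(1 − exp(−TR/T1)) with a 90° tip and TE effects neglected; the function names, the bisection solver, and the noise level sigma are illustrative choices, not the authors' actual implementation.

```python
import math

def signal(tr, t1, s0=1.0):
    # Assumed saturation-recovery spin-echo signal model (90-degree tip,
    # TE/T2 effects neglected): S = S0 * (1 - exp(-TR/T1)), TR and T1 in ms.
    return s0 * (1.0 - math.exp(-tr / t1))

def t1_from_ratio(s1, s2, tr1, tr2, lo=50.0, hi=5000.0, tol=1e-6):
    # Solve (1 - exp(-TR1/T1)) / (1 - exp(-TR2/T1)) = s1/s2 for T1 by
    # bisection; S0 cancels in the ratio of the two spin-echo signals.
    # For TR1 < TR2 the ratio decreases monotonically with T1.
    target = s1 / s2
    f = lambda t1: ((1.0 - math.exp(-tr1 / t1))
                    / (1.0 - math.exp(-tr2 / t1)) - target)
    a, b = lo, hi
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

def rel_t1_noise(tr1, tr2, t1, sigma=0.01, n1=1, n2=1):
    # Propagate independent signal noise (std sigma, reduced by the square
    # root of the number of averages) into the calculated T1 via numerical
    # partial derivatives, returning the relative noise delta-T1 / T1.
    s1, s2 = signal(tr1, t1), signal(tr2, t1)
    eps = 1e-4
    d1 = (t1_from_ratio(s1 + eps, s2, tr1, tr2)
          - t1_from_ratio(s1 - eps, s2, tr1, tr2)) / (2.0 * eps)
    d2 = (t1_from_ratio(s1, s2 + eps, tr1, tr2)
          - t1_from_ratio(s1, s2 - eps, tr1, tr2)) / (2.0 * eps)
    dt1 = math.sqrt((d1 * sigma / math.sqrt(n1)) ** 2
                    + (d2 * sigma / math.sqrt(n2)) ** 2)
    return dt1 / t1
```

Under this model, noiseless signals simulated at TR = 400 ms and TR = 1400 ms for T1 = 500 ms are recovered to the true T1, and `rel_t1_noise` can be scanned over TR1, TR2, and the number of averages to reproduce the kind of trade-off the abstract describes.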