The decays of pitch traces and loudness traces in short-term auditory memory were compared in forced-choice discrimination experiments. The two stimuli presented on each trial were separated by a variable delay (D); they consisted of pure tones, series of resolved harmonics, or series of unresolved harmonics mixed with lowpass noise. A roving procedure was used to minimize the influence of context coding. During an initial phase of each experiment, frequency and intensity discrimination thresholds [P(C) = 0.80] were measured with an adaptive staircase method while D was fixed at 0.5 s. The corresponding physical differences (in cents or dB) were then held constant and presented at four values of D: 0.5, 2, 5, and 10 s. For intensity discrimination, performance (d') decreased markedly when D increased from 0.5 to 2 s, but was not further reduced at longer delays. For frequency discrimination, the decline of performance as a function of D was significantly less abrupt. This divergence suggests that pitch and loudness are processed in separate modules of auditory memory.
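The abstract reports thresholds defined by a proportion correct of 0.80 and later expresses performance as d'. Under the standard equal-variance Gaussian model for a two-alternative forced-choice task (an assumption here; the abstract does not specify the number of alternatives), the two measures are related by d' = sqrt(2) * z(P(C)), where z is the inverse of the standard normal cumulative distribution function. A minimal sketch of that conversion (the function name `dprime_2afc` is illustrative, not from the paper):

```python
from math import sqrt
from statistics import NormalDist

def dprime_2afc(pc: float) -> float:
    """Convert proportion correct in a 2AFC task to d',
    assuming the equal-variance Gaussian model:
    d' = sqrt(2) * z(P(C))."""
    return sqrt(2) * NormalDist().inv_cdf(pc)

# The threshold criterion used in the initial phase:
print(round(dprime_2afc(0.80), 2))  # about 1.19
```

On this model, the physical differences fixed at threshold correspond to a baseline d' of roughly 1.19 at D = 0.5 s, so the reported decreases in d' at longer delays are declines from that starting point.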