We propose a learning method for hidden Markov models (HMMs) for sequence discrimination. Given an HMM, our method defines an objective function corresponding to the product, over the training sequences, of the difference between the observed and the desired likelihood of each sequence, and trains the HMM parameters with a gradient descent algorithm so that this function is minimized. This method allows us to use not only examples belonging to the class that the HMM should represent, but also examples not belonging to that class, i.e., negative examples. We evaluated our method in a series of experiments based on a type of cross-validation and compared the results with those of two existing methods. The experimental results show that our method greatly reduces the discrimination errors made by the other two methods. We conclude that both the use of negative examples and our particular way of using them are useful for training HMMs to discriminate unknown sequences.
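To make the idea concrete, the following is a minimal sketch of the kind of training loop the abstract describes: an objective built from the differences between observed likelihoods (computed with the forward algorithm) and desired likelihoods, driven down by gradient descent. Everything here is an assumption for illustration, not the paper's actual formulation: the toy 2-state discrete HMM, the choice of desired likelihood 1 for positive and 0 for negative examples, the use of squared differences inside the product, the numerical (finite-difference) gradient restricted to the emission parameters, and the renormalization step that keeps probabilities valid.

```python
# Hypothetical toy setting: a 2-state discrete HMM over symbols {0, 1}.
# All names, targets, and the exact objective form are illustrative
# assumptions, not the paper's formulation.
N_STATES, N_SYMBOLS = 2, 2

def forward_likelihood(seq, init, trans, emit):
    """Observed likelihood P(seq | HMM) via the forward algorithm."""
    # alpha[s] = P(prefix observed so far, current state = s)
    alpha = [init[s] * emit[s][seq[0]] for s in range(N_STATES)]
    for sym in seq[1:]:
        alpha = [emit[s][sym] * sum(alpha[p] * trans[p][s] for p in range(N_STATES))
                 for s in range(N_STATES)]
    return sum(alpha)

def objective(data, init, trans, emit):
    """Product over training sequences of the squared difference between
    observed and desired likelihoods (one literal reading of the abstract;
    desired = 1.0 for positive examples, 0.0 for negative ones is assumed)."""
    val = 1.0
    for seq, desired in data:
        val *= (forward_likelihood(seq, init, trans, emit) - desired) ** 2
    return val

def descend_emissions(data, init, trans, emit, lr=5.0, h=1e-5, steps=100):
    """Gradient descent on the emission table only, with numerical gradients
    (the paper presumably derives analytic gradients over all parameters)."""
    for _ in range(steps):
        base = objective(data, init, trans, emit)
        grads = [[0.0] * N_SYMBOLS for _ in range(N_STATES)]
        for s in range(N_STATES):
            for k in range(N_SYMBOLS):
                emit[s][k] += h
                grads[s][k] = (objective(data, init, trans, emit) - base) / h
                emit[s][k] -= h
        for s in range(N_STATES):
            # Step downhill, then clip and renormalize so each row stays a
            # probability distribution.
            row = [max(1e-6, emit[s][k] - lr * grads[s][k]) for k in range(N_SYMBOLS)]
            z = sum(row)
            emit[s] = [p / z for p in row]
    return emit

# One positive example (desired likelihood 1) and one negative (desired 0).
data = [([0, 0, 0], 1.0), ([1, 1, 1], 0.0)]
init = [0.5, 0.5]
trans = [[0.5, 0.5], [0.5, 0.5]]
emit = [[0.6, 0.4], [0.4, 0.6]]

before = objective(data, init, trans, emit)
emit = descend_emissions(data, init, trans, emit)
after = objective(data, init, trans, emit)
```

The point of the sketch is only the shape of the method: unlike Baum-Welch, which maximizes likelihood over positive examples alone, the objective here sees every training sequence together with a desired likelihood, so negative examples actively push the parameters away from regions where they score well.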