The Hidden Markov Model (HMM) is a natural and highly robust statistical method for automatic speech recognition, and it has proved effective in a wide range of applications. The parameters of an HMM describe the utterance of the speech segment that the model represents. Many successful heuristic algorithms have been developed to optimize the model parameters so that they best describe the training observation sequences. In practice, however, all of these methods search for only a single local maximum; none can escape that local maximum to reach the global maximum or other, better local maxima. In this paper, a stochastic search method, the Genetic Algorithm (GA), is presented for HMM training. A GA mimics natural evolution and performs a global search within the defined search space. Experimental results show that using a GA for HMM training (GA-HMM training) obtains better solutions than heuristic algorithms. A major drawback is that GAs require substantial computation for the global search before they converge. Therefore, in order to outperform the heuristic algorithms, a parallel version of the GA, the Parallel Genetic Algorithm (PGA), is introduced. Experimental results show that using the PGA in a speech recognition system provides an 18% improvement in recognition rate with the same amount of computational time. © 1997 IEEE.
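To make the GA-HMM idea concrete, the following is a minimal sketch (not the paper's actual implementation) of how a GA can search HMM parameter space globally: each chromosome holds a complete parameter set (initial, transition, and emission probabilities), fitness is the likelihood of the training observation sequences computed with the forward algorithm, and selection, crossover, and mutation evolve the population. All sizes, rates, and the toy data below are illustrative assumptions.

```python
import random

random.seed(0)

# Hypothetical toy problem: 2 states, 2 observation symbols, 2 training sequences.
N_STATES = 2
N_SYMBOLS = 2
SEQS = [[0, 1, 0, 0, 1], [1, 1, 0, 1, 0]]

def normalize(row):
    s = sum(row)
    return [x / s for x in row]

def random_hmm():
    # Chromosome = (initial probs pi, transition matrix A, emission matrix B).
    pi = normalize([random.random() for _ in range(N_STATES)])
    A = [normalize([random.random() for _ in range(N_STATES)]) for _ in range(N_STATES)]
    B = [normalize([random.random() for _ in range(N_SYMBOLS)]) for _ in range(N_STATES)]
    return (pi, A, B)

def likelihood(hmm, seq):
    # Forward algorithm: P(seq | model).
    pi, A, B = hmm
    alpha = [pi[i] * B[i][seq[0]] for i in range(N_STATES)]
    for o in seq[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(N_STATES)) * B[j][o]
                 for j in range(N_STATES)]
    return sum(alpha)

def fitness(hmm):
    # Joint likelihood of all training sequences (higher is better).
    p = 1.0
    for s in SEQS:
        p *= likelihood(hmm, s)
    return p

def crossover(a, b):
    # Uniform crossover at the component level: each of pi, A, B
    # is inherited whole from one parent or the other.
    return tuple(x if random.random() < 0.5 else y for x, y in zip(a, b))

def mutate(hmm, rate=0.1):
    # Perturb probability rows with Gaussian noise, then renormalize
    # so every row remains a valid probability distribution.
    pi, A, B = hmm
    def jiggle(row):
        if random.random() < rate:
            row = normalize([max(1e-6, x + random.gauss(0, 0.1)) for x in row])
        return row
    return (jiggle(pi), [jiggle(r) for r in A], [jiggle(r) for r in B])

# Evolve a population: keep the fittest elite, breed the rest from it.
pop = [random_hmm() for _ in range(30)]
for gen in range(40):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]

best = max(pop, key=fitness)
print(fitness(best))
```

Because every chromosome is an independent parameter set, fitness evaluation of the population is trivially parallelizable, which is the property the PGA exploits to amortize the GA's heavy computation.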