A new method for training recurrent neural networks (RNNs) is proposed. By introducing hidden representations (hidden variables) into the RNN, training the complicated RNN is decomposed into training a set of single neurons and a linear output layer. Based on a linear approximation of the RNN hidden units, the RNN is remodeled as a "mixture of experts" (ME) model. Moreover, training the RNN is thereby converted into maximum likelihood estimation of linear systems with hidden variables. Finally, the RNN is trained with the expectation-maximization (EM) algorithm.
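To make the ME/EM idea concrete, the following is a minimal sketch of EM for a mixture of two linear experts on synthetic data. This is an illustrative assumption, not the paper's exact RNN decomposition: the data, the two-expert setup, and all variable names are hypothetical, and the E-step/M-step follow the standard mixture-of-linear-regressions form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data drawn from two linear regimes (hypothetical example)
n = 200
x = rng.uniform(-1, 1, size=n)
z = rng.integers(0, 2, size=n)            # true latent expert assignment
true_w = np.array([2.0, -3.0])
y = true_w[z] * x + 0.05 * rng.normal(size=n)

# Initialize expert slopes, shared noise variance, and mixing weights
w = np.array([1.0, -1.0])
sigma2 = 1.0
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: posterior responsibility of each expert for each point
    resid = y[:, None] - x[:, None] * w[None, :]          # shape (n, 2)
    log_lik = -0.5 * resid**2 / sigma2 + np.log(pi)[None, :]
    log_lik -= log_lik.max(axis=1, keepdims=True)         # numerical stability
    r = np.exp(log_lik)
    r /= r.sum(axis=1, keepdims=True)

    # M-step: responsibility-weighted least squares per expert,
    # then update the shared variance and mixing weights
    for k in range(2):
        w[k] = np.sum(r[:, k] * x * y) / np.sum(r[:, k] * x * x)
    resid = y[:, None] - x[:, None] * w[None, :]
    sigma2 = np.sum(r * resid**2) / n
    pi = r.mean(axis=0)

print(sorted(np.round(w, 1)))  # recovered slopes, close to [-3.0, 2.0]
```

Each EM iteration alternates a soft assignment of data points to experts (E-step) with a weighted refit of each expert's linear parameters (M-step), which is the maximum-likelihood-with-hidden-variables pattern the abstract refers to.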