Cells in culture display diverse motility behaviors that may reflect differences in cell state and function, motivating efforts to discriminate between distinct motility behaviors. Current methods to do so rely upon manual feature engineering. However, the types of features necessary to distinguish between motility behaviors can vary greatly depending on the biological context, and it is not always clear which features will be most predictive in each setting for distinguishing particular cell types or disease states. Convolutional neural networks (CNNs) are machine learning models that learn relevant features directly from spatial data. Similarly, recurrent neural networks (RNNs) are a class of models capable of learning long-term temporal dependencies. Given that cell motility is inherently spatio-temporal, we present an approach utilizing both convolutional and long short-term memory (LSTM) recurrent neural network units to analyze cell motility data. These RNN models provide accurate classification of simulated motility and of experimentally measured motility from multiple cell types, comparable to results achieved with hand-engineered features. The variety of cell motility differences we can detect suggests that the approach is generally applicable to additional cell types not analyzed here. RNN autoencoders based on the same architecture learn motility features in an unsupervised manner and capture variation between myogenic cells in the latent space. When adapted to motility prediction, these RNN models predict muscle stem cell motility from past tracking data with performance superior to standard motion prediction models. This advance in cell motility prediction may be of practical utility in cell tracking applications.