RMSProp is an adaptive learning rate optimization algorithm designed to address AdaGrad's diminishing learning rates: whereas AdaGrad accumulates all past squared gradients, causing its effective step size to shrink toward zero, RMSProp maintains an exponentially decaying moving average of the squared gradients, so old gradient information is gradually forgotten. This approach allows RMSProp to perform well in non-convex settings, making it suitable for training deep neural networks on large datasets.
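As a concrete illustration, here is a minimal NumPy sketch of the RMSProp update on a toy quadratic objective. The helper name `rmsprop_update` and the hyperparameter values (`lr`, `rho`, `eps`) are illustrative choices for this sketch, not prescribed by the text above.

```python
import numpy as np

def rmsprop_update(theta, grad, sq_avg, lr=0.01, rho=0.9, eps=1e-8):
    """One RMSProp step: decay the running average of squared gradients,
    then scale the raw gradient by the inverse root of that average."""
    sq_avg = rho * sq_avg + (1.0 - rho) * grad**2        # E[g^2]_t
    theta = theta - lr * grad / (np.sqrt(sq_avg) + eps)  # adaptive step
    return theta, sq_avg

# Toy usage: minimize f(theta) = theta^2 starting from theta = 5.0.
theta, sq_avg = np.array([5.0]), np.zeros(1)
for _ in range(500):
    grad = 2.0 * theta                                   # f'(theta)
    theta, sq_avg = rmsprop_update(theta, grad, sq_avg)
print(theta)  # close to 0
```

Because the gradient is divided by the root of its own running average, each parameter's step size stays near `lr` regardless of the raw gradient's scale; the decay factor `rho` controls how quickly old squared gradients are forgotten.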