Relative entropy, also known as Kullback–Leibler (KL) divergence, quantifies how one probability distribution diverges from a reference distribution. Because it is not symmetric and does not satisfy the triangle inequality, it is not a true metric, but it is widely used in information theory, statistics, and machine learning for tasks such as model evaluation and measuring information gain.
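For discrete distributions $P$ and $Q$ over the same support, the divergence is defined as $D_{\mathrm{KL}}(P \,\|\, Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)}$. As a minimal sketch of this definition and of the asymmetry noted above, the following example computes the divergence in both directions for two small, hypothetical distributions (the values of `p` and `q` are assumptions chosen only for illustration), using `scipy.special.rel_entr`, which returns the elementwise terms $P(x)\log\frac{P(x)}{Q(x)}$:

```python
import numpy as np
from scipy.special import rel_entr

# Two hypothetical discrete distributions over the same three outcomes
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

# D_KL(P || Q): sum of p_i * log(p_i / q_i), measured in nats
d_pq = rel_entr(p, q).sum()

# D_KL(Q || P): swapping the arguments generally gives a different value,
# illustrating that relative entropy is not symmetric
d_qp = rel_entr(q, p).sum()

print(f"D(P||Q) = {d_pq:.4f} nats")
print(f"D(Q||P) = {d_qp:.4f} nats")
```

Running this prints two different values, which is exactly the asymmetry that prevents relative entropy from being a true metric.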