Platt scaling is a method for transforming a classifier's raw output scores into calibrated class probabilities. It was originally proposed for support vector machines, whose decision values are uncalibrated margins rather than probabilities. The method fits a logistic (sigmoid) function to the classifier's scores, P(y=1 | f) = 1 / (1 + exp(A*f + B)), learning the parameters A and B by maximum likelihood, typically on a held-out calibration set to avoid biased estimates. The resulting probabilities are easier to interpret, compare across models, and combine with other probabilistic outputs than the raw scores.
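The fitting step can be sketched in a few lines. This is a minimal illustration, not Platt's original procedure: it minimizes the log-loss with plain batch gradient descent (Platt used a second-order, model-trust optimization), and the function names `fit_platt` and `platt_prob` are made up for this example.

```python
import math

def fit_platt(scores, labels, lr=0.01, epochs=2000):
    """Fit the sigmoid P(y=1 | f) = 1 / (1 + exp(A*f + B)) by
    minimizing log-loss with batch gradient descent (a simple
    stand-in for Platt's original fitting procedure)."""
    a = b = 0.0
    for _ in range(epochs):
        grad_a = grad_b = 0.0
        for f, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(a * f + b))
            # For this parameterization, dL/dA = (y - p) * f
            # and dL/dB = (y - p), where L is the log-loss.
            grad_a += (y - p) * f
            grad_b += (y - p)
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

def platt_prob(score, a, b):
    """Map a raw classifier score to a calibrated probability."""
    return 1.0 / (1.0 + math.exp(a * score + b))
```

In practice one would use a library implementation, for example scikit-learn's `CalibratedClassifierCV` with `method="sigmoid"`, which applies Platt scaling with proper cross-validated calibration splits.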