Information Gain is a metric used in decision trees to quantify the reduction in entropy or uncertainty after a dataset is split based on an attribute. It helps identify which attribute provides the most useful information for classification, guiding the tree-building process to create more accurate models.
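The reduction described above can be sketched directly: information gain is the parent node's entropy minus the size-weighted entropy of the child groups produced by a split. This is a minimal illustration using a toy two-way split (the labels and partition are invented for the example):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(labels, groups):
    """Parent entropy minus the size-weighted entropy of the child groups."""
    total = len(labels)
    weighted = sum(len(g) / total * entropy(g) for g in groups)
    return entropy(labels) - weighted

# Toy example: 10 labels split by some attribute into two branches.
parent = ["yes"] * 5 + ["no"] * 5
left = ["yes", "yes", "yes", "yes"]
right = ["yes", "no", "no", "no", "no", "no"]
print(information_gain(parent, [left, right]))
```

The parent here has maximal entropy (1 bit, a 50/50 class mix); the split produces one pure branch and one mostly-"no" branch, so the gain is high (about 0.61 bits). During tree building, this computation is repeated for every candidate attribute and the highest-gain split is chosen.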
Cost-Sensitive Decision Trees are a variation of decision trees that incorporate the costs associated with different types of classification errors, making them particularly useful for applications where the consequences of false positives and false negatives are significantly different. By integrating cost considerations directly into the model-building process, these trees aim to minimize the total expected cost rather than simply maximizing accuracy.
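One concrete way this plays out is at the leaves: instead of predicting the majority class, a cost-sensitive leaf predicts the class that minimizes total expected misclassification cost. The sketch below assumes a hypothetical fraud-detection setting where a missed fraud case (false negative) is ten times as costly as a false alarm; the `COST` matrix and class names are invented for illustration:

```python
from collections import Counter

# Hypothetical cost matrix: COST[(true_class, predicted_class)].
# Assumption for this example: a false negative costs 10x a false positive.
COST = {
    ("fraud", "ok"): 10.0,    # false negative: missed fraud
    ("ok", "fraud"): 1.0,     # false positive: false alarm
    ("fraud", "fraud"): 0.0,
    ("ok", "ok"): 0.0,
}
CLASSES = ["fraud", "ok"]

def min_cost_label(labels):
    """Leaf prediction that minimizes total expected cost, not error count."""
    counts = Counter(labels)
    def total_cost(pred):
        return sum(counts[true] * COST[(true, pred)] for true in CLASSES)
    return min(CLASSES, key=total_cost)

# A leaf holding 2 fraud and 7 ok cases: majority vote would say "ok",
# but predicting "ok" costs 2 * 10 = 20 while "fraud" costs 7 * 1 = 7.
print(min_cost_label(["fraud"] * 2 + ["ok"] * 7))
```

This leaf flips from the accuracy-maximizing label ("ok") to the cost-minimizing one ("fraud"), which is exactly the behavior that makes these trees suited to asymmetric-error applications.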
Classification and Regression Trees (CART) is a decision tree framework for predictive modeling in which the tree is built by recursively splitting the data on feature values, each split creating new branches. This recursive partitioning continues until a stopping criterion is met, distilling complex datasets into interpretable models for classification or regression tasks.