Discrimination in algorithms occurs when biases in training data or system design lead to unfair treatment of individuals based on characteristics such as race, gender, or socioeconomic status. Left unchecked, such bias perpetuates existing inequalities and undermines trust in automated systems, which is why fairness must be evaluated explicitly and mitigated with deliberate strategies to ensure accountability; a simple evaluation is sketched below.
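One common starting point for such evaluation is to compare outcome rates across demographic groups. The following is a minimal sketch, not a complete audit: it assumes binary predictions (1 = favorable outcome), a single binary protected attribute, and entirely hypothetical data, and it uses the conventional 0.8 "four-fifths rule" threshold purely as an illustration.

```python
# Minimal sketch: measuring group fairness via demographic parity.
# Assumptions (illustrative only): binary predictions, a binary
# protected attribute, hypothetical data, and the 0.8 threshold.

def selection_rate(preds, group, value):
    """Fraction of favorable predictions within one group."""
    members = [p for p, g in zip(preds, group) if g == value]
    return sum(members) / len(members) if members else 0.0

def demographic_parity_gap(preds, group):
    """Absolute difference in selection rates between the two groups."""
    return abs(selection_rate(preds, group, 0) - selection_rate(preds, group, 1))

def disparate_impact_ratio(preds, group):
    """Ratio of the lower selection rate to the higher one."""
    r0 = selection_rate(preds, group, 0)
    r1 = selection_rate(preds, group, 1)
    hi, lo = max(r0, r1), min(r0, r1)
    return lo / hi if hi else 1.0

if __name__ == "__main__":
    # Hypothetical predictions and group membership.
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    group = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
    gap = demographic_parity_gap(preds, group)
    ratio = disparate_impact_ratio(preds, group)
    print(f"demographic parity gap: {gap:.2f}")
    print(f"disparate impact ratio: {ratio:.2f}")
    # The "four-fifths rule" flags ratios below 0.8 as potentially unfair.
    print("flagged as potentially unfair:", ratio < 0.8)
```

Metrics like these only surface disparities; they do not explain their cause, and different fairness criteria can conflict, so such checks are one input to mitigation rather than a guarantee of fair treatment.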