Evasion Attacks
Evasion attacks are adversarial attacks in which an attacker subtly manipulates input data at inference time to deceive a machine learning model, causing incorrect predictions or classifications without modifying the model itself. A typical example is adding a small, nearly imperceptible perturbation to an image so that a trained classifier assigns it the wrong label. Because these attacks exploit weaknesses in the model's learned decision boundaries, robust defenses and careful evaluation are needed before deploying models in security-sensitive settings.
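To make the idea concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one well-known evasion attack. It perturbs an input in the direction that increases the model's loss, within a small L-infinity budget. The toy model, input tensor, and epsilon value are illustrative assumptions, not details from the text above.

```python
# Minimal FGSM sketch (illustrative; model, input, and epsilon are assumed).
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon):
    """Return an adversarial copy of x within an L-infinity ball of radius epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels in [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    # Hypothetical toy classifier standing in for a deployed model.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)        # a "clean" input
    y = model(x).argmax(dim=1)          # the model's original prediction
    x_adv = fgsm_attack(model, x, y, epsilon=0.1)
    print("clean prediction:", y.item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The attack only needs gradient access (or, in black-box variants, query access) to the deployed model; the model's weights are never changed, which is what distinguishes evasion from poisoning attacks.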