Transferability of adversarial examples refers to the phenomenon where adversarial inputs crafted to fool one machine learning model often fool another model as well, even when the two models have different architectures or were trained on different datasets. This property raises significant security concerns: an attacker can craft adversarial examples against a surrogate model they fully control and then deploy them against a target model whose parameters and architecture they cannot inspect, which makes black-box attacks practical and challenges the robustness of machine learning systems in real-world applications, as sketched below.
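
As a concrete illustration, the following minimal sketch crafts one-step FGSM adversarial examples against a surrogate classifier and measures how often they are also misclassified by a separate target model. The specific model pairing (torchvision's pretrained ResNet-18 as surrogate, ResNet-50 as target), the perturbation budget, and the `transfer_rate` helper are illustrative assumptions rather than a prescribed setup, and input preprocessing is simplified for brevity.

```python
# Sketch of a transfer attack: craft adversarial examples on a surrogate model,
# then check whether a different target model misclassifies them.
# Models, epsilon, and the helper names below are assumptions for illustration.
import torch
import torchvision.models as models

surrogate = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def fgsm(model, x, y, eps):
    """One-step FGSM: perturb x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Assumes inputs lie in [0, 1]; normalization/preprocessing is omitted here.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def transfer_rate(x, y, eps=8 / 255):
    """Fraction of surrogate-crafted adversarial inputs misclassified by the target."""
    x_adv = fgsm(surrogate, x, y, eps)
    with torch.no_grad():
        preds = target(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()

# Hypothetical usage with a batch of images `x` (shape [N, 3, H, W]) and labels `y`:
# rate = transfer_rate(x, y)
# print(f"Transfer success rate: {rate:.1%}")
```

A high `transfer_rate` on inputs the target classifies correctly in their clean form indicates that perturbations computed purely from the surrogate carry over to the target, which is the effect described above.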