(1) f3arwin requires more computational time than PGD-AT for large models (≈3× training slowdown due to population evaluation). (2) The attack may fail on models with extremely non-smooth decision boundaries, where crossover becomes destructive. (3) For very high-dimensional inputs (e.g., 224×224×3), the perturbation search space remains challenging without dimensionality reduction.
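On limitation (3), a common mitigation for high-dimensional perturbation search (not necessarily the one used by f3arwin; `upsample_perturbation` and the 28×28 resolution below are illustrative assumptions) is to evolve a low-resolution perturbation and upsample it to the input size, shrinking the search space by the square of the upsampling factor. A minimal NumPy sketch:

```python
import numpy as np

def upsample_perturbation(delta_low: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbor upsample of a low-res perturbation (H, W, C)
    to (H * factor, W * factor, C) via a Kronecker product with a
    block of ones: each low-res pixel becomes a factor x factor tile."""
    return np.kron(delta_low, np.ones((factor, factor, 1), dtype=delta_low.dtype))

# Hypothetical setup: search in a 28x28x3 space (2,352 variables)
# instead of the full 224x224x3 space (150,528 variables), a 64x reduction.
rng = np.random.default_rng(0)
delta_low = rng.uniform(-1.0, 1.0, size=(28, 28, 3)).astype(np.float32)
delta_full = upsample_perturbation(delta_low, 8)  # shape (224, 224, 3)
```

The evolutionary search (mutation, crossover, selection) would then operate on `delta_low`, while fitness is evaluated on the model with `delta_full` added to the input.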