
Modeling and Information System in Economics
ISSN 2708-9746
Adversarial attacks on machine vision systems
DOI: 10.33111/mise.103.14
Abstract: The use of deep neural networks in computer vision has made it possible to achieve significant results in many tasks, such as image classification, object detection, semantic segmentation, and video classification. However, deep neural networks are sensitive to small changes in the input data. The article is devoted to the problem of adversarial attacks on machine vision systems, which can negatively affect the reliability and accuracy of such systems. It analyzes recent research and publications in this area and describes the main types of attacks and defenses against them.
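To illustrate the sensitivity to small input changes mentioned in the abstract, below is a minimal sketch of the fast gradient sign method (FGSM, Goodfellow et al., 2015), one of the best-known adversarial attacks. It assumes a pretrained PyTorch image classifier; the function name, tensor names, and epsilon value are illustrative and not taken from the article.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Perturb input x (pixels in [0, 1]) with true label y under an L-infinity budget epsilon.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss of the model's prediction on x
    loss.backward()                           # gradient of the loss w.r.t. the input
    # Take one step in the direction of the gradient sign, then clip back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

A perturbation of this magnitude is usually imperceptible to a human observer, yet it is often sufficient to change the classifier's prediction.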
Key words: adversarial attacks, machine vision systems, deep neural networks
UDC: 004.932:004.89
To cite paper
In APA style
Pozdniakovych, O. (2023). Adversarial attacks on machine vision systems. Modeling and Information System in Economics, 103, 169-176. http://doi.org/10.33111/mise.103.14
In MON style
Позднякович О.Є. Змагальні атаки на системи машинного зору. Моделювання та інформаційні системи в економіці. 2023. № 103. С. 169-176. http://doi.org/10.33111/mise.103.14 (дата звернення: 11.04.2025).
With transliteration
Pozdniakovych, O. (2023) Zmahalni ataky na systemy mashynnoho zoru [Adversarial attacks on machine vision systems]. Modeling and Information System in Economics, no. 103. pp. 169-176. http://doi.org/10.33111/mise.103.14 [in Ukrainian] (accessed 11 Apr 2025).
