Enhancing the security of AI-driven autonomous systems through adversarially robust deep learning models

Emmanuel Ayodeji Osoko 1, *, Shukurat Opeyemi Rahmon 2 and Muhammed Azeez 3

1 Department of Electrical Engineering and Computer Science, Ohio University, OH, USA.
2 Department of Mathematics, University of Lagos, Akoka, Lagos, Nigeria.
3 Department of Mathematics, Lamar University, Beaumont, TX, USA.

Research Article

World Journal of Advanced Research and Reviews, 2023, 20(01), 1336-1351
Article DOI: 10.30574/wjarr.2023.20.1.2158
DOI url: https://doi.org/10.30574/wjarr.2023.20.1.2158

Received on 13 September 2023; revised on 24 October 2023; accepted on 26 October 2023

Adversarial attacks pose a significant threat to AI-driven autonomous systems by exploiting vulnerabilities in deep learning models, leading to erroneous decision-making in safety-critical applications. This study investigates the effectiveness of adversarial training as a defense mechanism to enhance model robustness against adversarial perturbations. We evaluate multiple deep learning architectures subjected to Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and Carlini & Wagner (CW) attacks, comparing adversarially trained models with standard models in terms of accuracy, robustness, and computational efficiency. The results demonstrate that adversarial training significantly improves resistance to adversarial attacks, reducing attack success rates by over 50% while maintaining high classification performance. However, a trade-off between robustness and inference time was observed, highlighting computational cost concerns. Furthermore, our findings reveal that adversarial robustness partially transfers across architectures but remains susceptible to advanced optimization-based attacks. This study contributes to the development of more secure AI-driven autonomous systems by identifying strengths and limitations of adversarial training, offering insights into future improvements in adversarial defense strategies.
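As a rough illustration of the adversarial-training setup described above (a minimal sketch, not the authors' implementation), the code below crafts Fast Gradient Sign Method (FGSM) perturbations on the fly and trains the model on the perturbed batches. The model architecture, dataset, epsilon value, and training hyperparameters are placeholder assumptions chosen only for the example.

```python
# Minimal FGSM adversarial-training sketch (illustrative only; the model,
# dataset, and epsilon are placeholder assumptions, not the paper's setup).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


def fgsm_perturb(model, x, y, epsilon):
    """Craft FGSM adversarial examples: x_adv = x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()


def adversarial_train(model, loader, epochs=5, epsilon=0.03, device="cpu"):
    """Train on FGSM-perturbed inputs so the model learns robust features."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    model.to(device).train()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = fgsm_perturb(model, x, y, epsilon)  # attack the current model
            opt.zero_grad()
            loss = F.cross_entropy(model(x_adv), y)     # loss on the adversarial batch
            loss.backward()
            opt.step()
    return model


if __name__ == "__main__":
    # Toy example: small CNN on MNIST; epsilon=0.03 is an arbitrary choice.
    model = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(16 * 14 * 14, 10),
    )
    data = datasets.MNIST(".", train=True, download=True,
                          transform=transforms.ToTensor())
    adversarial_train(model, DataLoader(data, batch_size=128, shuffle=True))
```

Stronger attacks such as PGD or Carlini & Wagner would replace the single-step perturbation with an iterative or optimization-based one, which is where the robustness-versus-computation trade-off noted in the abstract arises.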

Adversarial Machine Learning; Deep Learning Security; Cybersecurity in AI; Neural Network Vulnerabilities

https://wjarr.co.in/sites/default/files/fulltext_pdf/WJARR-2023-2158.pdf


Emmanuel Ayodeji Osoko, Shukurat Opeyemi Rahmon and Muhammed Azeez. Enhancing the security of AI-driven autonomous systems through adversarially robust deep learning models. World Journal of Advanced Research and Reviews, 2023, 20(01), 1336-1351. Article DOI: https://doi.org/10.30574/wjarr.2023.20.1.2158

Copyright © 2023. The author(s) retain the copyright of this article. This article is published under the terms of the Creative Commons Attribution License 4.0.
