Comprehensive Evaluation of Adversarial Robustness in Deep Learning: Architecture, Diversity, and Defense Analysis

Project Description

Adversarial attacks pose a serious challenge to the reliability and security of deep learning (DL) models. These attacks, often crafted by introducing imperceptible perturbations to input data, can cause models to make incorrect predictions with high confidence. As a result, understanding and mitigating such threats has become a critical area of research in the field of trustworthy AI.
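To make the threat concrete, here is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch: a single signed-gradient step that is imperceptible at small budgets yet can flip a model's prediction. The pretrained ResNet-18, the random stand-in image, and the 8/255 budget are illustrative assumptions, not details taken from this project.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(model, x, y, epsilon):
    """FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
x = torch.rand(1, 3, 224, 224)     # stand-in for a preprocessed image in [0, 1]
y = model(x).argmax(dim=1)         # attack the model's own prediction
x_adv = fgsm_attack(model, x, y, epsilon=8 / 255)
print("prediction flipped:", (model(x_adv).argmax(dim=1) != y).item())
```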

Defenses against adversarial attacks range from input preprocessing and adversarial training to robust model design, yet no single approach has proven universally effective. At InfoLab, Sungkyunkwan University (SKKU), our research group undertakes a systematic investigation into the effectiveness of various adversarial attacks and corresponding defense mechanisms, providing a comprehensive analysis of how deep learning models respond to different threat vectors across multiple architectures and datasets.
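Among those defense families, adversarial training is one of the most widely studied. Below is a hedged sketch of a single Madry-style training step, assuming a PyTorch setup: craft a PGD adversary for the current batch, then update the model on it. The budgets, step counts, and helper names are illustrative, not this project's implementation.

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient descent within an L-infinity ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the ball
            x_adv = x_adv.clamp(0, 1)                 # keep valid pixel values
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    model.eval()                   # attack with fixed batch-norm statistics
    x_adv = pgd(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    import torchvision.models as models
    model = models.resnet18(num_classes=10)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(4, 3, 32, 32)            # stand-in CIFAR-sized batch
    y = torch.randint(0, 10, (4,))
    print("adv-train loss:", adversarial_training_step(model, opt, x, y))
```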


Core Research Themes and Contributions

1. Multi-Dimensional Analysis of Adversarial Attacks and Defenses

Through large-scale experimentation, we investigated how factors such as model complexity, architectural diversity, and training datasets impact robustness against adversarial attacks. We evaluated several white-box and black-box attack methods (e.g., FGSM, PGD, C&W, SimBA, HopSkipJump, MGAAttack, and Boundary Attack) across popular model families (e.g., VGG, ResNet, DenseNet, MobileNet, Inception, Xception, ShuffleNet) and datasets (ImageNet, CIFAR-10, CIFAR-100). Key findings from these experiments informed the themes discussed below.
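A benchmark of this shape can be expressed compactly. The sketch below estimates robust accuracy for a few torchvision models under FGSM and PGD using the torchattacks library; the tooling choice, model list, attack budgets, and random stand-in data are all assumptions, since the project's actual harness is not shown here.

```python
import torch
import torchattacks                      # pip install torchattacks
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in data: replace with a real validation loader yielding [0, 1] images.
fake = torch.utils.data.TensorDataset(
    torch.rand(8, 3, 224, 224), torch.randint(0, 1000, (8,)))
val_loader = torch.utils.data.DataLoader(fake, batch_size=4)

def robust_accuracy(model, attack, loader):
    """Fraction of samples still classified correctly after the attack."""
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = attack(x, y)                        # torchattacks call style
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

for name, ctor in [("resnet50", models.resnet50),
                   ("densenet121", models.densenet121),
                   ("mobilenet_v2", models.mobilenet_v2)]:
    model = ctor(weights="DEFAULT").to(device).eval()
    for attack in (torchattacks.FGSM(model, eps=8 / 255),
                   torchattacks.PGD(model, eps=8 / 255, alpha=2 / 255, steps=10)):
        acc = robust_accuracy(model, attack, val_loader)
        print(f"{name:13s} {type(attack).__name__:4s} robust acc = {acc:.3f}")
```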

2. Impact of Architectural Variations on Robustness

We performed a controlled analysis of variations within popular DL architecture families, isolating how specific design changes affect robustness; one way to structure such a comparison is sketched below.
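To keep the comparison controlled, hold the data and attack fixed and vary a single factor, for example depth within the ResNet family. The variant list and PGD budget in this sketch are illustrative assumptions; it reports the fraction of each model's own clean predictions that the attack flips.

```python
import torch
import torchattacks                      # pip install torchattacks
import torchvision.models as models

variants = {"resnet18": models.resnet18,
            "resnet34": models.resnet34,
            "resnet50": models.resnet50}
x = torch.rand(4, 3, 224, 224)           # stand-in batch in [0, 1]
for name, ctor in variants.items():
    model = ctor(weights="DEFAULT").eval()
    y = model(x).argmax(1)               # the model's own clean predictions
    atk = torchattacks.PGD(model, eps=8 / 255, alpha=2 / 255, steps=10)
    flip_rate = (model(atk(x, y)).argmax(1) != y).float().mean().item()
    n_params = sum(p.numel() for p in model.parameters()) / 1e6
    print(f"{name}: {n_params:.1f}M params, PGD flip rate = {flip_rate:.2f}")
```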

3. Behavior of Attacks Across Datasets and Defensive Strategies

We highlighted how dataset characteristics (e.g., input resolution and number of classes) affect model behavior under attack, and how these properties interact with defensive strategies.
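As a small illustration of why these characteristics matter, the snippet below tabulates resolution, class count, and input dimensionality for each dataset alongside the L-infinity budgets commonly used in the robustness literature; the "typical eps" values are community conventions, not measurements from this project.

```python
# Cross-dataset robustness comparisons should always state the resolution,
# class count, and perturbation budget, since all three shift the difficulty.
DATASETS = {
    # name: (resolution, num_classes, typical eps on the [0, 1] pixel scale)
    "CIFAR-10":  (32, 10, 8 / 255),
    "CIFAR-100": (32, 100, 8 / 255),
    "ImageNet":  (224, 1000, 4 / 255),
}
for name, (res, classes, eps) in DATASETS.items():
    dims = 3 * res * res                 # input dimensionality (RGB pixels)
    print(f"{name:10s} res={res:3d} classes={classes:4d} "
          f"eps={eps:.4f} input dims={dims}")
```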

Project Objectives

Research Impact

This project provides a systematic and reproducible benchmark for adversarial robustness analysis, offering insights for both researchers and practitioners seeking to deploy secure AI solutions. By exposing previously underexplored relationships between model design, training data, and adversarial behavior, InfoLab at SKKU is helping shape the future of trustworthy and resilient AI systems.