Adversarial Robustness in COVID-19 Classification

Alias, Meghna Mariam

Adversarial Robustness in COVID-19 Classification by Meghna Mariam Alias - IIT Jodhpur, Department of Computer Science and Technology, 2023 - vii, 14 p. HB

The COVID-19 pandemic confronted the world with a health catastrophe of unprecedented scale. As the coronavirus spread, researchers focused on devising methods to halt the pandemic and save lives. Artificial intelligence (AI) has played a part in mitigating the pandemic's effects. Over the last three years, many deep learning models have been developed to diagnose COVID-19 by classifying chest X-ray images as NORMAL or COVID-19, and many of them are highly accurate. As such studies continue, it is time to analyze how well these models perform when challenged with subtle, deliberately crafted perturbations (adversarial attacks).

This study investigates how the accuracy of a ResNet18 COVID-19 detection model is affected by adversarial samples generated with the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), DeepFool, the Stabilized Medical Image Attack (SMIA), and SMIA combined with Gaussian blur. The experimental results show that COVID-19 detection models are susceptible to adversarial attacks, which could be dangerous if such models were used to assist clinical diagnosis. How secure machine learning models are therefore remains an open question.
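To make the attack setting concrete, below is a minimal PyTorch sketch of two of the attacks named above, FGSM and its iterated variant PGD, against a two-class ResNet18. It is illustrative only: the randomly initialized weights, the epsilon and step-size values, the [0, 1] pixel range, and the random stand-in input are assumptions for the sketch, not details taken from the thesis.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Hypothetical two-class setup (NORMAL vs. COVID-19); randomly initialized,
# since the thesis's trained weights and preprocessing are not in this record.
model = resnet18(num_classes=2)
model.eval()

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: move each pixel by epsilon along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the assumed [0, 1] range

def pgd_attack(model, x, y, epsilon=0.03, alpha=0.007, steps=10):
    """PGD: repeated FGSM steps, projected back into the epsilon-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project onto the L-infinity ball of radius epsilon, then clamp to valid pixels.
        x_adv = (x + (x_adv - x).clamp(-epsilon, epsilon)).clamp(0, 1)
    return x_adv

# Random stand-in for a preprocessed chest X-ray batch (hypothetical label 1 = COVID-19).
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([1])
print(model(fgsm_attack(model, x, y)).argmax(dim=1))
print(model(pgd_attack(model, x, y)).argmax(dim=1))

A mismatch between these predictions and the model's prediction on the clean input is what the thesis counts as a successful attack; the epsilon values above are common illustrative defaults, not the settings evaluated in the thesis.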

Department of Computer Science and Technology
FGSM
SMIA
PGD
MTech Theses

006.31 / A398A