000 01791nam a22001937a 4500
082 _a006.31
_bA398A
100 _aAlias, Meghna Mariam
_945330
245 _aAdversarial Robustness in COVID-19 Classification
_cby Meghna Mariam Alias
260 _aIIT Jodhpur
_bDepartment of Computer Science and Technology
_c2023
300 _avii, 14 p.
_bHB
520 _aThe COVID-19 pandemic has caused a health catastrophe of unprecedented scale, and researchers have sought methods to halt its spread and save lives. Artificial intelligence (AI) has helped mitigate the pandemic's effects: over the last three years, many deep learning models have been developed to diagnose COVID-19 by classifying chest X-ray images as NORMAL or COVID-19, and many of them achieve high accuracy. As such studies continue, it is important to analyze how well these models perform when challenged with subtle, deliberately crafted perturbations (adversarial attacks). This study investigates how the accuracy of a ResNet18 COVID-19 detection model is affected by adversarial samples generated with the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), DeepFool, the Stabilized Medical Image Attack (SMIA), and SMIA combined with Gaussian blur. The experimental results show that COVID-19 detection models are susceptible to adversarial attacks, which could be dangerous if such models are used to assist clinical diagnosis, and that the security of these machine learning models therefore remains uncertain.
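The abstract names several gradient-based attacks; for context, below is a minimal FGSM sketch in PyTorch, chosen because the thesis evaluates a ResNet18 model. The model construction, epsilon value, and input shapes are illustrative assumptions, not the thesis's actual code; PGD can be obtained by iterating the same signed-gradient step with projection onto an epsilon-ball around the original image.

import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def fgsm_attack(model, x, y, epsilon=0.03):
    # FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y))
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the [0, 1] pixel range.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Illustrative usage: a 2-class ResNet18 and a placeholder image batch (hypothetical, untrained weights).
model = resnet18(num_classes=2).eval()
x = torch.rand(1, 3, 224, 224)   # stand-in for a chest X-ray batch scaled to [0, 1]
y = torch.tensor([0])            # stand-in label (e.g., NORMAL)
x_adv = fgsm_attack(model, x, y)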
650 _aDepartment of Computer Science and Technology
_945331
650 _aFGSM
_945332
650 _aSMIA
_945333
650 _aPGD
_945334
650 _aMTech Theses
_945335
700 _aMisra, Deepak
_945336
942 _cTH
999 _c16568
_d16568