Adversarial Robustness in COVID-19 Classification by Meghna Mariam Alias

By: Alias, Meghna Mariam
Material type: Text
Publication details: IIT Jodhpur: Department of Computer Science and Technology, 2023
Description: vii, 14p. HB
DDC classification: 006.31 A398A
Holdings
Item type: Thesis
Home library: S. R. Ranganathan Learning Hub Reference
Call number: 006.31 A398A
Status: Not for loan
Barcode: TM00503

The COVID-19 pandemic confronted the world with a health catastrophe of unprecedented scale. As the coronavirus spread, researchers focused on devising methods to halt the pandemic and save lives, and artificial intelligence (AI) helped mitigate some of the pandemic's effects. Over the last three years, many deep learning models have been developed to diagnose COVID-19 by classifying chest X-ray images as NORMAL or COVID-19, and many of them are accurate. As such studies continue, it is time to analyze how well these models perform when challenged with subtle perturbations (adversarial attacks).

This study investigates how the accuracy of a ResNet18 COVID-19 detection model is affected by adversarial samples generated with the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), DeepFool, the Stabilized Medical Image Attack (SMIA), and SMIA with Gaussian blur. The experimental results show that COVID-19 detection models are susceptible to adversarial attacks, which could be dangerous if such models are used to assist clinical diagnosis. The security of these machine learning models therefore remains an open question.
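For context, a minimal sketch of the simplest of these attacks, FGSM, as it might be applied to a two-class ResNet18 classifier is shown below. The untrained torchvision model, the 224x224 input size, the epsilon value, and the label encoding are illustrative assumptions, not details taken from the thesis.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# Untrained two-class ResNet18 stands in for the thesis's trained
# NORMAL-vs-COVID-19 classifier (weights and preprocessing assumed).
model = resnet18(num_classes=2)
model.eval()

def fgsm_attack(model, image, label, epsilon):
    """One-step FGSM: perturb the input in the direction of the sign
    of the loss gradient, x_adv = x + epsilon * sign(grad_x L)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    model.zero_grad()
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixel values in [0, 1]

# Dummy chest X-ray sized input: batch of 1, 3 channels, 224x224;
# label index 1 arbitrarily denotes COVID-19 here.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([1])
x_adv = fgsm_attack(model, x, y, epsilon=0.03)
print((x_adv - x).abs().max())  # perturbation bounded by epsilon
```

PGD extends this by iterating the same signed-gradient step and projecting back into an epsilon-ball around the original image after each step.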
