Measuring The Robustness of AI Models Against Adversarial Attacks: Thyroid Ultrasound Images Case Study
Citation
Ceyhan, M., Karaarslan, E. (2022). Measuring The Robustness of AI Models Against Adversarial Attacks: Thyroid Ultrasound Images Case Study. Journal of Emerging Computer Technologies, 2(2), 42-47.
Abstract
The healthcare industry is looking for ways to use artificial intelligence effectively. Decision support systems use AI (Artificial Intelligence) models to diagnose cancer from radiology images. The models in such implementations are not perfect, and attackers can use techniques to make them give wrong predictions. It is therefore necessary to measure the robustness of these models under adversarial attack. Studies in the literature focus on models trained with images obtained from other body regions (lung X-ray and skin dermoscopy images) and other imaging techniques. This study focuses on thyroid ultrasound images as a use case. We trained VGG19, Xception, ResNet50V2, and EfficientNetB2 CNN models on these images. The aim is to make these models produce false predictions. We used the FGSM, BIM, and PGD techniques to generate adversarial images. The attacks resulted in a misprediction rate of 99%. Future work will focus on making these models more robust through adversarial training.
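To illustrate the attack family the abstract names, the sketch below implements FGSM (Fast Gradient Sign Method), the simplest of the three: it perturbs each input feature by a small step ε in the direction of the sign of the loss gradient with respect to the input. This is a minimal, hedged illustration on a toy logistic model with an analytically computed gradient; the weights, input, and ε here are hypothetical and stand in for the CNN models and ultrasound images used in the paper. BIM and PGD extend the same idea by applying this step iteratively with clipping.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, eps):
    """One FGSM step on a toy logistic model (hypothetical stand-in for a CNN).

    For binary cross-entropy with a logistic output, the gradient of the loss
    with respect to the input is (sigmoid(w.x) - y) * w, so no autodiff is needed.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    grad = [(p - y) * wi for wi in w]
    # Perturb each feature by eps in the sign of the gradient, clipped to [0, 1]
    # (image pixel range); BIM/PGD would repeat this step with a smaller eps.
    sign = lambda g: 1 if g > 0 else (-1 if g < 0 else 0)
    return [min(1.0, max(0.0, xi + eps * sign(g))) for xi, g in zip(x, grad)]

# Toy example (all values hypothetical, for illustration only)
w = [2.0, -3.0, 1.5]   # "model" weights
x = [0.6, 0.2, 0.8]    # clean input, features in [0, 1]
y = 1                  # true label
x_adv = fgsm(x, y, w, eps=0.1)

p_clean = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
p_adv = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)))
# The perturbation pushes the predicted probability away from the true label,
# even though x_adv differs from x by at most eps per feature.
```

The same sign-of-gradient step, applied to a deep network via backpropagation through the model to the input pixels, is how adversarial ultrasound images are generated against the CNNs in the study.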
Source
Journal of Emerging Computer Technologies
Volume
2
Issue
2
URI
https://dergipark.org.tr/en/pub/ject/issue/72547/1194541
https://hdl.handle.net/20.500.12809/10443