Adversarial attacks pose a significant security threat to machine learning models, including graph neural networks (GNNs): small, carefully crafted perturbations to the input data can cause a trained model to make incorrect predictions. This tutorial covers implementing four types of adversarial attacks on GNNs: FGSM, PGD, Carlini & Wagner (C&W), and DeepFool.
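The core idea behind gradient-based attacks such as FGSM can be sketched in a few lines. This is a minimal illustration, not the tutorial's implementation: a plain linear classifier stands in for a GNN (the attack logic is the same when perturbing node features), and the epsilon value is an arbitrary choice.

```python
import torch

torch.manual_seed(0)

model = torch.nn.Linear(4, 2)               # stand-in for a trained GNN
x = torch.randn(1, 4, requires_grad=True)   # stand-in for node features
y = torch.tensor([1])                       # true label

# Compute the loss and backpropagate to get the gradient w.r.t. the input
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()

# FGSM: take one step of size eps in the direction of the gradient's sign,
# which maximally increases the loss under an L-infinity budget of eps
eps = 0.1
x_adv = (x + eps * x.grad.sign()).detach()

# The perturbation never exceeds eps in any coordinate
print(torch.max(torch.abs(x_adv - x)).item())
```

Feeding `x_adv` back through the model may flip its prediction even though the input has barely changed; that is the failure mode the attacks in this tutorial exploit.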
Table of contents

- Adversarial Attacks in Graph Neural Networks
- Install Libraries
- Load Dataset
- Define Graph Neural Network
- Set Hyperparameters
- Train Model
- FGSM Attack
- PGD Attack
- Carlini & Wagner (C&W) Attack
- DeepFool Attack
- Solution to Adversarial Attacks: Mixed Training
- Conclusion