Project details

School of Electrical & Electronic Engineering


Proj No. A3065-251
Title Robustness of Deep Learning to Adversarial Attacks
Summary Deep learning models have been shown to be extremely vulnerable to adversarial attacks, which can mislead them at a high success rate by adding small perturbations to their inputs. For example, a change to the pixels of a “panda” image that is imperceptible to the naked eye can cause a deep learning model to misclassify it as a “gibbon”. The potential impact of adversarial attacks has greatly restricted the deployment of AI in safety-critical domains. Understanding the underlying mechanisms of adversarial attacks and proposing effective methods to enhance the robustness of deep learning are therefore significant research concerns. In this project, students can either study the generalization mechanism of adversarial examples produced by attacks such as FGSM to design a stronger attack model, or investigate defense methods such as Randomized Smoothing to make deep learning models more robust (illustrative sketches of both techniques follow the project details below).
Supervisor A/P Jiang Xudong (Loc: S1-B1C-105, Ext: +65 67905018)
Co-Supervisor -
RI Co-Supervisor -
Lab Centre for Information Sciences & Systems (CISS) (Loc: S2-B4b-05)
Single/Group: Single
Area: Digital Media Processing and Computer Engineering
ISP/RI/SMP/SCP?:
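
The following are minimal, illustrative PyTorch sketches of the two techniques named in the summary, not part of the project specification; the names model, x, y, epsilon, sigma, and n_samples are placeholders for a classifier and its inputs.

FGSM (Goodfellow et al., 2015), the attack behind the panda/gibbon example, perturbs an input by one step of size epsilon in the direction of the sign of the loss gradient:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon):
        # One-step FGSM: move each pixel by epsilon in the direction
        # that increases the classification loss.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        # Keep the adversarial image in the valid pixel range [0, 1].
        return x_adv.clamp(0.0, 1.0).detach()

Randomized Smoothing (Cohen et al., 2019) builds a smoothed classifier that returns the class most often predicted under Gaussian input noise; the sketch below shows prediction only and omits the statistical certification and abstention steps of the full method:

    def smoothed_predict(model, x, sigma, n_samples=100):
        # Classify n_samples noisy copies of a single image x (shape C x H x W)
        # and return the majority-vote class.
        with torch.no_grad():
            batch = x.unsqueeze(0).repeat(n_samples, 1, 1, 1)
            noisy = batch + sigma * torch.randn_like(batch)
            votes = model(noisy).argmax(dim=1)
            return torch.bincount(votes).argmax().item()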