Deep Neural Networks (DNNs) have been utilized in various applications
ranging from image classification and facial recognition to medical imagery
analysis and real-time object detection. As models grow more sophisticated
and complex, the computational cost of training them becomes a burden for
small companies and individuals; for this reason, outsourcing the training
process has become the go-to option for such users.
Unfortunately, outsourced training comes at the cost of vulnerability to
backdoor attacks. These attacks aim to establish hidden backdoors in the DNN
such that the model performs well on benign samples but outputs a particular
target label whenever a trigger is applied to the input.
Current backdoor attacks rely on generating triggers in the image/pixel domain;
however, as we show in this paper, this is not the only domain that can be
exploited, and one should always “check the other doors”. In this work, we propose a complete
pipeline for generating a dynamic, efficient, and invisible backdoor attack in
the frequency domain. We show the advantages of utilizing the frequency domain
for establishing undetectable and powerful backdoor attacks through extensive
experiments on various datasets and network architectures. The backdoored
models are shown to break various state-of-the-art defences. We also present
two possible defences that succeed against frequency-based backdoor attacks,
along with ways an attacker can bypass them. We conclude with remarks on a
network’s learning capacity and the feasibility of embedding a backdoor
attack in the model.
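
To make the attack surface concrete, the sketch below embeds a trigger by
shifting a few mid/high-frequency DCT coefficients of an image and
transforming back to the pixel domain. This is a minimal illustration, not
the paper’s actual pipeline: the coefficient positions, the perturbation
magnitude, and the apply_frequency_trigger helper are assumptions chosen
for exposition.

# Minimal sketch of a frequency-domain trigger (illustrative only;
# coefficient positions and magnitude are assumed, not the paper's values).
import numpy as np
from scipy.fftpack import dct, idct

def dct2(x):
    # 2-D type-II DCT with orthonormal scaling.
    return dct(dct(x, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(x):
    # 2-D inverse DCT with orthonormal scaling.
    return idct(idct(x, axis=0, norm="ortho"), axis=1, norm="ortho")

def apply_frequency_trigger(image, positions=((15, 15), (31, 31)), magnitude=30.0):
    # Poison one grayscale image by shifting a few mid/high-frequency
    # DCT coefficients, then map the result back to valid pixel values.
    coeffs = dct2(image.astype(np.float64))
    for u, v in positions:
        coeffs[u, v] += magnitude      # trigger lives in frequency space
    return np.clip(idct2(coeffs), 0.0, 255.0)

# Usage: poison a random 32x32 "image" and measure the pixel-level change.
img = np.random.randint(0, 256, (32, 32)).astype(np.float64)
poisoned = apply_frequency_trigger(img)
print("max per-pixel change:", np.abs(poisoned - img).max())

Because each orthonormal DCT basis function spreads its energy across the
whole image, the resulting per-pixel change is small, which is one reason a
frequency-domain trigger can remain visually imperceptible while still being
learnable by the network.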
