Abstract
Adversarial examples are a security risk for the deployment of artificial intelligence (AI) in 6G Consumer Electronics. Deep learning models are highly susceptible to adversarial attacks, and defending against such attacks is critical to the safety of 6G Consumer Electronics. However, effective defensive mechanisms against adversarial attacks in deep learning are still lacking. The primary issue is that it is not yet understood how adversarial examples deceive deep learning models; their underlying operating mechanism has not been fully explored, which constitutes a bottleneck in adversarial attack defense. This paper focuses on causality in adversarial examples by combining adversarial attack algorithms with causal inference methods. Specifically, we use a variety of adversarial attack algorithms to generate adversarial examples, and analyze the causal relationship between adversarial examples and original samples through causal inference. We then compare the causal effects between them to reveal the attack mechanism and uncover the reasons for misclassification. The expected contributions of this paper are: (1) revealing the mechanism and influencing factors of adversarial attacks, providing theoretical support for the security of deep learning models; (2) proposing a defense strategy based on causal inference, providing a practical method for defending deep learning models; and (3) offering new ideas and methods for adversarial attack defense in deep learning models.
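As a minimal sketch of how adversarial examples of the kind studied here can be generated and their effect on a classifier measured, the snippet below uses the Fast Gradient Sign Method (FGSM) in PyTorch. The model, the perturbation budget `epsilon`, and the prediction-flip rate are illustrative assumptions for exposition only; they are not the specific attack algorithms or the causal-effect estimator used in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Generate adversarial examples with FGSM (illustrative sketch).

    model   : a differentiable classifier returning logits
    x, y    : input batch and ground-truth labels
    epsilon : perturbation budget (assumed value for illustration)
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    model.zero_grad()
    loss.backward()
    # Step in the direction of the sign of the loss gradient.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

def prediction_flip_rate(model, x, x_adv):
    """Fraction of inputs whose predicted class changes under the
    perturbation -- a simple proxy for the perturbation's effect,
    not the paper's causal-inference analysis."""
    with torch.no_grad():
        pred_clean = model(x).argmax(dim=1)
        pred_adv = model(x_adv).argmax(dim=1)
    return (pred_clean != pred_adv).float().mean().item()
```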
| Original language | English |
| --- | --- |
| Journal | IEEE Transactions on Consumer Electronics |
| DOIs | |
| Publication status | Published - 26 Aug 2024 |
Keywords
- Adversarial Example
- Adversarial Attack
- Causal Inference
- Causality
- Consumer Electronics
- 6G