A survey of adversarial example generation algorithms based on deep neural networks and their applications
Biography: LU Mingfeng (鲁溟峰) (1978— ), male, PhD, senior experimentalist; research interests include fractional-domain optical measurement and artificial intelligence security.

CLC number: TP183

Abstract:

With the rapid development of deep learning theory, technologies based on deep neural networks have been widely adopted and have achieved unexpectedly strong practical results in many application fields. However, numerous studies in recent years have found that deep neural networks suffer from inherent flaws that make them vulnerable to adversarial examples, seriously compromising the security of deployed models; research on this problem has therefore become a popular direction in the deep learning community. Studying adversarial examples helps expose weaknesses in models so that they can be defended against, and exploring their applications offers new ideas for research areas such as privacy protection. This survey first introduces the important concepts and terminology of adversarial example research; it then reviews the field's foundational algorithms and the more advanced algorithms derived from them, organized chronologically and by generation mechanism; next, it presents the specific domains and applications that adversarial examples have reached in recent years; finally, it discusses further research directions and application scenarios for the field.
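To make the notion of a generation algorithm concrete, the following is a minimal sketch of the fast gradient sign method (FGSM), one of the foundational attacks covered by surveys of this field: the input is perturbed by a small step in the direction of the sign of the loss gradient. The tiny logistic "model" below is a stand-in for a deep network, chosen only so the example is self-contained; all names and parameter values here are illustrative assumptions, not drawn from the surveyed paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(w, b, x, y):
    """Gradient of the binary cross-entropy loss with respect to the input x.

    For logistic regression p = sigmoid(w.x + b), d(loss)/dx = (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(w, b, x, y, eps):
    """Perturb x by eps in the sign direction of the input gradient."""
    g = loss_grad_wrt_input(w, b, x, y)
    return x + eps * np.sign(g)

# Illustrative run: a correctly classified point becomes misclassified
# after an L-infinity perturbation of size eps.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, -0.2])   # clean input: model output > 0.5, class 1
y = 1.0                     # true label
x_adv = fgsm(w, b, x, y, eps=0.5)
```

After the attack, the model's output on `x_adv` drops below the decision threshold even though each coordinate of the input moved by only `eps`, which is the defining behavior of this family of attacks.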

History
  • Online publication date: 2022-07-11