Chinese Journal of Computational Physics ›› 2023, Vol. 40 ›› Issue (6): 742-751. DOI: 10.19596/j.cnki.1001-246x.8684


Fault Identification of Post-stack Seismic Data by an Improved Unet Network

Guixin LIU, Zhonghua MA

  1. Tianjin University of Technology and Education, Tianjin 300350, China
  • Received: 2022-12-19 Online: 2023-11-25 Published: 2024-01-22
  • Corresponding author: Zhonghua MA
  • About the author: Guixin LIU (1997-), female, master's student; her main research interests are high-performance computing and parallel computing. E-mail: 0221201012@tute.edu.cn
  • Supported by:
    Scientific Research Program of the Tianjin Municipal Education Commission (2020KJ115)

Fault Identification of Post-stack Seismic Data by an Improved Unet Network

Guixin LIU, Zhonghua MA

  1. Tianjin University of Technology and Education, Tianjin 300350, China
  • Received: 2022-12-19 Online: 2023-11-25 Published: 2024-01-22
  • Contact: Zhonghua MA

Abstract:

To improve the accuracy of fault identification, an improved Unet model is proposed. A multi-branch parallel structure, the M-block (Multi-branch block), is designed for the encoder; it captures multi-scale context information, and the parallel multi-branch structure yields performance gains. A Self-Attention block and an attention gating mechanism are added to the decoder. By taking a weighted average over the context of the input features, Self-Attention not only allows the attention module to flexibly focus on different regions of the image, but also compensates for the locality of CNNs (Convolutional Neural Networks), opening up more possibilities for the network. Experiments on synthetic and field data confirm that the model combines the weight-sharing advantage of conventional convolution with Self-Attention's dynamic computation of attention weights, improving the accuracy of fault identification; compared with Unet, the validation loss is reduced by 33.68%. The model not only identifies fault features accurately, but is also more accurate than currently popular deep learning methods.
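The abstract does not spell out the internal structure of the M-block. As an illustration only, the following is a minimal PyTorch sketch of one plausible multi-branch parallel encoder block; the branch count, dilation rates, and channel split are assumptions rather than the authors' design.

```python
# Hypothetical sketch of a multi-branch parallel encoder block ("M-block").
# Assumed design: three parallel 3x3 convolution branches with different
# dilation rates, concatenated and fused by a 1x1 convolution, so that the
# block sees several receptive-field scales at once.
import torch
import torch.nn as nn


class MBlock(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        branch_channels = out_channels // 3
        # Three parallel branches with increasing receptive fields (assumed).
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, branch_channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(branch_channels),
                nn.ReLU(inplace=True),
            )
            for d in (1, 2, 3)
        ])
        # 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Sequential(
            nn.Conv2d(branch_channels * 3, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Run the branches in parallel on the same input and concatenate
        # their outputs along the channel dimension.
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(feats)


if __name__ == "__main__":
    block = MBlock(in_channels=1, out_channels=48)
    seismic_patch = torch.randn(2, 1, 128, 128)  # batch of post-stack seismic slices
    print(block(seismic_patch).shape)            # torch.Size([2, 48, 128, 128])
```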

Key words: fault identification, multi-branch, Self-Attention, Unet, encoder

Abstract:

In order to improve the accuracy of fault identification, an improved Unet model is proposed. A multi-branch parallel structure, the M-block (Multi-branch block), is designed for the encoder. It captures multi-scale context information, and the parallel multi-branch structure yields performance gains. A Self-Attention block and an attention gating mechanism are added to the decoder. Through a weighted average over the context of the input features, Self-Attention not only enables the attention module to flexibly focus on different regions of the image, but also makes up for the locality of CNNs (Convolutional Neural Networks) and brings more possibilities to the neural network. Experiments on synthetic and field data verify that the model combines the weight-sharing advantage of traditional convolution with Self-Attention's dynamic computation of attention weights to improve the accuracy of fault identification. Compared with Unet, the validation loss is reduced by 33.68%. The model not only identifies fault features accurately, but is also more accurate than currently popular deep learning methods.
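The decoder-side attention gating and Self-Attention block are likewise only named in the abstract. The sketch below assumes an additive attention gate in the style of Attention U-Net on the skip connection, plus a non-local-style Self-Attention layer that re-weights features via a weighted average over all spatial positions; all module names, channel sizes, and their placement in the decoder are hypothetical.

```python
# Hypothetical sketch of the decoder-side attention components.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGate(nn.Module):
    """Additive attention gate: a coarser gating signal suppresses
    irrelevant regions in the encoder skip features (assumed design)."""

    def __init__(self, gate_ch: int, skip_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        g = F.interpolate(self.phi(gate), size=skip.shape[2:],
                          mode="bilinear", align_corners=False)
        alpha = torch.sigmoid(self.psi(F.relu(self.theta(skip) + g)))
        return skip * alpha  # per-pixel attention weights in [0, 1]


class SelfAttention2d(nn.Module):
    """Non-local-style self-attention: each position is updated with a
    weighted average over the whole feature map, compensating for the
    locality of convolution (assumed implementation)."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)      # (B, HW, C//8)
        k = self.key(x).flatten(2)                        # (B, C//8, HW)
        v = self.value(x).flatten(2)                      # (B, C, HW)
        attn = torch.softmax(q @ k, dim=-1)               # (B, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w) # weighted average of contexts
        return self.gamma * out + x


if __name__ == "__main__":
    skip = torch.randn(2, 64, 64, 64)   # encoder skip features
    gate = torch.randn(2, 128, 32, 32)  # coarser decoder features
    gated = AttentionGate(gate_ch=128, skip_ch=64, inter_ch=32)(skip, gate)
    refined = SelfAttention2d(64)(gated)
    print(gated.shape, refined.shape)   # both torch.Size([2, 64, 64, 64])
```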

Key words: fault identification, multi-branch, self-attention, Unet, encoder

CLC number: