WaveSNet: Wavelet Integrated Deep Networks for Image Segmentation

Qiufu Li, Linlin Shen

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review

4 Citations (Scopus)


In deep networks, the loss of data details during down-sampling significantly degrades image segmentation performance. In this paper, we propose to apply the Discrete Wavelet Transform (DWT) to extract the data details during feature map down-sampling, and to adopt the Inverse DWT (IDWT) with the extracted details during up-sampling to recover them. Based on the popular image segmentation networks U-Net, SegNet, and DeepLabV3+, we design wavelet integrated deep networks for image segmentation (WaveSNets). Owing to the effectiveness of the DWT/IDWT in processing data details, experimental results on CamVid, Pascal VOC, and Cityscapes show that our WaveSNets achieve better segmentation performance than their vanilla counterparts.
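The down-sampling/up-sampling scheme described above can be sketched with a single-level 2D Haar transform in NumPy. This is a minimal illustration under assumed conventions, not the authors' implementation: in WaveSNet the transform is integrated into the network's pooling and unpooling layers, and the function names here (`haar_dwt2`, `haar_idwt2`) are hypothetical.

```python
import numpy as np

def haar_dwt2(x):
    """2D Haar DWT: split a (H, W) feature map into a half-resolution
    low-frequency band LL and three half-resolution detail bands."""
    a = x[0::2, 0::2]  # top-left of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low-frequency (down-sampled) map
    lh = (a - b + c - d) / 2  # horizontal detail
    hl = (a + b - c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, (lh, hl, hh)

def haar_idwt2(ll, details):
    """Inverse 2D Haar DWT: recombine LL with the stored detail bands
    to recover the original-resolution map exactly."""
    lh, hl, hh = details
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w), dtype=ll.dtype)
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

feat = np.random.randn(8, 8).astype(np.float32)
ll, details = haar_dwt2(feat)    # encoder: down-sample, keep details
recon = haar_idwt2(ll, details)  # decoder: up-sample with the details
print(np.allclose(recon, feat, atol=1e-5))  # → True (lossless round trip)
```

The key contrast with max-pooling is that the detail bands are kept and fed to the matching up-sampling stage, so the round trip loses no information.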

Original language: English
Title of host publication: Pattern Recognition and Computer Vision - 5th Chinese Conference, PRCV 2022, Proceedings
Editors: Shiqi Yu, Jianguo Zhang, Zhaoxiang Zhang, Tieniu Tan, Pong C. Yuen, Yike Guo, Junwei Han, Jianhuang Lai
Publisher: Springer Science and Business Media Deutschland GmbH
Number of pages: 13
ISBN (Print): 9783031189159
Publication status: Published - 2022
Event: 5th Chinese Conference on Pattern Recognition and Computer Vision, PRCV 2022 - Shenzhen, China
Duration: 4 Nov 2022 – 7 Nov 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13537 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 5th Chinese Conference on Pattern Recognition and Computer Vision, PRCV 2022


Keywords

  • Deep network
  • Image segmentation
  • Wavelet transform

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science


