TY - JOUR
T1 - TFGNet
T2 - Frequency-guided saliency detection for complex scenes
AU - Wang, Yi
AU - Wang, Ruili
AU - Liu, Juncheng
AU - Xu, Rui
AU - Wang, Tianzhu
AU - Hou, Feng
AU - Liu, Bin
AU - Lei, Na
N1 - Publisher Copyright:
© 2025
PY - 2025/2
Y1 - 2025/2
N2 - Salient object detection (SOD) with accurate boundaries in complex and chaotic natural or social scenes remains a significant challenge. Many edge-aware and/or two-branch models rely on exchanging global and local information between multistage features, which can propagate errors and lead to incorrect predictions. To address this issue, this work examines the fundamental problems in current U-Net-based SOD models from the perspective of image spatial-frequency decomposition and synthesis. A concise and efficient Frequency-Guided Network (TFGNet) is proposed that simultaneously learns the boundary details (high spatial frequency) and inner regions (low spatial frequency) of salient regions in two separate branches. Each branch utilizes a Multiscale Frequency Feature Enhancement (FFE) module to learn pixel-wise frequency features and a Transformer-based decoder to learn mask-wise frequency features, enabling a more comprehensive understanding of salient regions. TFGNet eliminates the need to exchange global and local features at the intermediate layers of the two branches, thereby reducing interference from erroneous information. A hybrid loss function is also proposed that combines BCE, IoU, and histogram dissimilarity losses to ensure pixel accuracy, structural integrity, and frequency-distribution consistency between the ground-truth and predicted saliency maps. Comprehensive evaluations on five widely used SOD datasets and one underwater SOD dataset demonstrate the superior performance of TFGNet compared with state-of-the-art methods. The code and results are available at https://github.com/yiwangtz/TFGNet.
AB - Salient object detection (SOD) with accurate boundaries in complex and chaotic natural or social scenes remains a significant challenge. Many edge-aware and/or two-branch models rely on exchanging global and local information between multistage features, which can propagate errors and lead to incorrect predictions. To address this issue, this work examines the fundamental problems in current U-Net-based SOD models from the perspective of image spatial-frequency decomposition and synthesis. A concise and efficient Frequency-Guided Network (TFGNet) is proposed that simultaneously learns the boundary details (high spatial frequency) and inner regions (low spatial frequency) of salient regions in two separate branches. Each branch utilizes a Multiscale Frequency Feature Enhancement (FFE) module to learn pixel-wise frequency features and a Transformer-based decoder to learn mask-wise frequency features, enabling a more comprehensive understanding of salient regions. TFGNet eliminates the need to exchange global and local features at the intermediate layers of the two branches, thereby reducing interference from erroneous information. A hybrid loss function is also proposed that combines BCE, IoU, and histogram dissimilarity losses to ensure pixel accuracy, structural integrity, and frequency-distribution consistency between the ground-truth and predicted saliency maps. Comprehensive evaluations on five widely used SOD datasets and one underwater SOD dataset demonstrate the superior performance of TFGNet compared with state-of-the-art methods. The code and results are available at https://github.com/yiwangtz/TFGNet.
KW - Convolutional neural network
KW - Salient object detection
KW - Spatial frequency
KW - Transformer
UR - http://www.scopus.com/inward/record.url?scp=85214284682&partnerID=8YFLogxK
U2 - 10.1016/j.asoc.2024.112685
DO - 10.1016/j.asoc.2024.112685
M3 - Article
AN - SCOPUS:85214284682
SN - 1568-4946
VL - 170
JO - Applied Soft Computing Journal
JF - Applied Soft Computing Journal
M1 - 112685
ER -