Automatic dual-modality breast tumor segmentation in PET/CT images using CT-guided transformer

  • Huizhong Zheng
  • Dan Shao
  • Zhenxing Huang
  • Yongfeng Yang
  • Hairong Zheng
  • Dong Liang
  • Yuan Yao
  • Xiangjian He
  • Zhanli Hu

Research output: Journal Publication › Article › peer-review

1 Citation (Scopus)

Abstract

Background: Breast tumor segmentation is crucial for the diagnosis of breast cancer, as it enables radiologists to rapidly identify regions of interest and facilitates subsequent analysis, diagnosis, and treatment. Existing breast tumor segmentation methods are typically applied to high-resolution computed tomography (CT) images; far fewer have been developed for positron emission tomography/computed tomography (PET/CT) imaging systems.

Purpose: Our goal is to develop a deep learning algorithm that combines functional and structural information for breast tumor segmentation in PET/CT images. This can enhance analytical accuracy and speed up the delivery of segmentation results, thereby assisting physicians in subsequent patient diagnosis and treatment.

Methods: In this study, we explore an automatic image segmentation model that segments breast tumors in PET images. The proposed CT-guided transformer modules use multi-scale features from CT images to generate attention maps for PET features. During fusion, effective consensus information is extracted from the features of the two modalities using similarity-based contrastive learning, enhancing segmentation performance. Five metrics (Jaccard coefficient, Dice score, precision, sensitivity, and Hausdorff distance) are used to evaluate segmentation performance, and the proposed algorithm is compared with single-modality methods and other multimodal fusion strategies.

Results: Experiments are conducted on a collected clinical breast dataset and the public QIN-Breast benchmark. The proposed algorithm accurately segments the outline of breast tumors, achieving 86.19% Dice and 75.73% Jaccard on the primary dataset and outperforming standard cross-attention on the public dataset by 3.86% Dice and 3.64% Jaccard. Quantitative and visual results confirm that our method outperforms single-modality input methods and other fusion methods. We additionally examine the per-case distribution of the metrics to further demonstrate the superiority of our approach.

Conclusion: We present a deep-learning-based method for the joint segmentation of anatomical and functional PET/CT images. Compared with single-modality and dual-modality methods using various fusion strategies, our approach significantly improves the accuracy of breast tumor delineation, demonstrating great potential for breast tumor diagnosis.
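To make the fusion idea concrete, below is a minimal sketch of one plausible CT-guided attention block in PyTorch, assuming a standard multi-head cross-attention layout. All names (CTGuidedAttention, d_model, the query/key/value assignment, the shapes) are illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn as nn

class CTGuidedAttention(nn.Module):
    """Cross-attention in which CT features produce the attention map
    that re-weights PET features (CT as query/key, PET as value)."""

    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, pet_feats: torch.Tensor, ct_feats: torch.Tensor) -> torch.Tensor:
        # pet_feats, ct_feats: (batch, tokens, d_model), i.e. feature maps
        # of one scale flattened into token sequences.
        # Attention weights are computed from CT alone and applied to PET,
        # so structural CT context decides which PET activations to keep.
        fused, _ = self.attn(query=ct_feats, key=ct_feats, value=pet_feats)
        return self.norm(fused + pet_feats)  # residual + layer norm

# Usage on one scale of a feature pyramid (hypothetical shapes).
pet = torch.randn(2, 32 * 32, 256)  # flattened 32x32 PET feature map
ct = torch.randn(2, 32 * 32, 256)   # matching CT feature map
print(CTGuidedAttention()(pet, ct).shape)  # torch.Size([2, 1024, 256])
```

Routing CT through the query/key projections and PET through the values is one way to read "CT features generate attention maps for PET features"; the paper's module may differ, for example in how the multiple scales are combined or in the similarity-based contrastive loss applied during fusion.

Similarly, the five evaluation metrics named above have standard definitions; the sketch below computes them from binary masks with NumPy, using SciPy's directed_hausdorff for the Hausdorff distance. The function name segmentation_metrics and the voxel-coordinate approximation of the Hausdorff distance are illustrative choices, not the paper's evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Standard metrics for binary segmentation masks (predicted vs. ground truth)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    jaccard = tp / (tp + fp + fn + eps)    # intersection over union
    precision = tp / (tp + fp + eps)
    sensitivity = tp / (tp + fn + eps)     # recall
    # Symmetric Hausdorff distance, approximated here over all foreground
    # voxel coordinates rather than extracted surfaces, for brevity.
    p, g = np.argwhere(pred), np.argwhere(gt)
    hausdorff = max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
    return dice, jaccard, precision, sensitivity, hausdorff
```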
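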

Original language: English
Article number: e70136
Journal: Medical Physics
Volume: 52
Issue number: 11
DOIs
Publication status: Published - Nov 2025

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 3 - Good Health and Well-being

Free Keywords

  • breast tumor segmentation
  • CT-guided transformer
  • PET/CT imaging

ASJC Scopus subject areas

  • Biophysics
  • Radiology, Nuclear Medicine and Imaging
