Abstract
Computed Tomography (CT) is an advanced imaging technology. To obtain high-resolution (HR) CT images from low-resolution (LR) sinograms, we present a deep-learning (DL) based CT super-resolution (SR) method. The proposed method combines an SR model in the sinogram domain with an iterative reconstruction framework into a CT SR algorithm. We unroll the proposed method into a DL network (SRECT-Net) that adaptively estimates the inherent blurring effect caused by the insufficient sampling of the LR X-ray detector. For a CT system with a fixed scanning protocol, the system blur remains relatively stable. Inspired by this fact, the proposed method can be pre-trained on large amounts of simulated data, effectively fine-tuned with just a single sample, and thus yield a machine-specific SR model. The proposed SRECT was evaluated by SR CT imaging of a Catphan700 phantom and a ham, and its performance was compared with other DL-based CT SR methods. The results show that the proposed SRECT provides CT SR reconstruction performance superior to other state-of-the-art CT SR methods, demonstrating its potential for improving CT resolution beyond the hardware limit, lowering CT hardware requirements, or reducing X-ray dose during CT imaging.
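Since the abstract only outlines the approach, the following is a minimal PyTorch sketch of a generic unrolled sinogram-domain SR network of the kind described. The stage structure, the simple detector-binning forward model (`bin_detector`), and all layer and parameter choices are illustrative assumptions for exposition; they are not the actual SRECT-Net architecture or training procedure.

```python
# A minimal sketch (not the authors' released code) of an unrolled sinogram-domain
# SR network. The detector-binning forward model and stage design are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SinogramSRStage(nn.Module):
    """One unrolled iteration: a data-consistency step against the assumed
    detector-binning model, followed by a small CNN refinement step."""
    def __init__(self, channels=32):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )
        self.step = nn.Parameter(torch.tensor(0.1))  # learned step size

    @staticmethod
    def bin_detector(x, factor=2):
        # Assumed forward model: each LR detector bin averages adjacent HR bins.
        return F.avg_pool2d(x, kernel_size=(1, factor))

    def forward(self, hr_est, lr_sino, factor=2):
        # Gradient step on ||bin(hr_est) - lr_sino||^2 (adjoint approximated by upsampling).
        residual = self.bin_detector(hr_est, factor) - lr_sino
        grad = F.interpolate(residual, scale_factor=(1, factor), mode="nearest")
        hr_est = hr_est - self.step * grad
        return hr_est + self.refine(hr_est)  # learned residual refinement

class UnrolledSinogramSR(nn.Module):
    def __init__(self, n_stages=5, factor=2):
        super().__init__()
        self.factor = factor
        self.stages = nn.ModuleList([SinogramSRStage() for _ in range(n_stages)])

    def forward(self, lr_sino):
        # Initialise the HR sinogram by interpolating along the detector axis.
        hr = F.interpolate(lr_sino, scale_factor=(1, self.factor),
                           mode="bilinear", align_corners=False)
        for stage in self.stages:
            hr = stage(hr, lr_sino, self.factor)
        return hr

if __name__ == "__main__":
    lr = torch.randn(1, 1, 360, 256)       # (batch, channel, views, LR detector bins)
    print(UnrolledSinogramSR()(lr).shape)   # -> torch.Size([1, 1, 360, 512])
```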
| Original language | English |
|---|---|
| Pages (from-to) | 2290-2294 |
| Number of pages | 5 |
| Journal | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
| DOIs | |
| Publication status | Published - Mar 2024 |
| Event | 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2024 - Seoul, Korea, Republic of. Duration: 14 Apr 2024 → 19 Apr 2024 |
Keywords
- Computed Tomography
- Low-cost
- Machine-specific
- Spatial Resolution
ASJC Scopus subject areas
- Software
- Signal Processing
- Electrical and Electronic Engineering