Abstract
Thoracic cone-beam computed tomography (CBCT) is routinely acquired during image-guided radiation therapy (IGRT) to provide updated patient anatomy for lung cancer treatment. However, CBCT images often suffer from streaking artifacts and noise caused by under-sampled projections and low-dose exposure, resulting in a loss of lung anatomy that contains crucial pulmonary tumorous and functional information. While recent deep learning-based CBCT enhancement methods have shown promising results in suppressing artifacts, their ability to preserve the anatomical details that carry crucial tumorous information is limited by a lack of targeted guidance. To address this issue, we propose a novel feature-targeted deep learning framework that generates ultra-quality pulmonary imaging from CBCT of lung cancer patients via a multi-task customized feature-to-feature perceptual loss function and a feature-guided CycleGAN. The framework comprises two main components: a multi-task learning feature-selection network (MTFS-Net) for building a customized feature-to-feature perceptual loss function (CFP-loss), and a feature-guided CycleGAN network. Our experiments showed that the proposed framework can generate synthesized CT (sCT) images of the lung with high similarity to CT images, achieving an average SSIM of 0.9747 and an average PSNR of 38.5995 dB globally, and an average Pearson's coefficient of 0.8929 within the tumor region on multi-institutional datasets. The sCT images also performed well visually, with effective artifact suppression, noise reduction, and preservation of distinctive anatomical detail. Functional imaging tests further demonstrated the pulmonary texture correction performance of the sCT images: the similarity between functional imaging generated from sCT and from CT images reached an average DSC of 0.9147, an SCC of 0.9615, and an R value of 0.9661.
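As a concrete illustration of the global similarity metrics reported above, the sketch below computes PSNR and a Pearson correlation coefficient in NumPy on a toy CT/sCT pair. The function names and the synthetic data are illustrative assumptions, not the paper's evaluation pipeline.

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference CT and a synthesized CT."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

def pearson_r(ref: np.ndarray, test: np.ndarray) -> float:
    """Pearson correlation coefficient over a region of interest (e.g. the tumor region)."""
    a = ref.ravel().astype(np.float64)
    b = test.ravel().astype(np.float64)
    return float(np.corrcoef(a, b)[0, 1])

# Toy example: a synthesized image modeled as a noisy copy of the reference.
rng = np.random.default_rng(0)
ct = rng.random((64, 64))
sct = np.clip(ct + rng.normal(0.0, 0.01, ct.shape), 0.0, 1.0)
print(f"PSNR = {psnr(ct, sct):.2f} dB, Pearson r = {pearson_r(ct, sct):.4f}")
```

In practice such metrics would be computed on Hounsfield-unit volumes with an appropriate `data_range`, and the Pearson coefficient restricted to a tumor mask.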
Comparison experiments against a pixel-to-pixel loss also showed that the proposed perceptual loss significantly enhances the performance of the generative models involved. Our experimental results indicate that the proposed framework outperforms state-of-the-art models for pulmonary CBCT enhancement. The framework holds great promise for generating high-quality pulmonary imaging from CBCT that can support further analysis in lung cancer treatment.
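The feature-to-feature perceptual loss contrasted with the pixel-to-pixel loss above compares CT and sCT in a feature space rather than pixel space. The minimal sketch below illustrates the idea with fixed edge filters standing in for features selected by a network such as the MTFS-Net; `conv2d`, the Sobel kernels, and the data here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def conv2d(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2-D convolution followed by ReLU; stands in for one layer
    of a pretrained feature extractor."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)

def perceptual_loss(ct: np.ndarray, sct: np.ndarray, kernels) -> float:
    """Mean squared distance between the CT and sCT feature maps, averaged
    over the selected feature filters."""
    loss = 0.0
    for k in kernels:
        loss += np.mean((conv2d(ct, k) - conv2d(sct, k)) ** 2)
    return loss / len(kernels)

# Hand-picked "feature" filters (edge detectors) standing in for the
# features a selection network would choose.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T
rng = np.random.default_rng(1)
ct = rng.random((32, 32))
sct = ct + rng.normal(0.0, 0.05, ct.shape)
print(f"perceptual loss = {perceptual_loss(ct, sct, [sobel_x, sobel_y]):.6f}")
```

Because the loss is taken over feature responses rather than raw pixels, it penalizes the loss of edge and texture structure, which is the intuition behind using it to preserve anatomical detail.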
| Original language | English |
|---|---|
| Article number | 102487 |
| Journal | Computerized Medical Imaging and Graphics |
| Volume | 121 |
| Early online date | 26 Jan 2025 |
| Publication status | Published - Apr 2025 |
| Externally published | Yes |
Bibliographical note
Copyright © 2025 The Authors. Published by Elsevier Ltd. All rights reserved.
Funding
This research was partly supported by the General Research Fund (15103520), the University Research Committee, the Health and Medical Research Fund (07183266, 09200576), the Shenzhen Science and Technology Program (JCYJ20230807140403007), the PolyU (UGC) RI-IWEAR Seed Project (P0044802), and the Health Bureau, The Government of the Hong Kong Special Administrative Region.
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 3 Good Health and Well-being
Keywords
- Cone-beam computed tomography
- Image-to-image translation
- Perceptual loss
- Multi-task learning
- Pulmonary imaging
Fingerprint
Dive into the research topics of 'Feature-targeted deep learning framework for pulmonary tumorous Cone-beam CT (CBCT) enhancement with multi-task customized perceptual loss and feature-guided CycleGAN'. Together they form a unique fingerprint.