Deep Learning-based Prior- and Segmentation-Free Partial Volume Correction in 99mTc-TRODAT-1 SPECT for Parkinson's Disease (2024)

Meeting Report: Physics, Instrumentation & Data Sciences - Image Generation

Haiyan Wang, Wenbo Huang, Yibin Liu, Yu Du, Dong Liang, Hairong Zheng, Zhanli Hu and Greta Mok

Journal of Nuclear Medicine June 2024, 65 (supplement 2) 241036;

Abstract

Introduction: 99mTc-TRODAT-1 SPECT is an effective imaging method for the early detection of Parkinson's disease (PD). However, images acquired on current general-purpose SPECT scanners suffer from a severe partial volume effect, which impairs tissue boundary clarity and subsequent quantification accuracy. Existing partial volume correction (PVC) methods require prior information such as the SPECT system resolution or segmented CT or MR maps. This work proposes a prior- and segmentation-free deep learning (DL)-based PVC method for 99mTc-TRODAT-1 PD SPECT.

Methods: A population of 227 99mTc-TRODAT-1 digital brain phantoms (38 normal controls (NC), 103 PD stage 1, 67 PD stage 2, and 19 PD stage 3 according to the Hoehn & Yahr stage) is used in this study, modelling different anatomical variations, e.g., cerebrospinal fluid, skull, grey/white matter background, and 4 striatal compartments (left caudate (LC), right caudate (RC), left putamen (LP) and right putamen (RP)). Realistic noisy projections are generated using the SIMIND Monte Carlo program, modelling a conventional dual-head NaI SPECT system with a low-energy high-resolution parallel-hole collimator, and then reconstructed with compensation for attenuation, scatter, and collimator-detector response, with a matrix size of 128×128×128 and a voxel size of 2.7×2.7×2.7 mm³. Reconstructed SPECT images and the corresponding phantoms are paired for supervised learning and split into training, validation, and testing groups (160:22:45), maintaining a similar proportion of the different PD stages and NC in each group. We implement a 3D attention-based conditional generative adversarial network (Att-cGAN), a standard 3D cGAN, and a 3D U-Net using PyTorch with the Adam optimizer, running for 300 epochs. Traditional PVC methods, i.e., the iterative deconvolution-based Van Cittert (VC) method and the anatomical prior-guided iterative Yang (IY) method, are also implemented for comparison. The point-spread function input for these two PVC methods is set to 16.07 mm according to the system resolution, while the phantom brain masks are used for IY. Physical metrics of peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and root mean square error (RMSE), as well as clinical metrics of striatal binding ratio (SBR) and asymmetry index (ASI), are analyzed.
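For context, the VC baseline follows the standard Van Cittert iteration, f_{k+1} = f_k + α(g − h⊗f_k), where g is the reconstructed image and h the system point-spread function. Below is a minimal Python sketch assuming an isotropic Gaussian PSF with the stated 16.07 mm FWHM; the iteration count, relaxation factor alpha, and non-negativity constraint are illustrative choices not specified in the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def van_cittert_pvc(img, fwhm_mm=16.07, voxel_mm=2.7, n_iter=10, alpha=1.0):
    """Van Cittert iterative deconvolution with a Gaussian PSF model (sketch).

    img: 3D reconstructed SPECT volume (numpy array).
    fwhm_mm: assumed isotropic PSF FWHM, per the stated system resolution.
    n_iter, alpha: assumed stopping point and relaxation factor.
    """
    # Convert FWHM (mm) to Gaussian sigma in voxel units: FWHM = 2*sqrt(2*ln 2)*sigma.
    sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm
    f = img.astype(np.float64).copy()
    for _ in range(n_iter):
        blurred = gaussian_filter(f, sigma=sigma_vox)  # forward model h ⊗ f_k
        f += alpha * (img - blurred)                   # add back the residual
        np.clip(f, 0.0, None, out=f)                   # assumed non-negativity constraint
    return f
```

Each iteration adds back the residual between the observed image and the re-blurred estimate, which sharpens overall striatal uptake but, being purely deconvolution-based, has no anatomical information to separate the individual compartments.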

Results: All DL methods yield outstanding PVC performance, clearly separating the 4 striatal compartments and recovering uptake similar to that of the phantom. VC only enhances the overall striatal uptake and cannot retrieve the 4 striatal compartments, while IY can do so but overestimates the activity compared to the phantom. For the striatum region, Att-cGAN achieves better PSNR, SSIM, and RMSE than cGAN, U-Net, IY, VC, and non-PVC SPECT:

Method          PSNR             SSIM            RMSE
Att-cGAN        29.809 ± 1.736   0.882 ± 0.054   0.033 ± 0.007
cGAN            29.503 ± 1.750   0.879 ± 0.049   0.034 ± 0.007
U-Net           29.347 ± 1.812   0.874 ± 0.051   0.035 ± 0.008
IY              27.070 ± 0.995   0.862 ± 0.024   0.045 ± 0.005
VC              22.795 ± 0.703   0.556 ± 0.075   0.073 ± 0.006
Non-PVC SPECT   22.611 ± 0.776   0.555 ± 0.048   0.074 ± 0.007

Att-cGAN also obtains a generally lower mean absolute error of SBR and ASI than the other PVC methods and non-PVC SPECT, for both the whole striatum and the 4 individual compartments.
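For reference, the clinical metrics can be computed as below. This is a minimal sketch assuming the common DAT-SPECT conventions: SBR as striatal uptake relative to a non-specific reference region (e.g., occipital cortex), and ASI as the absolute left-right SBR difference normalized by their mean; the abstract does not spell out its exact definitions, so these formulas are assumptions.

```python
import numpy as np

def striatal_binding_ratio(img, striatal_mask, reference_mask):
    """SBR = (C_striatum - C_reference) / C_reference (common convention, assumed here).

    img: 3D volume; striatal_mask, reference_mask: boolean arrays of the same shape.
    """
    c_striatum = img[striatal_mask].mean()
    c_reference = img[reference_mask].mean()
    return (c_striatum - c_reference) / c_reference

def asymmetry_index(sbr_left, sbr_right):
    """ASI: absolute left/right SBR difference over their mean, in percent (assumed form)."""
    return abs(sbr_left - sbr_right) / ((sbr_left + sbr_right) / 2.0) * 100.0
```

Under these definitions, the reported mean absolute error is the average of |metric(corrected image) − metric(phantom)| over the test set, evaluated per compartment and for the whole striatum.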

Conclusions: DL-based prior- and segmentation-free PVC is feasible for 99mTc-TRODAT-1 PD SPECT, based on evaluation on highly realistic simulated SPECT data. The proposed Att-cGAN-based method is superior to the cGAN, U-Net, standard IY, and VC methods in both physical (PSNR, SSIM, and RMSE) and clinical (SBR and ASI) metrics. Further translation of the proposed DL-based PVC method to real clinical data without ground truth is warranted.

This work is supported by the Science and Technology Development Fund, Macao (FDCT 0016/2023/RIB1), the National Natural Science Foundation of China (82372038), the Shenzhen Excellent Technological Innovation Talent Training Project of China (RCJC20200714114436080), and the Shenzhen Medical Research Funds of China (B2301002).
