A deep neural network for parametric image reconstruction on a large axial field-of-view PET.

Li, Y; Hu, J; Sari, H; Xue, S; Ma, R; Kandarpa, S; Visvikis, D; Rominger, A; Liu, H; Shi, K (2023). A deep neural network for parametric image reconstruction on a large axial field-of-view PET. European journal of nuclear medicine and molecular imaging, 50(3), pp. 701-714. Springer 10.1007/s00259-022-06003-4


PURPOSE

PET scanners with a long axial field of view (AFOV) offer roughly 20 times higher sensitivity than conventional scanners, opening new opportunities for enhanced parametric imaging, but they also suffer from a dramatically increased volume and complexity of dynamic data. This study reconstructed high-quality direct Patlak Ki images from five-frame sinograms without an input function using a deep learning framework based on DeepPET, exploring the potential of artificial intelligence to reduce both the acquisition time and the dependence on an input function in parametric imaging.
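For readers less familiar with the quantity being reconstructed, the Patlak graphical model (a standard formulation, not reproduced from this paper) relates the late-time tissue activity C_T(t) to the plasma input function C_p(t) as

\[
\frac{C_T(t)}{C_p(t)} \;=\; K_i \,\frac{\int_0^{t} C_p(\tau)\,\mathrm{d}\tau}{C_p(t)} \;+\; V_b, \qquad t > t^{*},
\]

where K_i is the net influx rate constant and V_b an effective distribution volume. Conventional Patlak reconstruction therefore requires C_p(t); the framework described here predicts the K_i image directly from late-frame sinograms, bypassing that requirement.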

METHODS

This study was carried out on a large AFOV PET/CT scanner (Biograph Vision Quadra); twenty patients were recruited and underwent 18F-fluorodeoxyglucose (18F-FDG) dynamic scans. For training and testing of the proposed deep learning framework, the last five-frame sinograms (25 min, 40-65 min post-injection) served as input, and Patlak Ki images reconstructed with a nested EM algorithm on the vendor software served as ground truth. To evaluate the quality of the predicted Ki images, the mean square error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) were calculated. In addition, linear regression was performed between predicted and true Ki means over avid malignant lesion and tumor volumes of interest (VOIs).
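As an illustration only (the study's evaluation code is not part of this record, and the array names and normalization are assumptions), the three image-quality metrics listed above could be computed with scikit-image roughly as follows:

import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def ki_image_quality(predicted_ki: np.ndarray, reference_ki: np.ndarray) -> dict:
    # predicted_ki / reference_ki: 3-D Patlak Ki volumes normalized to [0, 1];
    # names and normalization are illustrative, not taken from the paper.
    return {
        "MSE":  mean_squared_error(reference_ki, predicted_ki),
        "PSNR": peak_signal_noise_ratio(reference_ki, predicted_ki, data_range=1.0),
        "SSIM": structural_similarity(reference_ki, predicted_ki, data_range=1.0),
    }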

RESULTS

In the testing phase, the proposed method achieved an MSE of less than 0.03%, together with a high SSIM of ~0.98 and a PSNR of ~38 dB. Moreover, there was a high correlation (DeepPET: R² = 0.73; self-attention DeepPET: R² = 0.82) between predicted Ki means and traditionally reconstructed Patlak Ki means over eleven lesions.
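The lesion-level agreement quoted above amounts to an ordinary least-squares fit of predicted against reference Ki means; a minimal sketch with placeholder numbers (not the study's data) is:

import numpy as np
from scipy import stats

# Mean Ki per lesion VOI; the values below are placeholders, not the study's data.
ki_reference = np.array([0.010, 0.021, 0.015, 0.032, 0.027])  # nested-EM Patlak reconstruction
ki_predicted = np.array([0.012, 0.019, 0.016, 0.030, 0.025])  # network prediction

fit = stats.linregress(ki_reference, ki_predicted)
print(f"slope={fit.slope:.2f}, intercept={fit.intercept:.4f}, R^2={fit.rvalue**2:.2f}")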

CONCLUSIONS

The results show that the deep learning-based method produced high-quality parametric images from a small number of frames of projection data without an input function. It has great potential to address the long scan time and the dependence on an input function that still hamper the clinical translation of dynamic PET.

Item Type:

Journal Article (Original Article)

Division/Institute:

04 Faculty of Medicine > Department of Radiology, Neuroradiology and Nuclear Medicine (DRNN) > Clinic of Nuclear Medicine

UniBE Contributor:

Hu, Jiaxi, Xue, Song, Rominger, Axel Oliver, Shi, Kuangyu

Subjects:

600 Technology > 610 Medicine & health

ISSN:

1619-7089

Publisher:

Springer

Language:

English

Submitter:

Pubmed Import

Date Deposited:

04 Nov 2022 12:31

Last Modified:

20 Jan 2023 00:14

Publisher DOI:

10.1007/s00259-022-06003-4

PubMed ID:

36326869

Uncontrolled Keywords:

Deep learning; Parametric imaging reconstruction; Patlak model; Total-body PET

BORIS DOI:

10.48350/174488

URI:

https://boris.unibe.ch/id/eprint/174488
