A Machine Learning Approach to Perfusion Imaging With Dynamic Susceptibility Contrast MR.

McKinley, Richard; Hung, Fan; Wiest, Roland; Liebeskind, David S; Scalzo, Fabien (2018). A Machine Learning Approach to Perfusion Imaging With Dynamic Susceptibility Contrast MR. Frontiers in Neurology, 9:717. Frontiers Media S.A. doi:10.3389/fneur.2018.00717

fneur-09-00717.pdf - Published Version
Available under License Creative Commons: Attribution (CC-BY).

Dynamic susceptibility contrast (DSC) MR perfusion is a frequently used technique for neurovascular imaging. The passage of a bolus of contrast agent through brain tissue is imaged via a series of T2*-weighted MRI scans, and clinically relevant parameters such as cerebral blood flow and Tmax can be calculated by deconvolving the contrast-time curves with the bolus shape (the arterial input function). In acute stroke, for instance, these parameters may help distinguish likely salvageable tissue from irreversibly damaged infarct core. Deconvolution typically relies on singular value decomposition (SVD); however, studies have shown that SVD-based algorithms are very sensitive to noise and artifacts present in the image and may therefore introduce distortions that influence the estimated output parameters. In this work, we present a machine learning approach to the estimation of perfusion parameters in DSC-MRI. Various machine learning models taking the raw MR source data as input were trained to reproduce the output of an FDA-approved commercial implementation of the SVD deconvolution algorithm. Experiments were conducted to determine the effect of training set size, the optimal patch size, and the effect of using different machine learning models for regression. Model performance increased with training set size, but beyond 5,000 samples (voxels) the effect was minimal. Models inferring perfusion maps from a 5-by-5-voxel patch outperformed models restricted to the information in a single voxel, but larger patches led to worse performance. Random forest models had the lowest root mean squared error, with neural networks performing second best; however, a phantom study revealed that the random forest was highly susceptible to noise levels, while the neural network was more robust.
The machine learning-based approach produces estimates of the perfusion parameters invariant to the noise and artifacts that commonly occur as part of MR acquisition. As a result, better robustness to noise is obtained when evaluated against the FDA-approved software on acute stroke patients and simulated phantom data.
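The SVD deconvolution that the machine learning models were trained to reproduce can be sketched as follows. This is a minimal, illustrative implementation of standard truncated-SVD deconvolution for DSC-MRI, not the commercial software's code; the gamma-variate AIF, the boxcar residue function, the sampling interval, and the truncation threshold are all assumptions made for the example.

```python
# Hedged sketch: truncated-SVD deconvolution of a tissue concentration curve
# with the arterial input function (AIF). Synthetic data, illustrative only.
import numpy as np

def svd_deconvolve(tissue, aif, dt, threshold=0.1):
    """Estimate k(t) = CBF * R(t) by deconvolving a voxel's
    concentration-time curve with the AIF.

    tissue: tissue concentration-time curve, shape (T,)
    aif: arterial input function, shape (T,)
    dt: sampling interval in seconds
    threshold: singular values below threshold * max are zeroed -- this
        truncation step is what makes plain SVD deconvolution sensitive
        to noise and to the chosen cutoff
    """
    T = len(aif)
    # Lower-triangular Toeplitz matrix encoding convolution with the AIF
    A = np.zeros((T, T))
    for i in range(T):
        for j in range(i + 1):
            A[i, j] = aif[i - j] * dt
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > threshold * s.max(), 1.0 / np.where(s > 0, s, 1.0), 0.0)
    k = Vt.T @ np.diag(s_inv) @ U.T @ tissue  # k(t) = CBF * R(t)
    cbf = k.max()                 # blood flow estimate: peak of k(t)
    tmax = np.argmax(k) * dt      # time-to-peak of the residue function
    return k, cbf, tmax

# Synthetic example: gamma-variate AIF; tissue curve is the AIF convolved
# with a delayed boxcar residue function (delay shows up as a nonzero Tmax)
t = np.arange(60) * 1.5                    # 60 frames, TR = 1.5 s (assumed)
aif = (t / 6.0) ** 3 * np.exp(-t / 2.0)
residue = np.zeros(60)
residue[2:10] = 0.8
tissue = np.convolve(aif, residue)[:60] * 1.5
k, cbf, tmax = svd_deconvolve(tissue, aif, dt=1.5)
```

Note that, as the abstract indicates, this cutoff-based truncation is the step a learned regressor can sidestep: the models see only the raw source curves and never form the ill-conditioned inverse explicitly.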
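The patch-based regression setup described in the abstract can be sketched as below. This is an illustration only: the data are random stand-ins (in the paper, targets come from the FDA-approved SVD implementation), the grid and curve dimensions are assumed, and an ordinary least-squares regressor is substituted for the paper's random forest and neural network so the sketch needs nothing beyond NumPy.

```python
# Hedged sketch of patch-based perfusion regression: map the raw
# concentration-time curves in a 5x5 spatial neighbourhood to the reference
# perfusion value (e.g. Tmax) at the central voxel. Random stand-in data.
import numpy as np

rng = np.random.default_rng(0)
H, W, T = 32, 32, 40              # toy image grid, 40 time points (assumed)
source = rng.random((H, W, T))    # stand-in for raw DSC source data
target = rng.random((H, W))       # stand-in for a reference perfusion map

patch = 5
r = patch // 2
X, y = [], []
for i in range(r, H - r):
    for j in range(r, W - r):
        # Flatten the 5x5 neighbourhood of time curves into one feature vector
        X.append(source[i - r:i + r + 1, j - r:j + r + 1, :].ravel())
        y.append(target[i, j])
X = np.asarray(X)                 # (n_voxels, 5 * 5 * T) feature matrix
y = np.asarray(y)

# Least-squares stand-in for the regressor; the paper trains random forests
# and neural networks on exactly this kind of (patch, target) pairing
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X[:5] @ w                  # predicted perfusion values, first 5 voxels
```

Swapping the least-squares fit for `sklearn.ensemble.RandomForestRegressor` or a small neural network recovers the comparison the abstract describes; the feature construction is unchanged.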

Item Type:

Journal Article (Original Article)

Division/Institute:

04 Faculty of Medicine > Department of Radiology, Neuroradiology and Nuclear Medicine (DRNN) > Institute of Diagnostic and Interventional Neuroradiology

UniBE Contributor:

McKinley, Richard and Wiest, Roland

Subjects:

600 Technology > 610 Medicine & health

ISSN:

1664-2295

Publisher:

Frontiers Media S.A.

Language:

English

Submitter:

Martin Zbinden

Date Deposited:

24 Sep 2018 15:05

Last Modified:

30 Sep 2018 02:30

Publisher DOI:

10.3389/fneur.2018.00717

PubMed ID:

30233482

Uncontrolled Keywords:

machine learning; penumbra; perfusion; reperfusion; stroke

BORIS DOI:

10.7892/boris.120084

URI:

https://boris.unibe.ch/id/eprint/120084
