Salimi, Yazdan; Mansouri, Zahra; Hajianfar, Ghasem; Sanaat, Amirhossein; Shiri, Isaac; Zaidi, Habib (2024). Fully automated explainable abdominal CT contrast media phase classification using organ segmentation and machine learning. Medical physics, 51(6), pp. 4095-4104. American Association of Physicists in Medicine AAPM 10.1002/mp.17076
Full text: Published Version (PDF), available under a Creative Commons Attribution-NonCommercial (CC BY-NC) license.
BACKGROUND
Contrast-enhanced computed tomography (CECT) provides considerably more information than non-enhanced CT images, especially for the differentiation of malignancies such as liver carcinomas. However, contrast media injection phase information is usually missing from public datasets and is not standardized in clinical practice, even within the same region and language. This is a barrier to the effective use of available CECT images in clinical research.
PURPOSE
The aim of this study is to detect the contrast media injection phase from CT images by means of organ segmentation and machine learning algorithms.
METHODS
A total of 2509 CT images, divided into four classes: non-contrast (class #0), arterial (class #1), venous (class #2), and delayed (class #3) after contrast media injection, were collected from two CT scanners. Masks of seven organs (liver, spleen, heart, kidneys, lungs, urinary bladder, and aorta) together with the body contour were generated by pre-trained deep learning algorithms. Subsequently, five first-order statistical features (mean, standard deviation, and 10th, 50th, and 90th percentiles) extracted from these masks were fed, after feature selection and reduction, to machine learning models to classify each CT image into one of the four classes (a minimal sketch of this step is given below). A 10-fold data split strategy was followed, and the performance of our methodology was evaluated in terms of classification accuracy metrics.
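The following is a minimal sketch, not the authors' code, of the feature-extraction and classification steps described above. It assumes organ masks have already been produced by a pre-trained segmentation model and are available as arrays aligned with the CT volume in Hounsfield units; the organ list, function names, and hyperparameters are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Seven organs plus the body contour, as described in the abstract.
ORGANS = ["liver", "spleen", "heart", "kidneys", "lungs",
          "urinary_bladder", "aorta", "body_contour"]

def first_order_features(ct_hu: np.ndarray, mask: np.ndarray) -> list[float]:
    """Five first-order statistics of the HU values inside one organ mask."""
    voxels = ct_hu[mask > 0]
    if voxels.size == 0:  # organ absent from the field of view
        return [np.nan] * 5
    return [float(voxels.mean()), float(voxels.std()),
            *np.percentile(voxels, [10, 50, 90])]

def feature_vector(ct_hu: np.ndarray, masks: dict[str, np.ndarray]) -> np.ndarray:
    """Concatenate per-organ statistics into one row (8 regions x 5 features)."""
    return np.concatenate([first_order_features(ct_hu, masks[o]) for o in ORGANS])

def evaluate(X: np.ndarray, y: np.ndarray) -> float:
    """10-fold cross-validated accuracy of a random forest phase classifier.
    X: (n_scans, 40) feature matrix; y: phase labels 0-3."""
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    return cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean()

In the study, Boruta feature selection was applied before classification; since it retained all predictor features (see Results), the sketch omits that step.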
RESULTS
The best performance was achieved with Boruta feature selection and the random forest (RF) model, yielding an average area under the curve (AUC) above 0.999 and an accuracy of 0.9936 averaged over the four classes and 10 folds. Boruta feature selection retained all predictor features. The lowest per-class accuracy was observed for class #2 (0.9888), which is still an excellent result. Across the 10 folds, only 33 of the 2509 cases (∼1.3%) were misclassified, and performance was consistent over all folds.
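For clarity, the reported summary metrics (overall accuracy, class-averaged one-vs-rest AUC, and the misclassification count) could be computed as in the hedged sketch below, assuming predictions and class probabilities have been collected across the 10 folds; the function name and inputs are illustrative assumptions, not the authors' code.

import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

def summarize(y_true, y_pred, y_proba):
    """Overall accuracy, macro one-vs-rest AUC, and misclassified-case count.
    y_proba: (n_samples, 4) predicted class probabilities."""
    acc = accuracy_score(y_true, y_pred)
    auc = roc_auc_score(y_true, y_proba, multi_class="ovr", average="macro")
    cm = confusion_matrix(y_true, y_pred)
    return acc, auc, int(cm.sum() - np.trace(cm))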
CONCLUSIONS
We developed a fast, accurate, reliable, and explainable methodology for classifying contrast media phases, which may be useful for data curation and annotation in large online datasets or local datasets with non-standard or missing series descriptions. Our two-step pipeline, combining deep learning-based segmentation with machine learning classification, may help exploit available datasets more effectively.
Item Type: Journal Article (Original Article)
Division/Institute: 04 Faculty of Medicine > Department of Cardiovascular Disorders (DHGE) > Clinic of Cardiology
UniBE Contributor: Shiri Lord, Isaac
Subjects: 600 Technology > 610 Medicine & health
ISSN: 0094-2405
Publisher: American Association of Physicists in Medicine AAPM
Language: English
Submitter: Pubmed Import
Date Deposited: 18 Apr 2024 14:39
Last Modified: 04 Jun 2024 00:14
Publisher DOI: 10.1002/mp.17076
PubMed ID: 38629779
Uncontrolled Keywords: contrast-enhanced CT; data curation; deep learning; machine learning; segmentation
BORIS DOI: 10.48350/196061
URI: https://boris.unibe.ch/id/eprint/196061