Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.

Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A.; Reyes, Mauricio (2018). Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation. Medical Image Analysis, 44, pp. 228-244. Elsevier. doi:10.1016/j.media.2017.12.009

Machine learning systems are achieving better performance at the cost of becoming increasingly complex. However, as a result they become less interpretable, which may cause distrust in the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation Learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning, and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding whether the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed at the voxel and patient level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images.
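The two-stage pipeline described in the abstract, unsupervised feature learning with a Restricted Boltzmann Machine followed by a Random Forest classifier, can be illustrated with a minimal sketch using scikit-learn's BernoulliRBM and RandomForestClassifier. The synthetic data, patch size, and hyperparameters below are illustrative assumptions only and do not reflect the authors' actual configuration or feature importance strategy.

    # Minimal sketch: RBM feature learning followed by a Random Forest classifier.
    # All data and hyperparameters are placeholders, not the paper's setup.
    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Stand-in for intensity patches from multi-modal MRI, flattened to vectors
    # and scaled to [0, 1] as BernoulliRBM expects (here: 5x5 patches, 4 modalities).
    X_train = rng.random((1000, 5 * 5 * 4))
    y_train = rng.integers(0, 2, size=1000)  # voxel-level labels, e.g. lesion vs. background

    # Unsupervised stage: hidden-unit activations of the RBM become the learned features.
    rbm = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)
    features_train = rbm.fit_transform(X_train)

    # Supervised stage: a Random Forest is trained on the learned representation.
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(features_train, y_train)

    # The forest's feature importances relate learned RBM features to the target,
    # one ingredient of the kind of interpretability analysis described above.
    print(forest.feature_importances_[:10])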

Item Type:

Journal Article (Original Article)

Division/Institute:

04 Faculty of Medicine > Department of Radiology, Neuroradiology and Nuclear Medicine (DRNN) > Institute of Diagnostic and Interventional Neuroradiology
04 Faculty of Medicine > Pre-clinic Human Medicine > Institute for Surgical Technology & Biomechanics ISTB [discontinued]

UniBE Contributor:

Meier, Raphael, McKinley, Richard, Wiest, Roland Gerhard Rudi, Reyes, Mauricio

Subjects:

600 Technology > 610 Medicine & health
500 Science > 570 Life sciences; biology

ISSN:

1361-8415

Publisher:

Elsevier

Language:

English

Submitter:

Martin Zbinden

Date Deposited:

18 Jan 2018 10:40

Last Modified:

02 Mar 2023 23:30

Publisher DOI:

10.1016/j.media.2017.12.009

PubMed ID:

29289703

Uncontrolled Keywords:

Interpretability; Machine learning; Representation learning

BORIS DOI:

10.7892/boris.108551

URI:

https://boris.unibe.ch/id/eprint/108551
