ML Interpretability: Simple Isn’t Easy

Räz, Tim (2024). ML Interpretability: Simple Isn’t Easy. Studies in History and Philosophy of Science, 103, pp. 159-167. Elsevier. DOI: 10.1016/j.shpsa.2023.12.007

Published Version: 1-s2.0-S0039368123001723-main.pdf
Available under a Creative Commons Attribution (CC-BY) license.

The interpretability of ML models is important, but it is not clear what it amounts to. So far, most philosophers have discussed the lack of interpretability of black-box models such as neural networks, and methods such as explainable AI that aim to make these models more transparent. The goal of this paper is to clarify the nature of interpretability by focusing on the other end of the “interpretability spectrum”. The paper examines why some models, namely linear models and decision trees, are highly interpretable, and how more general models, MARS and GAM, retain some degree of interpretability. It is found that while there is heterogeneity in how we gain interpretability, what interpretability amounts to in particular cases can be explicated in a clear manner.
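Purely as an illustration of the abstract's point about the interpretable end of the spectrum, here is a minimal Python sketch (not from the paper; it assumes scikit-learn and uses invented synthetic data and feature names). It shows the two sources of interpretability the abstract names: a linear model's coefficients can be read off as per-feature effects, and a fitted decision tree can be printed as explicit if/then rules.

# Minimal sketch, assuming scikit-learn; data and feature names are invented.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # two synthetic features
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Linear model: interpretable because each coefficient is a readable,
# additive per-feature effect on the prediction.
lin = LinearRegression().fit(X, y)
for name, coef in zip(["x0", "x1"], lin.coef_):
    print(f"{name}: {coef:+.2f}")  # approx. +3.00 and -1.50

# Decision tree: interpretable because the fitted model is an explicit
# rule structure that can be printed and followed by hand.
tree = DecisionTreeRegressor(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["x0", "x1"]))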

Item Type:

Journal Article (Original Article)

Division/Institute:

06 Faculty of Humanities > Department of Art and Cultural Studies > Institute of Philosophy

UniBE Contributor:

Räz, Tim

Subjects:

100 Philosophy
100 Philosophy > 120 Epistemology
600 Technology > 620 Engineering

ISSN:

0039-3681

Publisher:

Elsevier

Funders:

Schweizerischer Nationalfonds (Swiss National Science Foundation)

Projects:

Improving Interpretability

Language:

English

Submitter:

Tim Räz

Date Deposited:

28 Dec 2023 07:14

Last Modified:

12 Feb 2024 14:17

Publisher DOI:

10.1016/j.shpsa.2023.12.007

PubMed ID:

38176132

BORIS DOI:

10.48350/190648

URI:

https://boris.unibe.ch/id/eprint/190648
