Multi-Modal Deep Learning for Automated Assembly of Periapical Radiographs.

Pfänder, L.; Schneider, L.; Büttner, M.; Krois, J.; Meyer-Lueckel, H.; Schwendicke, F. (2023). Multi-Modal Deep Learning for Automated Assembly of Periapical Radiographs. Journal of Dentistry, 135, 104588. Elsevier. DOI: 10.1016/j.jdent.2023.104588


OBJECTIVES

Periapical radiographs are often taken in series to display all teeth present in the oral cavity. Our aim was to automatically assemble such a series of periapical radiographs into an anatomically correct status using a multi-modal deep learning model.

METHODS

4,707 periapical images from 387 patients (on average, 12 images per patient) were used. Radiographs were labeled according to their field of view, and the dataset was split into a training, validation, and test set, stratified by patient. In addition to the radiograph, the timestamp of image generation was extracted and abstracted as follows: a matrix containing the normalized timestamps of all images of a patient was constructed, representing the order in which the images were taken and providing temporal context to the deep learning model. Using the image data together with the time-sequence data, a multi-modal deep learning model consisting of two residual convolutional neural networks (ResNet-152 for image data, ResNet-50 for time data) was trained. Additionally, two uni-modal models were trained on image data and time data, respectively. A custom scoring technique was used to measure model performance.
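
To make the described pipeline more concrete, below is a minimal sketch in PyTorch/torchvision of one way such a two-branch model and timestamp matrix could be set up. The fusion strategy, time-matrix layout, class names, and number of output classes are illustrative assumptions and are not specified in the abstract.

import torch
import torch.nn as nn
from torchvision.models import resnet152, resnet50

class MultiModalAssembler(nn.Module):
    """Two-branch model: radiograph branch plus timestamp-matrix branch (sketch)."""
    def __init__(self, num_classes: int = 14):  # num_classes is an assumption
        super().__init__()
        # Image branch: ResNet-152 on the periapical radiograph (3-channel input).
        self.image_net = resnet152(weights=None)
        self.image_net.fc = nn.Identity()  # expose the 2048-dim feature vector
        # Time branch: ResNet-50 on the patient's normalized timestamp matrix,
        # fed here as a single-channel 2D input (the layout is an assumption).
        self.time_net = resnet50(weights=None)
        self.time_net.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                        padding=3, bias=False)
        self.time_net.fc = nn.Identity()   # expose the 2048-dim feature vector
        # Late fusion by concatenation, then a linear classifier over tooth labels.
        self.classifier = nn.Linear(2048 + 2048, num_classes)

    def forward(self, image, time_matrix):
        img_feat = self.image_net(image)          # (batch, 2048)
        time_feat = self.time_net(time_matrix)    # (batch, 2048)
        fused = torch.cat([img_feat, time_feat], dim=1)
        return self.classifier(fused)             # (batch, num_classes) logits

def build_time_matrix(timestamps, size=224):
    """Normalize a patient's acquisition timestamps to [0, 1] and embed them
    in a fixed-size single-channel matrix (a hypothetical encoding)."""
    t = torch.tensor(sorted(timestamps), dtype=torch.float32)
    t = (t - t.min()) / (t.max() - t.min() + 1e-8)
    row = torch.zeros(size)
    row[: min(len(t), size)] = t[:size]           # pad or truncate to fixed width
    return row.repeat(size, 1).unsqueeze(0)       # shape (1, size, size)

In this sketch the two 2048-dimensional feature vectors are simply concatenated before a single linear classifier; the abstract does not describe the actual fusion mechanism or classifier head used in the study.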

RESULTS

Multi-modal deep learning outperformed both uni-modal image-based learning (p<0.001) and uni-modal time-based learning (p<0.05). The multi-modal deep learning model predicted tooth labels with an F1-score, sensitivity, and precision of 0.79 each, and an accuracy of 0.99. 37 out of 77 patient datasets were assembled fully correctly by multi-modal learning; in the remaining ones, usually only a single image was labeled incorrectly.

CONCLUSIONS

Multi-modal modeling allowed automated assembly of periapical radiographs and outperformed both uni-modal models. Dental machine learning models can benefit from additional data modalities.

CLINICAL SIGNIFICANCE

Like humans, deep learning models may profit from multiple data sources for decision-making. We demonstrate how multi-modal learning can assist in assembling periapical radiographs into an anatomically correct status. Multi-modal learning should be considered for more complex tasks, as a wealth of clinical data is usually available and could be leveraged.

Item Type:

Journal Article (Original Article)

Division/Institute:

04 Faculty of Medicine > School of Dental Medicine > Department of Preventive, Restorative and Pediatric Dentistry

UniBE Contributor:

Meyer-Lückel, Hendrik

Subjects:

600 Technology > 610 Medicine & health

ISSN:

1879-176X

Publisher:

Elsevier

Language:

English

Submitter:

Pubmed Import

Date Deposited:

27 Jun 2023 13:12

Last Modified:

17 Jul 2023 00:15

Publisher DOI:

10.1016/j.jdent.2023.104588

PubMed ID:

37348642

Uncontrolled Keywords:

CNNs; Computer Vision; Deep Learning; Multi-Modal Learning; Periapical Radiographs; Time Series Analysis

BORIS DOI:

10.48350/184073

URI:

https://boris.unibe.ch/id/eprint/184073
