Objectification of intracochlear electrocochleography using machine learning.

Schuerch, Klaus; Wimmer, Wilhelm; Dalbert, Adrian; Rummel, Christian; Caversaccio, Marco; Mantokoudis, Georgios; Weder, Stefan (2022). Objectification of intracochlear electrocochleography using machine learning. Frontiers in Neurology, 13:943816. Frontiers Media S.A. doi:10.3389/fneur.2022.943816

fneur-13-943816.pdf - Published Version
Available under License Creative Commons: Attribution (CC-BY).

Introduction

Electrocochleography (ECochG) measures inner ear potentials in response to acoustic stimulation. In patients with a cochlear implant (CI), the technique is increasingly used to monitor residual inner ear function. To date, visual assessment has been the gold standard for analyzing ECochG potentials. However, interpreting the signals visually requires a high level of experience. Furthermore, expert-dependent assessment leads to inconsistency and a lack of reproducibility. The aim of this study was to automate and objectify the analysis of cochlear microphonic (CM) signals in ECochG recordings.

Methods

We conducted a prospective cohort study including 41 implanted ears with residual hearing. We measured ECochG potentials at four different electrodes, and only at stable electrode positions (i.e., after full insertion or postoperatively). For acoustic stimulation, we used pure tones (250-2,000 Hz) at three intensity levels chosen relative to the individual residual hearing (supra-, near-, and sub-threshold stimulation), with the aim of obtaining ECochG potentials with differing signal-to-noise ratios (SNRs). To objectify the detection of CM signals, we compared three different methods: correlation analysis, Hotelling's T² test, and deep learning. We benchmarked these methods against the visual analysis of three ECochG experts.
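
As a concrete illustration of one of the statistical detection approaches named above, the following is a minimal sketch (not the authors' code) of a one-sample Hotelling's T² test applied to the real and imaginary Fourier components of each ECochG epoch at the stimulation frequency. The function name, array shapes, sampling rate, and simulated data are illustrative assumptions.

```python
# Hedged sketch: Hotelling's T² detection of a stimulus-frequency component
# in repeated ECochG epochs. Not the published implementation.
import numpy as np
from scipy.stats import f as f_dist

def hotelling_t2_response(epochs, fs, stim_freq):
    """Return the p-value of a one-sample Hotelling's T² test on the
    (cosine, sine) projections of each epoch at stim_freq.

    epochs    : (n_epochs, n_samples) array of single-trial recordings
    fs        : sampling rate in Hz
    stim_freq : acoustic stimulation frequency in Hz
    """
    n_epochs, n_samples = epochs.shape
    t = np.arange(n_samples) / fs
    # Project each epoch onto cosine/sine references at the stimulus frequency.
    cos_ref = np.cos(2 * np.pi * stim_freq * t)
    sin_ref = np.sin(2 * np.pi * stim_freq * t)
    X = np.column_stack([epochs @ cos_ref, epochs @ sin_ref])  # (n_epochs, 2)

    mean = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    p = X.shape[1]  # dimensionality (= 2)
    t2 = n_epochs * mean @ np.linalg.solve(cov, mean)
    # Convert T² to an F statistic: F = (n - p) / (p (n - 1)) * T² ~ F(p, n - p)
    f_stat = (n_epochs - p) / (p * (n_epochs - 1)) * t2
    return 1.0 - f_dist.cdf(f_stat, p, n_epochs - p)

# Example with simulated data: 100 epochs of a weak 500 Hz component in noise.
rng = np.random.default_rng(0)
fs, stim_freq, n_samples = 20000, 500, 2000
t = np.arange(n_samples) / fs
epochs = 0.2 * np.sin(2 * np.pi * stim_freq * t) + rng.normal(size=(100, n_samples))
print(hotelling_t2_response(epochs, fs, stim_freq))  # small p-value -> response present
```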

Results

For the visual analysis of the ECochG recordings, Fleiss' kappa indicated substantial to almost perfect agreement among the three examiners. We used these expert labels as ground truth to train our objectification methods. The deep learning algorithm performed best (area under the curve = 0.97, accuracy = 0.92), closely followed by Hotelling's T² test. The correlation method performed slightly worse owing to its susceptibility to noise interference.
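
For orientation, the following is a minimal sketch (not the authors' analysis code) of how the reported metrics could be computed: Fleiss' kappa for agreement among three raters, and AUC and accuracy for a detector benchmarked against the majority-vote label. The ratings and detector scores below are made-up placeholders.

```python
# Hedged sketch: inter-rater agreement and detector performance metrics.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Binary "CM present / absent" ratings of 3 experts for 8 example recordings (placeholder data).
ratings = np.array([
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
])
table, _ = aggregate_raters(ratings)          # per-recording counts per category
print("Fleiss' kappa:", fleiss_kappa(table))  # agreement among the three raters

# Majority vote of the raters serves as ground truth for the automatic detector.
y_true = (ratings.sum(axis=1) >= 2).astype(int)
y_score = np.array([0.95, 0.10, 0.70, 0.88, 0.20, 0.45, 0.91, 0.05])  # placeholder detector outputs
print("AUC:", roc_auc_score(y_true, y_score))
print("Accuracy:", accuracy_score(y_true, (y_score >= 0.5).astype(int)))
```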

Conclusions

Objectification of ECochG signals is possible with the presented methods. The deep learning and Hotelling's T² methods achieved excellent discrimination performance. Objective, automatic analysis of CM signals enables a standardized, fast, accurate, and examiner-independent evaluation of ECochG measurements.

Item Type:

Journal Article (Original Article)

Division/Institute:

04 Faculty of Medicine > Department of Head Organs and Neurology (DKNS) > Clinic of Ear, Nose and Throat Disorders (ENT)
10 Strategic Research Centers > ARTORG Center for Biomedical Engineering Research > ARTORG Center - Hearing Research Laboratory
04 Faculty of Medicine > Department of Radiology, Neuroradiology and Nuclear Medicine (DRNN) > Institute of Diagnostic and Interventional Neuroradiology
10 Strategic Research Centers > ARTORG Center for Biomedical Engineering Research

UniBE Contributor:

Schürch, Klaus, Wimmer, Wilhelm, Rummel, Christian, Caversaccio, Marco, Mantokoudis, Georgios, Weder, Stefan Andreas

Subjects:

600 Technology > 610 Medicine & health
500 Science > 570 Life sciences; biology

ISSN:

1664-2295

Publisher:

Frontiers Media S.A.

Language:

English

Submitter:

Pubmed Import

Date Deposited:

20 Sep 2022 08:59

Last Modified:

05 Dec 2022 16:24

Publisher DOI:

10.3389/fneur.2022.943816

PubMed ID:

36105773

Uncontrolled Keywords:

ECochG; Hotelling's T²; cochlear implant; correlation analysis; deep learning; electroacoustic stimulation; residual hearing; signal processing

BORIS DOI:

10.48350/173052

URI:

https://boris.unibe.ch/id/eprint/173052
