DISNET: Distributed Micro-Split Deep Learning in Heterogeneous Dynamic IoT

Samikwa, Eric; Di Maio, Antonio; Braun, Torsten (2023). DISNET: Distributed Micro-Split Deep Learning in Heterogeneous Dynamic IoT. IEEE Internet of Things Journal, 11(1), p. 1. IEEE 10.1109/JIOT.2023.3313514

The key impediment to deploying deep neural networks (DNNs) in IoT edge environments is the gap between the high computational cost of DNN inference and the limited computing capability of IoT devices. Current state-of-the-art machine learning models place significant demands on memory, computation, and energy, which makes it challenging to integrate them with the decentralized operation of heterogeneous and resource-constrained IoT devices. Recent studies have proposed the cooperative execution of DNN models on IoT devices to enhance the reliability, privacy, and efficiency of intelligent IoT systems, but have disregarded flexible, fine-grained model partitioning schemes for optimally distributing DNN execution tasks in dynamic IoT networks. In this paper, we propose DISNET, a distributed micro-split deep learning scheme for heterogeneous dynamic IoT. DISNET reduces inference time and energy consumption by combining vertical (layer-based) and horizontal DNN partitioning to enable flexible, distributed, and parallel execution of neural network models on heterogeneous IoT devices. DISNET considers the IoT devices' computing and communication resources and the network conditions for resource-aware cooperative DNN inference. Experimental evaluation in dynamic IoT networks shows that DISNET reduces DNN inference latency and energy consumption by up to 5.2× and 6×, respectively, compared to two state-of-the-art schemes, without loss of accuracy.
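The abstract itself contains no code, but the combination of vertical (layer-wise) and horizontal (intra-layer) partitioning it describes can be illustrated with a minimal NumPy sketch. In the toy example below, consecutive dense layers represent vertical split points that could be assigned to different devices, while each layer's output neurons are sliced into parts that could be computed in parallel on separate devices and then merged. The function names and the toy network are assumptions made for illustration only; this is not the DISNET implementation.

    import numpy as np

    def relu(x):
        return np.maximum(x, 0.0)

    def horizontal_split_dense(x, W, b, num_parts):
        """Evaluate a dense layer with its output neurons partitioned into
        `num_parts` slices, as if each slice ran on a different IoT device."""
        cols = np.array_split(np.arange(W.shape[1]), num_parts)
        partials = [x @ W[:, c] + b[c] for c in cols]  # parallelizable per device
        return np.concatenate(partials, axis=-1)       # merge partial outputs

    # Toy 3-layer MLP; each layer (vertical split) could live on a different device.
    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((16, 32)), rng.standard_normal(32)
    W2, b2 = rng.standard_normal((32, 32)), rng.standard_normal(32)
    W3, b3 = rng.standard_normal((32, 4)),  rng.standard_normal(4)

    x  = rng.standard_normal(16)
    h1 = relu(horizontal_split_dense(x,  W1, b1, num_parts=2))  # device group A
    h2 = relu(horizontal_split_dense(h1, W2, b2, num_parts=3))  # device group B
    y  = horizontal_split_dense(h2, W3, b3, num_parts=1)        # single device C
    print(y.shape)  # (4,)

In the paper's setting, the number of horizontal slices per layer and their assignment to devices would be chosen from the devices' computing and communication resources and the current network conditions, rather than fixed as in this sketch.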

Item Type:

Journal Article (Original Article)

Division/Institute:

08 Faculty of Science > Institute of Computer Science (INF) > Communication and Distributed Systems (CDS)
08 Faculty of Science > Institute of Computer Science (INF)

UniBE Contributor:

Samikwa, Eric, Di Maio, Antonio, Braun, Torsten

Subjects:

000 Computer science, knowledge & systems
500 Science > 510 Mathematics
500 Science

ISSN:

2327-4662

Publisher:

IEEE

Language:

English

Submitter:

Dimitrios Xenakis

Date Deposited:

11 May 2023 14:49

Last Modified:

12 Jan 2024 07:04

Publisher DOI:

10.1109/JIOT.2023.3313514

Uncontrolled Keywords:

Distributed Machine Learning; Edge Computing; Internet of Things; Micro-Split Deep Learning

BORIS DOI:

10.48350/182488

URI:

https://boris.unibe.ch/id/eprint/182488
