Dynamic Federated Learning for Heterogeneous Learning Environments

Samikwa, Eric; Braun, Torsten (5 May 2023). Dynamic Federated Learning for Heterogeneous Learning Environments. In: Bern Data Science Day. Data Science Lab

Text: Eric_BDSD_2023.pdf (239 kB). Available under License BORIS Standard License.
Poster: poster_BDSD-Eric.pdf (521 kB). Available under License BORIS Standard License.

The emergence of the Internet of Things (IoT) has resulted in a massive influx of data generated by various edge devices. Machine learning models trained on this data can provide valuable insights and predictions, leading to better decision-making and intelligent applications. Federated Learning (FL) is a distributed learning paradigm that enables remote devices to collaboratively train models without sharing sensitive data, thus preserving user privacy and reducing communication overhead. However, despite recent breakthroughs in FL, heterogeneous learning environments significantly limit its performance and hinder its real-world applications. This heterogeneity manifests in two main aspects. Firstly, statistically heterogeneous (usually non-independent and identically distributed) data from geographically distributed clients can deteriorate FL training accuracy. Secondly, the heterogeneous computing and communication resources of IoT devices often result in unstable training processes that slow down the training of a global model and affect energy consumption. Most existing studies address only one side of the heterogeneity issue, either statistical or resource heterogeneity. However, the resource heterogeneity among devices does not necessarily correlate with the distribution of their training data. We propose Dynamic Federated Learning (DFL) to address the joint problem of data and resource heterogeneity in FL. DFL combines resource-aware split computing of deep neural networks with dynamic clustering of training participants based on the similarity of their sub-model layers. Using resource-aware split learning, the allocation of FL training tasks to resource-constrained participants is adjusted to match their heterogeneous computing capabilities, while resource-capable participants carry out classic FL training. To address data heterogeneity, we employ centered kernel alignment to determine the similarity of neural network layers and carry out layerwise sub-model aggregation. Preliminary results indicate that the proposed technique can improve training performance (i.e., training time, accuracy, and energy consumption) in heterogeneous learning environments with both data and resource heterogeneity.
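As an illustration of the layer-similarity step described above, the sketch below computes linear centered kernel alignment (CKA) between clients' activations of the same layer and averages that layer's parameters within a cluster. This is a minimal sketch under stated assumptions: the abstract does not specify the CKA variant, the clustering rule, or the aggregation weights, so the linear-kernel form, the names linear_cka and layerwise_average, the similarity threshold, and the uniform weighting are illustrative choices rather than the authors' implementation.

import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    # Linear CKA between two activation matrices of shape (n_samples, n_features);
    # the feature counts of x and y may differ, the sample count must match.
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    numerator = np.linalg.norm(y.T @ x, ord="fro") ** 2
    denominator = (np.linalg.norm(x.T @ x, ord="fro")
                   * np.linalg.norm(y.T @ y, ord="fro"))
    return float(numerator / denominator)

def layerwise_average(client_layer_params, weights=None):
    # Aggregate one layer's parameters across the clients in a cluster
    # (uniform weights by default; a hypothetical choice for illustration).
    if weights is None:
        weights = [1.0 / len(client_layer_params)] * len(client_layer_params)
    return sum(w * p for w, p in zip(weights, client_layer_params))

# Example: pairwise similarity of one layer's activations for three clients
# evaluated on a shared probe batch, then clustering relative to client 0.
rng = np.random.default_rng(0)
activations = [rng.normal(size=(128, 64)) for _ in range(3)]
similarity = np.array([[linear_cka(a, b) for b in activations] for a in activations])
cluster = [i for i in range(3) if similarity[0, i] > 0.5]

In DFL, such a similarity matrix would be computed per layer so that participants with similar sub-model layers can be clustered and aggregated layerwise, while split learning determines which layers a resource-constrained participant trains locally.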

Item Type:

Conference or Workshop Item (Abstract)

Division/Institute:

08 Faculty of Science > Institute of Computer Science (INF) > Communication and Distributed Systems (CDS)
08 Faculty of Science > Institute of Computer Science (INF)

UniBE Contributor:

Samikwa, Eric, Braun, Torsten

Subjects:

000 Computer science, knowledge & systems
500 Science > 510 Mathematics
500 Science

Publisher:

Data Science Lab

Language:

English

Submitter:

Dimitrios Xenakis

Date Deposited:

11 May 2023 15:11

Last Modified:

28 Aug 2023 15:56

Related URLs:

Uncontrolled Keywords:

Federated Learning; Split Learning; Distributed Machine Learning; Resource and Data Heterogeneity

BORIS DOI:

10.48350/182489

URI:

https://boris.unibe.ch/id/eprint/182489
