A Modular Workflow for Performance Benchmarking of Neuronal Network Simulations.

Albers, Jasper; Pronold, Jari; Kurth, Anno Christopher; Vennemo, Stine Brekke; Haghighi Mood, Kaveh; Patronis, Alexander; Terhorst, Dennis; Jordan, Jakob; Kunkel, Susanne; Tetzlaff, Tom; Diesmann, Markus; Senk, Johanna (2022). A Modular Workflow for Performance Benchmarking of Neuronal Network Simulations. Frontiers in Neuroinformatics, 16, 837549. Frontiers Research Foundation. doi:10.3389/fninf.2022.837549

fninf-16-837549.pdf - Published Version
Available under License Creative Commons: Attribution (CC-BY).

Abstract:

Modern computational neuroscience strives to develop complex network models to explain dynamics and function of brains in health and disease. This process goes hand in hand with advancements in the theory of neuronal networks and increasing availability of detailed anatomical data on brain connectivity. Large-scale models that study interactions between multiple brain areas with intricate connectivity and investigate phenomena on long time scales such as system-level learning require progress in simulation speed. The corresponding development of state-of-the-art simulation engines relies on information provided by benchmark simulations which assess the time-to-solution for scientifically relevant, complementary network models using various combinations of hardware and software revisions. However, maintaining comparability of benchmark results is difficult due to a lack of standardized specifications for measuring the scaling performance of simulators on high-performance computing (HPC) systems. Motivated by the challenging complexity of benchmarking, we define a generic workflow that decomposes the endeavor into unique segments consisting of separate modules. As a reference implementation for the conceptual workflow, we develop beNNch: an open-source software framework for the configuration, execution, and analysis of benchmarks for neuronal network simulations. The framework records benchmarking data and metadata in a unified way to foster reproducibility. For illustration, we measure the performance of various versions of the NEST simulator across network models with different levels of complexity on a contemporary HPC system, demonstrating how performance bottlenecks can be identified, ultimately guiding the development toward more efficient simulation technology.

Item Type:

Journal Article (Original Article)

Division/Institute:

04 Faculty of Medicine > Pre-clinic Human Medicine > Institute of Physiology

UniBE Contributor:

Jordan, Jakob Jürgen

Subjects:

600 Technology > 610 Medicine & health

ISSN:

1662-5196

Publisher:

Frontiers Research Foundation

Language:

English

Submitter:

Pubmed Import

Date Deposited:

02 Jun 2022 15:12

Last Modified:

05 Dec 2022 16:20

Publisher DOI:

10.3389/fninf.2022.837549

PubMed ID:

35645755

Uncontrolled Keywords:

benchmarking; high-performance computing; large-scale simulation; metadata; spiking neuronal networks; workflow

BORIS DOI:

10.48350/170405

URI:

https://boris.unibe.ch/id/eprint/170405
