A high-throughput hybrid task and data parallel Poisson solver for large-scale simulations of incompressible turbulent flows on distributed GPUs

Zolfaghari, Hadi; Obrist, Dominik (2021). A high-throughput hybrid task and data parallel Poisson solver for large-scale simulations of incompressible turbulent flows on distributed GPUs. Journal of Computational Physics, 437, 110329. Elsevier. doi:10.1016/j.jcp.2021.110329


The solution of the pressure Poisson equation arising in the numerical solution of the incompressible Navier–Stokes equations (INSE) is by far the most expensive part of the computational procedure and often the major restricting factor for parallel implementations. Improvements in iterative linear solvers, e.g. Krylov-based techniques and multigrid preconditioners, have been applied successfully to solving the INSE on CPU-based parallel computers. These numerical schemes, however, do not necessarily perform well on GPUs, mainly due to differences in the hardware architecture. Our previous work using many P100 GPUs of a flagship supercomputer showed that porting a highly optimized MPI-parallel CPU-based INSE solver to GPUs significantly accelerates the underlying numerical algorithms, while the overall speed-up remains limited (Zolfaghari et al. [3]). The performance loss was mainly due to the Poisson solver, particularly the V-cycle geometric multigrid preconditioner. We also observed that the pure compute time of the GPU kernels remained nearly constant as the grid size was increased. Motivated by these observations, we present herein an algebraically simpler, yet more advanced parallel implementation for the solution of the Poisson problem on large numbers of distributed GPUs. Data parallelism is achieved by using the classical Jacobi method with successive over-relaxation and an optimized iterative driver routine. Task parallelism is enhanced by minimizing GPU-GPU data exchanges as the iterations proceed, which reduces the communication overhead. The hybrid parallelism yields a nearly 300-fold reduction in time-to-solution, and thus in computational cost (measured in node-hours), for the Poisson problem compared to our best-case CPU-based parallel implementation, which uses a preconditioned BiCGstab method. The Poisson solver is then embedded in a flow solver with an explicit third-order Runge-Kutta scheme for time integration, which has previously been ported to GPUs. The flow solver is validated and computationally benchmarked for the transition and decay of the Taylor-Green vortex at Re = 1600 and the flow around a solid sphere at Re_D = 3700. Good strong scaling is demonstrated for both benchmarks. Further, nearly 70% lower electrical energy consumption than the CPU implementation is reported for the Taylor-Green vortex case. Finally, we deploy the solver for DNS of systolic flow in a bileaflet mechanical heart valve and present new insight into the complex laminar-turbulent transition process in this prosthesis.
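The data-parallel building block described above amounts to a weighted (SOR-relaxed) Jacobi sweep over the pressure grid. The following minimal CUDA sketch illustrates one such sweep for the 3D Poisson equation on a uniform grid; it is not the published solver, and the kernel name, indexing layout, and relaxation factor omega are assumptions made for illustration only.

// Illustrative sketch (not the published code): one weighted Jacobi sweep
// u_new = (1 - omega) * u + omega * J(u) for -Laplace(u) = f on a uniform
// 3D grid with spacing h (h2 = h*h). Interior points only; boundary and
// halo planes are assumed to be handled separately.
__global__ void jacobi_sor_sweep(const double *u, const double *f,
                                 double *u_new, int nx, int ny, int nz,
                                 double h2, double omega)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    int k = blockIdx.z * blockDim.z + threadIdx.z;
    if (i < 1 || i >= nx - 1 || j < 1 || j >= ny - 1 || k < 1 || k >= nz - 1)
        return;

    int idx = (k * ny + j) * nx + i;
    // Standard 7-point Jacobi update for the Poisson problem.
    double jac = (u[idx - 1] + u[idx + 1] +
                  u[idx - nx] + u[idx + nx] +
                  u[idx - nx * ny] + u[idx + nx * ny] +
                  h2 * f[idx]) / 6.0;
    // Successive over-relaxation applied to the Jacobi iterate.
    u_new[idx] = (1.0 - omega) * u[idx] + omega * jac;
}

In a distributed setting, each GPU would apply such a kernel to its own subdomain and exchange only the halo planes with neighbouring GPUs; reducing the frequency of these exchanges as the iterations proceed is one way to lower the communication overhead, in the spirit of the task-parallel strategy described in the abstract.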

Item Type: Journal Article (Original Article)
Division/Institute: 10 Strategic Research Centers > ARTORG Center for Biomedical Engineering Research > ARTORG Center - Cardiovascular Engineering (CVE)
Graduate School: Graduate School for Cellular and Biomedical Sciences (GCB)
UniBE Contributor: Obrist, Dominik
ISSN: 0021-9991
Publisher: Elsevier
Language: English
Submitter: Dominik Obrist
Date Deposited: 11 Feb 2022 10:56
Last Modified: 05 Dec 2022 16:02
Publisher DOI: 10.1016/j.jcp.2021.110329
BORIS DOI: 10.48350/164025
URI: https://boris.unibe.ch/id/eprint/164025
