Thursday 15 Oct 2020: Whence and whither HPC?
David Acreman - University of Exeter, Physics and Astronomy
High-performance computing (HPC) has delivered truly exponential growth in performance over several decades, and access to high-end computing has underpinned many scientific developments and discoveries. Computational science requires suitable hardware, software and algorithms in order to address scientific problems. As we look ahead to the future of scientific and high-performance computing, significant changes in all of these areas seem inevitable if we are to continue benefiting from increasing performance and productivity.
The Isambard HPC service (operated by the GW4 consortium and the UK Met Office) provides access to cutting-edge hardware and was the first production HPC system in the world to be based on the ARM64 architecture. Isambard is undergoing a major upgrade that will provide access to the next generation of new and novel HPC hardware. I will review some of the experiences with Isambard thus far and look ahead to what we can expect from the upgraded hardware.
To make effective use of hardware developments we frequently need to consider corresponding developments in algorithms and software. As HPC progresses towards the exascale era we are called upon to exploit ever-increasing levels of parallelism in the hardware. Tried-and-tested techniques, such as domain decomposition, can reach scaling limits on highly parallel systems, and we need to look elsewhere for further parallelism. One opportunity for increasing parallelism in initial value problems is to use time-parallel methods. I will present results from applying the REXI (Rational Approximation to Exponential Integrators) time-stepping method to a finite-element solution of the shallow water equations, implemented using the domain-specific language Firedrake.
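The appeal of REXI-style methods is that the matrix exponential is approximated by a sum of rational terms, so each term reduces to an independent linear solve that can be dispatched in parallel. The sketch below illustrates that structure in NumPy/SciPy only; it is not the Firedrake implementation from the talk, and the coefficients come from a (2,2) Padé approximant of the exponential rather than the actual REXI construction.

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import residue

# Idea: exp(tau*L) u  ~  k*u + sum_j r_j * (tau*L - p_j*I)^{-1} u
# Each j-term is an independent linear solve, so the sum over j can
# be evaluated in parallel (the source of "time parallelism" here).

# (2,2) Pade approximant of exp(x), coefficients highest degree first,
# as scipy.signal.residue expects (a stand-in for true REXI coefficients)
num = [1 / 12.0, 1 / 2.0, 1.0]
den = [1 / 12.0, -1 / 2.0, 1.0]
r, p, k = residue(num, den)  # residues, poles, direct (constant) term

def rexi_like_step(L, u, tau):
    """Advance u by one step of du/dt = L u using the rational sum."""
    n = L.shape[0]
    out = k[0] * u  # constant part of the partial-fraction expansion
    for rj, pj in zip(r, p):
        # Independent complex-valued solve per pole -> parallel in j
        out = out + rj * np.linalg.solve(tau * L - pj * np.eye(n), u)
    return out.real  # imaginary parts cancel for real L and u

# Oscillatory test problem: skew-symmetric L, as in linear wave dynamics
L = np.array([[0.0, -1.0], [1.0, 0.0]])
u0 = np.array([1.0, 0.0])
tau = 0.1
u1 = rexi_like_step(L, u0, tau)
exact = expm(tau * L) @ u0
print(np.max(np.abs(u1 - exact)))  # small approximation error
```

A practical REXI implementation uses many more rational terms, chosen so the approximation stays accurate for large imaginary eigenvalues (the oscillatory modes of the shallow water equations), which is what allows long time steps with the solves spread across processors.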