Reproducible Benchmarking of Parallel Stencil Codes

Guerrera, Danilo and Maffia, Antonio and Burkhart, Helmar. (2016) Reproducible Benchmarking of Parallel Stencil Codes.

Full text not available from this repository.

Official URL: http://edoc.unibas.ch/53292/



The state of the art in performance reporting in the High Performance Computing field omits details that are essential to test and reuse results, undermining what has been considered a pillar of science since the 17th century: the scientific method. Every scientist must be able to understand and extend the work of another. Modern architectures are extremely complex, and scientists often focus their efforts on the problem they are pursuing while ignoring the importance of making their science reproducible. We acknowledge the time and effort necessary to follow good practices in computational science, yet consider reproducibility to be extremely important. For this reason, we first designed and implemented a software framework, prova!, to help scientists in their research, allowing them to focus on their core work while the framework takes care of reproducibility, and then used it in our own research on benchmarking parallel stencil codes.

Stencil codes are an important and widely used pattern in computational science, but their low arithmetic intensity makes it hard to achieve good performance. Several approaches to stencil computation have therefore been proposed and several stencil compilers implemented. Starting from the problem of stencil computation, we turn to stencil compilers. How should they be compared? Many factors can affect the outcome of a comparison, such as the stencil itself (what if we change the stencil computation?) and the architecture (does compiler A always outperform compiler B, even on a different architecture?). How can they be evaluated while varying both the stencil to compute and the architecture? Using prova! we can compare the same stencil on different architectures, as well as different stencils on the same or different machines. Performance models are needed to identify the relevant benchmarking parameters.
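To make the notion of low arithmetic intensity concrete, the following is a minimal sketch (not code from the paper) of a 5-point 2D Jacobi stencil, one of the simplest and most common stencil kernels, together with a back-of-the-envelope intensity estimate; the flop and byte counts assume double precision and the pessimistic case of no cache reuse:

```python
import numpy as np

def jacobi_2d_step(grid):
    """One sweep of a 5-point 2D Jacobi stencil: each interior point
    is replaced by the average of its four nearest neighbours."""
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new

# Per updated point: 4 flops (3 adds + 1 multiply) against roughly
# 5 double-precision memory accesses (4 loads + 1 store, 8 bytes each,
# assuming no cache reuse) -> about 0.1 flop/byte, far below what is
# needed to saturate the floating-point units, hence memory-bound.
flops_per_point = 4
bytes_per_point = 5 * 8
intensity = flops_per_point / bytes_per_point  # 0.1 flop/byte
```

In practice neighbouring points share loads through the cache, so the effective intensity is somewhat higher, but the kernel remains memory-bound on typical machines; this is exactly why different stencil compilers (tiling, blocking, vectorization strategies) can produce widely different performance for the same stencil.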
We plan to interpret the performance data with models such as the Execution-Cache-Memory (ECM) performance model and to evaluate how performance is affected by explicitly pinning the threads according to different pinning strategies. The goal is to understand how the compilers behave in relation to the chosen architecture and to identify a subset of parameters to take into account in order to predict the final performance. Furthermore, we propose a standardized way of describing an experiment, extracting the minimum amount of information needed for reproducibility purposes. We try to address the questions: “Are the results of our research reproducible? Can other scientists trust our conclusions?” To this end we conduct our research using prova!, a framework that strengthens and lends credibility to our results: through it we make available the source code, dependencies, environment, build process, running and post-processing scripts, and the raw data used to generate the graphs. Conducting research this way has several positive effects: the researcher is left with complete documentation; it facilitates follow-up studies by allowing others to build upon existing work; and, in case of discrepancies during a replication, it helps to identify and address the root of the problem.
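The way a performance model turns machine parameters into a prediction can be illustrated with a simplified roofline-style bound (a much coarser relative of the ECM model, which additionally resolves the individual cache levels); the peak and bandwidth figures below are hypothetical placeholders, not measurements from the paper:

```python
def roofline_bound(peak_gflops, mem_bw_gbs, intensity_flop_per_byte):
    """Upper bound on achievable performance in GFLOP/s: a kernel is
    limited either by the core's peak compute rate or by how many
    flops the memory bandwidth can feed at the given intensity."""
    return min(peak_gflops, mem_bw_gbs * intensity_flop_per_byte)

# Hypothetical machine: 40 GFLOP/s peak per core, 20 GB/s memory
# bandwidth.  For a low-intensity stencil (~0.1 flop/byte) the memory
# term dominates, so the bound is 20 * 0.1 = 2 GFLOP/s.
bound = roofline_bound(40.0, 20.0, 0.1)
```

Comparing such a model-derived bound with measured throughput is one way to decide whether a stencil compiler has reached the hardware limit on a given architecture or whether further optimization (or a different pinning strategy) is worth pursuing.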
Faculties and Departments: 05 Faculty of Science > Departement Mathematik und Informatik > Ehemalige Einheiten Mathematik & Informatik > High Performance and Web Computing (Burkhart)
05 Faculty of Science > Departement Mathematik und Informatik > Informatik > High Performance Computing (Ciorba)
UniBasel Contributors: Guerrera, Danilo and Maffia, Antonio and Burkhart, Helmar
Item Type: Other
Publisher: 20th International Supercomputing Conference (ISC 2016)
Note: Publication type according to Uni Basel Research Database: Other publications
Last Modified: 02 Nov 2020 12:12
Deposited On: 02 Nov 2020 12:12
