Advancing basic sciences and engineering requires access to large computational resources. Simulation Laboratories (SimLabs) are a new concept that aims to promote the efficient use of today's supercomputers for scientific applications by embedding community-oriented research and support teams at supercomputing centers.
Through active research, the Cyprus Institute SimLab builds and provides expertise on present and future supercomputing architectures. Its research program encompasses both fundamental physics topics and algorithmic developments. A particular focus is Lattice Quantum Chromodynamics (QCD), which enables ab initio simulation of the strong force of the Standard Model of elementary particle physics.
The Cyprus Institute, in partnership with the Jülich Supercomputing Centre (JSC) and DESY-Zeuthen, operates a SimLab in "Nuclear and Particle Physics". The SimLab group at the Computation-based Science and Technology Research Center (CaSToRC) carries out cross-disciplinary activities in algorithmic and code development across several CyI research themes, such as Climate Modeling. One of its central activities is the ab initio simulation of Quantum Chromodynamics on the lattice (lattice QCD).
Lattice QCD is an established, well-scaling application, currently running on hundreds of thousands of cores on multi-petascale supercomputers.
It is used as a pilot application for novel supercomputer designs and has catalyzed the development of commercial supercomputer solutions such as the IBM Blue Gene series.
The SimLab researchers provide high-level technical support on computational issues that span CyI's and the region's scientific disciplines, including:
- Profiling and benchmarking community application codes
- Scaling application codes to take advantage of Europe's competitive computer time allocation calls (e.g. PRACE)
- Porting application kernels to novel computer architectures (e.g. GPUs and Intel Xeon Phi)
Speeding up application codes requires both algorithmic improvements and refactoring of the software itself. For example, implementing the improved ARPACK-CG algorithm accelerated the most demanding part of the calculation, but pre- and post-processing, as well as I/O, then became the new bottleneck, limiting the overall speed-up to 4x. The full speed-up of over 20x was achieved only after these components were also refactored in a new code.
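This shift of the bottleneck is the familiar behavior described by Amdahl's law: once the dominant kernel is accelerated, the remaining unoptimized components cap the overall gain. A minimal sketch, with purely illustrative fractions (not the project's measured profile):

```python
def amdahl_speedup(f, s):
    """Overall speed-up when a fraction f of the runtime
    is accelerated by a factor s, per Amdahl's law."""
    return 1.0 / ((1.0 - f) + f / s)

# Illustrative numbers: if the eigensolver accounts for ~80% of the
# runtime and is made arbitrarily fast, the remaining pre-/post-processing
# and I/O still cap the overall speed-up near 5x.
print(round(amdahl_speedup(0.80, 1000), 2))  # → 4.98
```

This is why refactoring the pre-/post-processing and I/O paths was necessary before the full benefit of the algorithmic improvement could be realized.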
- Constantia Alexandrou
- Simone Bacchio
- Theodoros Christoudias
- Jacob Finkenrath
- Kyriakos Hadjiyiannakou
- Giannis Koutsou
- Davide Nole
- Shuhei Yamamoto
- Center: CaSToRC