Students will learn about fundamental issues in the design and development of parallel programs for various types of parallel computers. They will be able to develop complex parallel algorithms, analyze them, and evaluate their efficiency. Students will also learn how to approach problems that require parallelization, and will be introduced to visualization techniques useful in the analysis of engineering and scientific data.
Parallel architectures and parallel programming models, speedup, efficiency, scalability, linear systems of equations, sparse matrices and graphs, partitioning methods, iterative methods, coloring schemes, incomplete factorizations, domain decomposition and Schwarz iterative methods, preconditioning.
Parallel programming with MPI. The basic structure and commands of MPI programs are taught. As a physical application, simulations using four-dimensional discretized lattices are studied. The division of the lattice among processors is discussed.
Introduction to scientific visualization: two- and three-dimensional data types; visual representation schemes for scalar, vector, and tensor data; isosurface and volume visualization methods; visual monitoring; interactive steering.
- This course assumes basic programming skills in C or Fortran
- Experience compiling and running self-written and third-party code (libraries)
- Familiarity with command line tools
Bibliography and teaching material:
- The course uses several on-line sources for teaching material including:
- PRACE PATC on-line training courses
- XSEDE on-line training material
- Training material from Lawrence Livermore National Laboratory
- For GPU programming:
- CUDA by Example: An Introduction to General-Purpose GPU Programming, J. Sanders and E. Kandrot
- The following assessment methods will be combined for the final grade:
- Homework exercises
- In-class exercises
- A final examination