Parallel Scientific Computing


With the growing complexity of computer simulations and the ubiquity of multi-core processors, parallel computing has become important to all fields of science and technology. This course covers parallel high-performance computing at all levels: from the basics to high-level parallelism and grid computing. It is a hands-on course with practical programming exercises.


Content

Parallel programming paradigms, vectorization, shared-memory and multi-core programming, OpenMP, multi-threading, the Message Passing Interface (MPI), non-determinism in parallel programs, parallel debugging, domain decomposition schemes, communication scheduling methods, parallel linear algebra and parallel solvers, data structures and abstractions, parallel algorithms and libraries, grid computing, resource allocation models.

Learning goals
  • Choosing the proper programming paradigm for an application
  • Shared memory implementation using OpenMP
  • Distributed memory implementation using MPI
  • Knowing parallel algorithms, data structures, and numerical solvers
  • Implementing loosely coupled applications on a grid of workstations

Lecture language: ENGLISH

Please find below the lecture syllabus. Lecture slides and teaching material will be posted here during the semester:

  • week 1, 28.9.2010 (IS): admin/intro; parallel programming paradigms; performance evaluation; definitions; vectorization and pipelining; loop unrolling
  • week 2, 5.10.2010 (IS): MPI 1: message passing basics; point-to-point communications; collective communications
  • week 3, 12.10.2010 (PA): shared memory programming model; threads vs. processes; generation/administration/coordination of threads
  • week 4, 19.10.2010 (PA): basic OpenMP; loop parallelization on multi-cores; the fork-join principle of OpenMP; variations on the parallel construct; shared vs. private variables; beyond loop-parallelism
  • week 5, 26.10.2010 (PA): Hybrid multi-threading/multi-processing by combining OpenMP with MPI
  • week 6, 2.11.2010 (PA): parallel matrix-vector product
  • week 7, 9.11.2010: guided tour of the HPC center of ETH
  • week 8, 16.11.2010 (IS): MPI 2: derived data types; data packing/unpacking; process groups; communicators; MPI topologies; non-determinism; error handling; parallel debugging.
  • week 9, 23.11.2010 (IS): domain decomposition; load balancing; communication scheduling; graph partitioning algorithms
  • week 10, 30.11.2010 (PA): parallel iterative solvers; PCG/GMRES; Newton iteration
  • week 11, 7.12.2010 (IS): parallel data and operator abstractions; parallel algorithms 1: sorting
  • week 12, 14.12.2010 (IS): parallel algorithms 2: tree search, distributed termination detection, fast N-body solvers; overview of parallel libraries (ScaLAPACK, PETSc, POOMA, PPM, ALPS)
  • week 13, 21.12.2010 (IS): grid computing; performance indicators; resource allocation; the Gamma model