Course detail

Parallel Computations on GPU

FIT-PCG, Acad. year: 2020/2021

The course covers the architecture and programming of graphics processing units from NVIDIA and, to a lesser extent, AMD. First, the architecture of GPUs is studied in detail. Then, the program execution model, based on hierarchical thread organisation and SIMT execution, is discussed. Next, the memory hierarchy and synchronization techniques are described. After that, the course explains the novel techniques of dynamic parallelism and data-flow processing, concluding with the practical use of multi-GPU systems in environments with shared (NVLink) and distributed (MPI) memory. The second part of the course is devoted to high-level programming techniques and libraries based on the OpenACC technology.
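
For illustration only (not part of the official course materials), the sketch below shows a minimal CUDA vector-addition kernel of the kind covered at the beginning of the course: a grid of thread blocks executes the kernel in the SIMT model, each thread handles one element, and data are moved explicitly between host and device memory. The problem size, block size, and all identifiers are arbitrary choices made for this sketch.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each thread computes one element: the grid/block/thread hierarchy
    // maps SIMT threads onto the data.
    __global__ void vectorAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;                    // illustrative problem size
        const size_t bytes = n * sizeof(float);

        float *hA = (float *)malloc(bytes);
        float *hB = (float *)malloc(bytes);
        float *hC = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { hA[i] = 1.0f; hB[i] = 2.0f; }

        float *dA, *dB, *dC;
        cudaMalloc(&dA, bytes);
        cudaMalloc(&dB, bytes);
        cudaMalloc(&dC, bytes);

        // Explicit transfers between host and device memory.
        cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

        const int block = 256;                    // illustrative block size
        const int grid  = (n + block - 1) / block;
        vectorAdd<<<grid, block>>>(dA, dB, dC, n);

        cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hC[0]);             // expected: 3.000000

        cudaFree(dA); cudaFree(dB); cudaFree(dC);
        free(hA); free(hB); free(hC);
        return 0;
    }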

Language of instruction

Czech

Number of ECTS credits

5

Mode of study

Not applicable.

Learning outcomes of the course unit

Knowledge of parallel programming on GPUs for general-purpose computing; orientation in accelerated systems, libraries, and tools.
Understanding of the hardware limitations that affect the efficiency of software solutions.

Prerequisites

Knowledge gained in the AVS course and, to a lesser extent, in the PRL and PPP courses.

Co-requisites

Not applicable.

Planned learning activities and teaching methods

Not applicable.

Assessment methods and criteria linked to learning outcomes

Assessment of two projects (14 hours in total), computer laboratory exercises, and a midterm examination.
Exam prerequisites:
To obtain at least 20 out of the 40 points awarded for the projects and the midterm examination.

Course curriculum

Not applicable.

Work placements

Not applicable.

Aims

To familiarize students with the architecture and programming of graphics processing units for general-purpose computing using NVIDIA libraries and the OpenACC standard. To learn how to design and implement accelerated programs that exploit the potential of GPUs. To gain knowledge of the available libraries for GPU programming.

Specification of controlled education, way of implementation and compensation for absences

  • Missed labs can be made up on alternative dates.
  • A make-up session for missed labs will be held in the last week of the semester.

Recommended optional programme components

Not applicable.

Prerequisites and corequisites

Not applicable.

Basic literature

Not applicable.

Recommended reading

Current PPT slides for lectures (EN)
Kirk, D., and Hwu, W.: Programming Massively Parallel Processors: A Hands-on Approach, Elsevier, 2010, 256 p., ISBN 978-0-12-381472-2.
Nvidia CUDA documentation: https://docs.nvidia.com/cuda/ (EN)
OpenACC documentation: https://www.openacc.org/ (EN)
Storti, D., and Yurtoglu, M.: CUDA for Engineers: An Introduction to High-Performance Parallel Computing, Addison-Wesley Professional, 1st edition, 2015, ISBN 978-0134177410.

Classification of course in study plans

  • Programme MITAI Master's

    specialization NISY , 0 year of study, winter semester, elective
    specialization NADE , 0 year of study, winter semester, elective
    specialization NBIO , 0 year of study, winter semester, elective
    specialization NCPS , 0 year of study, winter semester, elective
    specialization NEMB , 0 year of study, winter semester, elective
    specialization NHPC , 0 year of study, winter semester, compulsory
    specialization NGRI , 0 year of study, winter semester, elective
    specialization NIDE , 0 year of study, winter semester, elective
    specialization NISD , 0 year of study, winter semester, elective
    specialization NMAL , 0 year of study, winter semester, elective
    specialization NMAT , 0 year of study, winter semester, elective
    specialization NNET , 0 year of study, winter semester, elective
    specialization NSEC , 0 year of study, winter semester, elective
    specialization NSEN , 0 year of study, winter semester, elective
    specialization NSPE , 0 year of study, winter semester, elective
    specialization NVER , 0 year of study, winter semester, elective
    specialization NVIZ , 0 year of study, winter semester, elective

Type of course unit


Lecture

26 hrs, optional

Teacher / Lecturer

Syllabus

  1. Architecture of graphics processing units.
  2. CUDA programming model, thread execution.
  3. CUDA memory hierarchy.
  4. Synchronization and reduction (see the sketch after this syllabus).
  5. Dynamic parallelism and unified memory.
  6. Design and optimization of GPU algorithms.
  7. Stream processing, computation-communication overlapping.
  8. Multi-GPU systems.
  9. Nvidia Thrust library.
  10. OpenACC basics.
  11. OpenACC memory management.
  12. Code optimization with OpenACC.
  13. Libraries and tools for GPU programming.
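
For illustration only (not part of the lecture materials), the sketch below combines several topics from lectures 3-5: a per-block sum reduction in shared memory synchronised with __syncthreads(), with unified memory used to keep the host code short. The block size, data, and identifiers are arbitrary choices made for this sketch.

    #include <cstdio>
    #include <cuda_runtime.h>

    #define BLOCK 256   // illustrative block size (power of two)

    // Each block reduces BLOCK input elements to one partial sum in shared
    // memory; __syncthreads() separates the steps of the reduction tree.
    __global__ void blockSum(const float *in, float *partial, int n)
    {
        __shared__ float s[BLOCK];
        const int tid = threadIdx.x;
        const int i   = blockIdx.x * blockDim.x + tid;

        s[tid] = (i < n) ? in[i] : 0.0f;
        __syncthreads();

        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (tid < stride)
                s[tid] += s[tid + stride];
            __syncthreads();
        }

        if (tid == 0)
            partial[blockIdx.x] = s[0];           // one partial sum per block
    }

    int main()
    {
        const int n = 1 << 20;
        const int grid = (n + BLOCK - 1) / BLOCK;

        float *in, *partial;                      // unified memory (lecture 5)
        cudaMallocManaged(&in, n * sizeof(float));
        cudaMallocManaged(&partial, grid * sizeof(float));
        for (int i = 0; i < n; i++) in[i] = 1.0f;

        blockSum<<<grid, BLOCK>>>(in, partial, n);
        cudaDeviceSynchronize();                  // wait before touching managed data

        double sum = 0.0;
        for (int b = 0; b < grid; b++) sum += partial[b];
        printf("sum = %.0f (expected %d)\n", sum, n);

        cudaFree(in); cudaFree(partial);
        return 0;
    }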

Exercise in computer lab

12 hrs, compulsory

Teacher / Lecturer

Syllabus

  1. CUDA: Memory transfers, simple kernels.
  2. CUDA: Shared memory.
  3. CUDA: Texture and constant memory.
  4. CUDA: Dynamic parallelism and unified memory.
  5. OpenACC: basic techniques (see the sketch after this list).
  6. OpenACC: advanced techniques.
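
For illustration only (not part of the lab materials), the sketch below shows the directive-based style used in the OpenACC labs: a SAXPY loop offloaded with a single #pragma acc parallel loop and explicit data clauses (plain C/C++ source, typically built with an OpenACC-capable compiler such as the NVIDIA HPC SDK compilers with -acc). The problem size and identifiers are arbitrary choices made for this sketch.

    #include <stdio.h>
    #include <stdlib.h>

    // SAXPY offloaded with an OpenACC directive: the compiler generates the
    // GPU kernel and moves the data described by the copy clauses.
    int main(void)
    {
        const int n = 1 << 20;                    /* illustrative problem size */
        const float a = 2.0f;
        float *x = (float *)malloc(n * sizeof(float));
        float *y = (float *)malloc(n * sizeof(float));
        for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 1.0f; }

        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);              /* expected: 3.000000 */
        free(x);
        free(y);
        return 0;
    }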

Project

14 hrs, compulsory

Teacher / Lecturer

Syllabus

  • Development of an application in Nvidia CUDA
  • Development of an application in OpenACC