- Lecture notes from start of course through Oct 12.
(Topical summary on course web page:
http://www.cis.udel.edu/~saunders/courses/372/01f/)
- Textbook: Chapters 1, 2, 3, 4, 5, 7 (through 7.4), 11.
- Individual Lab 1, Group Project 1.
- paradigms of parallel computing: data parallel, task parallel, pipelining
- parallel architectures: fine grain parallelism in a uniprocessor,
SIMD, vector machines, array processors,
MIMD, uniform shared memory, nonuniform
shared memory, distributed memory, distributed shared memory
- SPMD versus MIMD style programming
- interconnection networks: topologies and advantages and disadvantages
- basic MPI program components and format and purpose of each component
- standard message passing in MPI: purpose of each field
- problems in parallel programming: deadlock, nondeterminism and races,
load imbalance, communication overhead versus computation per process:
what are these problems, symptoms, causes, approaches to dealing with them
- example application: numerical integration
- performance evaluation
The exam is closed book, closed neighbor and you will have the full class period to work. You will be given a list of MPI commands with their parameters for reference. In general, the exam will be a combination of testing your basic knowledge and understanding of the concepts covered in class and application of the concepts. The questions will be of the form:
    // r has been initialized to my rank; p is the number of processes.
    int A = r;
    MPI_Status status;
    MPI_Send(&A, 1, MPI_INT, (r+1)%p, 0, MPI_COMM_WORLD);
    MPI_Recv(&A, 1, MPI_INT, (r-1+p)%p, 0, MPI_COMM_WORLD, &status);
    if (r == 0) printf("%d\n", A);

What happens in a buffered message MPI implementation (as on the Alphas)?
Please answer each question as well as you can. Partial credit will be given when possible on any question in the exam.
Review your lecture notes, labs, and textbook chapters. Rewrite some of the simple programs on your own, given a specification of what the program is supposed to do.