CISC 372 Parallel Programming
Course Syllabus (Spring 2013)

Course objectives overview: See the course catalog description. More specifically, our goal is to build conceptual and operational skills for multiprocessor programming. We will become well acquainted with the message-passing library MPI, suitable for distributed (and shared) memory multiprocessors, and the annotation system OpenMP for shared-memory multiprocessors. Beyond that we will touch on other parallel programming tools, which may include Pthreads, GPUs, STL algorithms, Cilk, etc.

Meeting Times: WeFr 8:40AM-9:55AM, Purnell Hall Room 233A (Feb 6, 2013 through May 10, 2013, plus final exam)

Instructor: David Saunders (home page). Office Hours: 10am-12pm Thursdays in 414 Smith Hall.

Teaching Assistant: Maria Ruiz Varela. Office Hours: 1pm-3pm Wednesdays in 102 Smith Hall.

Required Textbook: Peter S. Pacheco, An Introduction to Parallel Programming (links to source code, errata), Morgan Kaufmann Publishers, 2011.

Online resources:

Other References:

Programming Environment and Computer Usage:

We will write parallel programs using MPI (Message Passing Interface). MPI is a library of routines that allows programs running in parallel to talk to each other and send data back and forth. These programs will be compiled and executed on three SiCortex machines providing 72 cores each and -- as circumstances permit -- on some other architectures. If you don't have an EECIS academic or research account, you must obtain one as soon as possible. Go to the EECIS site and click on "Apply for Account". Ask for an academic account. For information on using the SiCortices, join the Sakai "sicortex" site and read the wiki.


Sketch of Topics and Pace:

We will work our way through most of the Pacheco textbook at roughly the pace of two weeks per chapter. But we will move faster on Chapters 1 and 2, dwell on Chapters 3 and 4 (MPI and OpenMP), and then see how it goes for the rest. We'll learn how to use most of the functionality of the MPI message passing system. However, the emphasis will be on effective use of the most important MPI functions. For shared memory systems we'll compare MPI programs to programs using OpenMP and/or Pthreads. We will also look at some other programmer tools for expressing and analyzing parallel programs. Reading and homework assignments will be announced and provided via Sakai.

Late Assignment Submission Policy:

> Late assignments will be penalized 10% per day late, and accepted up to one week late (70% penalty). This includes weekends. It is up to you to determine the version of your assignment to be graded. You must weigh the late penalty against the completeness of your assignment.

The due dates are to be taken seriously and you should not expect them to be extended. The pace of work is implicit in the due dates. NO late programs or homeworks will be accepted for full credit without discussion with me PRIOR to the due date.

Regrading Policy:

If you are dissatisfied with a grade on a homework, programming assignment, or exam, you should consult the instructor directly within a week of the day the graded assignment was returned to you. No regrade requests will be considered after this one-week period. If the TA graded the work, consult the TA first.

Policy on Academic Dishonesty:

You are permitted to consult orally with other students and professors on conceptual problems or for debugging assistance on all programming assignments. Any evidence of collaboration beyond this will be handled as stated in the Official Student Handbook of the University of Delaware. All writing of text and code must be your own, without unattributed use or paraphrase of the work of classmates or other people. If you are in doubt regarding the requirements, please consult with me before you submit any assignment in this course.