ENEE651/CMSC751: Parallel Algorithms, Spring 2016
Time and Location
- Dr. U. Vishkin, firstname.lastname@example.org
- Office hours: Tu 5:00-6:00 (by appointment) at AVW 2365
- Ankit Mondal, email@example.com
- Office hours: MW 2:00-3:00 at AVW 1145
Tutorial, Manual, Software Tools and Other
Homework 1: Exercises 1-5. Due: February 19.
Homework 2: Exercises 6-11. Due: March 3.
Homework 3: Exercises 13-16. Note: Exercise 16 is challenging. Due: March 22.
Homework 4: Exercises 18,20-24. Due: March 31.
Homework 5: Exercises 25-31. Due: April 12.
Homework 6: Exercises 35-40. Due: April 26.
Homework 7: Exercises 41-47. Due: May 3.
- Introduction to the theory of parallel algorithms.
The course will highlight parallel algorithmic thinking for obtaining
good speed-ups with respect to serial algorithms.
The main objective of the class presentations and the written homework will be to study the theory of parallel algorithms.
This includes the design and analysis of parallel algorithms--primarily standard asymptotic analysis.
The main objective of the programming assignments will be to reduce this knowledge to
practice for improved understanding.
One highlight will be achieving hard speedups for the more advanced parallel algorithms studied, demonstrating to students that
they can do it even within the limited scope of course programming assignments.
The programs will run on
real parallel hardware whose design objectives included cost-effective support for the parallel algorithms theory studied.
- Most opportunities for performance growth in computing over the last decade have
been tied to the exploitation of parallelism. Parallel computing has made big strides and
is being used on an unprecedented scale within companies like Google and Facebook,
for supercomputing applications, and in the form of GPUs and multi-cores.
The class will review some of the most exciting applied research
issues facing computer science and engineering as a result of these opportunities. In
particular, it will include a close examination of
the extent to which the current "billion-transistor-per-chip" era provides a way for
building a truly general-purpose parallel computer system on-chip.
Of Special Interest
- 4-6 parallel programming assignments will be given.
- To be run on the Paraleap, the UMD XMT 64-processor computer.
- Basic knowledge and understanding of algorithms and data structures
and computing systems.
- Class notes and research papers will be provided by the instructor.
Please download and print out the first item (104 pages) under
J. JaJa, An Introduction to Parallel Algorithms, Addison Wesley, 1992.
D.E. Culler and J.P. Singh, Parallel Computer Architecture, Morgan Kaufmann,
1999. The following quote appears under the title Potential Breakthroughs:
..."breakthrough may come from architecture" ... "that is,
to truly design a machine that can look to the programmer like a PRAM".
- J.L. Hennessy and D.A. Patterson. Computer Architecture a Quantitative
Approach, 5th Edition. Morgan-Kaufmann, San Francisco, 2011.
Some Core Topics:
- Introduction to Parallel Algorithms:
Performance of Parallel Algorithms Including Work and Depth, Optimality Notions
- Basic Techniques: Balanced Trees, Pointer Jumping, Divide-and-Conquer,
Partitioning, Pipelining, Accelerated Cascading
- Trees and Lists: List Ranking, The Euler Tour Technique,
Tree Contraction, Evaluation of Arithmetic Expressions
- Searching, Merging and Sorting
- Graphs: Connected Components, Transitive Closure, Path Problems
- Strings: Some string matching algorithms
- Introduction to Parallel Processing
- Introduction to the state of the art of parallel computing and discussion of the
question: Can fine-grained, large-scale on-chip parallel computing be harnessed
towards faster completion time of a single general-purpose computing task?
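As a small taste of the techniques listed above, pointer jumping applied to list ranking can be sketched as follows. This is a minimal serial simulation of the synchronous parallel rounds, not the course's XMTC code; the function name and list encoding are illustrative assumptions. The work is O(n log n) and the depth is O(log n), since each round's loops are fully independent and could execute in parallel.

```python
import math

# List ranking via pointer jumping: the list is given as a successor
# array (next_node[i] == i marks the tail), and we compute each node's
# distance to the tail. Each of the ceil(log2 n) rounds doubles the
# distance every pointer spans; within a round, every iteration of the
# comprehensions is independent, i.e., one parallel step per round.
def list_rank(next_node):
    n = len(next_node)
    rank = [0 if next_node[i] == i else 1 for i in range(n)]
    nxt = list(next_node)
    for _ in range(max(1, math.ceil(math.log2(n)))):
        new_rank = [rank[i] + rank[nxt[i]] for i in range(n)]  # add rank of node 2^k away
        new_nxt = [nxt[nxt[i]] for i in range(n)]              # jump pointers: 2^k -> 2^(k+1)
        rank, nxt = new_rank, new_nxt
    return rank

# For the list 2 -> 0 -> 1 -> 3 (tail), ranks are distances to the tail:
# print(list_rank([1, 3, 0, 3]))  # [2, 1, 3, 0]
```

Note the contrast with the obvious serial algorithm, which walks the list in O(n) time: the parallel version does more total work but finishes in logarithmically many rounds, the kind of work/depth trade-off analyzed in the course.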
- 10% - Written homework.
- 25% - Programming projects. You must submit all programming projects
to get a grade.
- 25% - 1st mid-term exam. Exam date: March 10, in class.
- 40% - 2nd mid-term exam. Exam date: May 5, in class. There will be no final.