2. Parallelization
Jan 13, 2021
Motivations for parallel computation
Recall Moore's law: transistor density doubles roughly every 18 months. There are problems with this trend: heat dissipation and the speed of light physically limit how much further single-processor performance can grow. How, then, can computational speed keep improving?
Pipelining
A typical processor workflow consists of five stages:
1. Fetch instruction
2. Decode
3. Read operands
4. Compute
5. Write result
Now compare the two execution schedules:
[Diagram: each instruction passes through stages 1-5; in the pipelined schedule consecutive instructions are offset by one cycle, so their stages overlap in time]
For \(N\) instructions, a single non-pipelined processor needs \(5N\) cycles (5 cycles per instruction). In the pipelined model the first instruction takes 5 cycles and each later instruction completes one cycle after its predecessor, for a total of \(5 + (N-1)\) cycles. In the 90s, the introduction of pipelining provided this kind of constant-factor speedup, which helped keep up with Moore's law.
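A minimal sketch of this cycle count in Python, assuming the 5-stage pipeline above (the function names and the choice of \(N\) are illustrative):

```python
def sequential_cycles(n, stages=5):
    """Non-pipelined: the next instruction starts only after the previous one finishes."""
    return stages * n


def pipelined_cycles(n, stages=5):
    """Pipelined: the first instruction fills the pipeline, then one finishes per cycle."""
    return stages + (n - 1)


N = 1000
print(sequential_cycles(N))                        # 5000 cycles
print(pipelined_cycles(N))                         # 1004 cycles
print(sequential_cycles(N) / pipelined_cycles(N))  # ~4.98, approaching 5x for large N
```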
Problem: how to maintain exponential growth?
Early parallelization
Attempts to parallelize computation followed. These ran into issues: changing the architecture would break the software, so parallel software was not portable.
This was followed by:
- Introduction of memory models to simulate sequential models on a high level
- Introduction of multicore processors
- Parallel algorithms became important → even with better hardware, software must be written to run across multiple cores to obtain any benefit
Distributed vs. parallel computation
Distributed system - geographically distributed processors connected by a network - communicate via message passing
Parallel system - tightly coupled processors - communicate through shared memory
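A toy contrast of the two communication styles, sketched in Python: threads updating a shared variable for the shared-memory (parallel) style, and processes exchanging results over a queue for the message-passing (distributed) style. The summation task and all names are illustrative assumptions.

```python
import threading
import multiprocessing as mp


def shared_memory_sum(values, n_workers=4):
    """Parallel style: workers cooperate through a shared variable."""
    total = 0
    lock = threading.Lock()

    def worker(chunk):
        nonlocal total
        local = sum(chunk)      # compute privately
        with lock:              # then update shared memory
            total += local

    chunks = [values[i::n_workers] for i in range(n_workers)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return total


def _partial_sum(chunk, queue):
    queue.put(sum(chunk))       # "send" the partial result as a message


def message_passing_sum(values, n_workers=4):
    """Distributed style: workers share nothing and only exchange messages."""
    queue = mp.Queue()
    chunks = [values[i::n_workers] for i in range(n_workers)]
    procs = [mp.Process(target=_partial_sum, args=(c, queue)) for c in chunks]
    for p in procs:
        p.start()
    total = sum(queue.get() for _ in range(n_workers))
    for p in procs:
        p.join()
    return total


if __name__ == "__main__":
    data = list(range(1000))
    print(shared_memory_sum(data))    # 499500
    print(message_passing_sum(data))  # 499500
```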
Milestones
1940s
The first use (?) of parallel computation was in breaking the Enigma encryption
1978 PRAM - parallel random access machine
It gives the illusion of the RAM model on a parallel architecture, but it has issues:
- synchronizing computation between processors is hard → processors have to slow down to synchronize
- it assumes high-bandwidth memory access over wires → the physical architecture does not scale
- it assumes failure-free processors, which stops holding as \(N\) (the number of processors) increases
Despite these issues, the PRAM model is the foundation for ~70% of distributed and parallel algorithms.
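To give a flavor of the PRAM style, below is a single-core simulation of the classic logarithmic-round parallel sum: in each synchronous round every active "processor" reads shared memory and then writes it in lockstep. This is a sketch under assumed names; the synchronization the loop performs explicitly is exactly what the model grants for free.

```python
def pram_style_sum(shared):
    """Simulated PRAM parallel sum: in each round, 'processor' i adds
    cell i + stride into cell i; the number of rounds is about log2(n)."""
    a = list(shared)             # stands in for the shared memory array
    n = len(a)
    stride = 1
    while stride < n:
        # all processors read in lockstep ...
        updates = {i: a[i] + a[i + stride]
                   for i in range(0, n - stride, 2 * stride)}
        # ... then all write in lockstep
        for i, v in updates.items():
            a[i] = v
        stride *= 2
    return a[0]


print(pram_style_sum(range(16)))  # 120, after log2(16) = 4 rounds
```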
PRAM Model