

Multiprocessing
An Introduction


Prof. David Bernstein
James Madison University

Computer Science Department
bernstdh@jmu.edu


Review
  • Flow of Control:
    • A sequence of control transfers (i.e., transitions between instructions/steps)
  • Concurrent Flows:
    • Two flows, \(X\) and \(Y\), are said to be running concurrently iff \(Y\) began after \(X\) began and before \(X\) finished, or \(X\) began after \(Y\) began and before \(Y\) finished
  • Coordination:
    • Solving problems using concurrent flows often involves coordination
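The definition of concurrent flows can be written down as a predicate on the start and finish times of the two flows. This is a minimal sketch; the function name and time parameters are illustrative, not from the slides.

```python
def concurrent(x_start, x_finish, y_start, y_finish):
    """X and Y run concurrently iff one began after the other began
    and before the other finished (strict overlap of the two flows)."""
    return (x_start < y_start < x_finish) or (y_start < x_start < y_finish)

if __name__ == "__main__":
    print(concurrent(0, 10, 5, 15))  # True: Y began while X was running
    print(concurrent(0, 5, 5, 10))   # False: Y began exactly as X finished
```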
Parallel Flows
  • Parallel Multiprocessing:
    • The ability to execute multiple processes simultaneously
  • Parallel Multithreading:
    • The ability to execute multiple threads within a single process simultaneously
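The distinction between the two kinds of parallel flows can be sketched in Python, whose standard library provides both: `multiprocessing` runs each task in its own process (its own address space), while `threading` runs multiple threads inside one process. The task and function names here are illustrative.

```python
import multiprocessing
import threading

def work(n):
    """A small CPU-bound task: sum the first n integers."""
    return sum(range(n))

def run_in_processes():
    # Parallel multiprocessing: each task runs in a separate process.
    with multiprocessing.Pool(processes=2) as pool:
        return pool.map(work, [10, 20])

def run_in_threads():
    # Parallel multithreading: multiple threads share one process
    # (and, in CPython, contend for the global interpreter lock).
    results = {}
    def target(n):
        results[n] = work(n)
    threads = [threading.Thread(target=target, args=(n,)) for n in (10, 20)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [results[10], results[20]]

if __name__ == "__main__":
    print(run_in_processes())  # [45, 190]
    print(run_in_threads())    # [45, 190]
```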
Hardware Architectures
  • Multicore Processing:
    • The CPU has multiple distinct processing cores (each with its own instruction control unit, arithmetic and logical unit, and cache) capable of executing instructions
  • Symmetric Multiprocessing (SMP):
    • Multiple interconnected CPUs within a single host sharing the same memory
  • Cluster Multiprocessing:
    • Multiple network-interconnected hosts each with its own memory (including grid computing, in which the hosts are heterogeneous)
Decomposition Schemes
  • Task/Functional Decomposition:
    • The problem is decomposed into different tasks involving different data that can be executed in parallel
  • Data/Domain Decomposition:
    • The data are decomposed and the same task is performed in parallel on different pieces of data
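Data/domain decomposition can be sketched with Python's `multiprocessing` module: the input is split into chunks, the same task (here, summing) is applied to each chunk in parallel, and the partial results are combined. The helper names are illustrative.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """The same task, performed on one piece of the decomposed data."""
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Decompose the data into roughly equal chunks, one per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Apply the same task to every chunk in parallel, then combine.
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(100))))  # 4950
```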
Flynn's Taxonomy
  • Single-Instruction/Single-Data (SISD):
    • One operation is performed on one piece of data
  • Single-Instruction/Multiple-Data (SIMD):
    • One operation is performed on multiple pieces of data
  • Multiple-Instruction/Single-Data (MISD):
    • Multiple operations are performed on a single piece of data
  • Multiple-Instruction/Multiple-Data (MIMD):
    • Multiple operations are performed on multiple pieces of data
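The four categories can be illustrated conceptually in Python. This is only a sketch of the *shape* of each category (the Python code itself executes sequentially; real SIMD/MIMD execution happens in hardware):

```python
data = [1, 2, 3]
double = lambda x: 2 * x
square = lambda x: x * x

# SISD: one operation, one piece of data
sisd = double(data[0])                                  # 2

# SIMD: one operation, many pieces of data
simd = [double(x) for x in data]                        # [2, 4, 6]

# MISD: many operations, one piece of data
misd = [f(data[0]) for f in (double, square)]           # [2, 1]

# MIMD: many operations, many pieces of data
mimd = [f(x) for f, x in zip((double, square), data)]   # [2, 4]
```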
Symmetry
  • Symmetric Systems:
    • All processors/hosts are similarly provisioned and execute the same program
  • Asymmetric Systems:
    • Master (Coordination)/Slave (Calculation) Configurations
    • Map (Calculation and Filtering)/Reduce (Synthesis) Configurations
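The Map/Reduce configuration can be sketched with Python's `multiprocessing` and `functools`: workers map a function over the data in parallel (calculation), and a single coordinator reduces the mapped values to one result (synthesis). The function names are illustrative.

```python
from functools import reduce
from multiprocessing import Pool

def square(x):   # map step: calculation
    return x * x

def add(a, b):   # reduce step: synthesis
    return a + b

def map_reduce(data):
    # The map step is performed by the workers, in parallel.
    with Pool(2) as pool:
        mapped = pool.map(square, data)
    # The reduce step is performed by the coordinator.
    return reduce(add, mapped, 0)

if __name__ == "__main__":
    print(map_reduce([1, 2, 3, 4]))  # 1 + 4 + 9 + 16 = 30
```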
Communication Patterns
  • None:
    • Usually only appropriate for "embarrassingly parallel" problems
  • Send and Receive:
    • One or both parties must block while communication takes place
  • Global Barrier:
    • All processors/hosts must use communication to synchronize at one or more points in time
  • Broadcast/Scatter and/or Reduce/Gather:
    • One to many and/or many to one communications are used
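The send-and-receive pattern can be sketched with a pipe from Python's `multiprocessing` module: the receiving end blocks until a message arrives. The worker and message here are illustrative.

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # The worker sends its result to the coordinator over the pipe.
    conn.send("partial result")
    conn.close()

def demo():
    parent, child = Pipe()
    p = Process(target=worker, args=(child,))
    p.start()
    message = parent.recv()  # blocks until the worker has sent
    p.join()
    return message

if __name__ == "__main__":
    print(demo())  # partial result
```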
Libraries
  • GPUs:
    • Compute Unified Device Architecture (CUDA), OpenCL, OpenACC
  • Clusters:
    • Open MPI, MPICH
  • SMPs:
    • OpenMP
There's Always More to Learn