Table of Contents

- Abstract
- Parallel Computing Overview
- Concepts and Terminology
- Parallel Computer Memory Architectures
- Parallel Programming Models
- Designing Parallel Programs
- Parallel Examples


This is the first tutorial in the "Livermore Computing Getting Started" workshop. It is intended to provide only a brief overview of the extensive and broad topic of parallel computing, as a lead-in for the tutorials that follow it. As such, it covers just the very basics of parallel computing, and is intended for someone who is just becoming acquainted with the subject and who is planning to attend one or more of the other tutorials in this workshop. It is not intended to cover parallel programming in depth, as this would require significantly more time. The tutorial begins with a discussion of parallel computing - what it is and how it is used - followed by a discussion of concepts and terminology associated with parallel computing. The topics of parallel memory architectures and programming models are then explored. These topics are followed by a series of practical discussions on a number of the complex issues related to designing and running parallel programs. The tutorial concludes with several examples of how to parallelize simple problems. References are included for further self-study.


What is Parallel Computing?

Serial Computing

Traditionally, software has been written for serial computation:

- A problem is broken into a discrete series of instructions
- Instructions are executed sequentially one after another
- Executed on a single processor
- Only one instruction may execute at any moment in time



For example:
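As a minimal illustration (not part of the original tutorial), the serial model above can be sketched in Python: one worker, one instruction stream, one item handled at a time. The `process` function is a hypothetical stand-in for any per-item computation.

```python
# Serial computation: a problem is broken into a discrete series of
# instructions, executed one after another on a single processor.

def process(record):
    # one unit of work (a stand-in for any per-item computation)
    return record * record

def run_serial(records):
    results = []
    for r in records:  # instructions execute sequentially, one after another
        results.append(process(r))
    return results

print(run_serial([1, 2, 3, 4]))
```

The total running time grows in direct proportion to the number of items, since only one instruction executes at any moment.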



Parallel Computing

In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:

- A problem is broken into discrete parts that can be solved concurrently
- Each part is further broken down to a series of instructions
- Instructions from each part execute simultaneously on different processors
- An overall control/coordination mechanism is employed



For example:
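A hedged sketch of the same per-item computation, parallelized with Python's standard `multiprocessing` module (an illustrative choice, not something the tutorial itself prescribes): the problem is split into discrete parts and the parts execute concurrently in separate worker processes.

```python
# Parallel computation: the problem is broken into discrete parts that
# are solved concurrently on multiple processors.
from multiprocessing import Pool

def process(record):
    # one unit of work, identical to the serial version
    return record * record

def run_parallel(records, workers=4):
    # The Pool is the overall control/coordination mechanism: it
    # distributes parts to worker processes and gathers the results.
    with Pool(processes=workers) as pool:
        return pool.map(process, records)

if __name__ == "__main__":
    print(run_parallel([1, 2, 3, 4]))
```

Note that the result is the same as in the serial case; only the execution strategy changes. The `__main__` guard is required on platforms where worker processes re-import the script.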



The computational problem should be able to:
- Be broken apart into discrete pieces of work that can be solved simultaneously;
- Execute multiple program instructions at any moment in time;
- Be solved in less time with multiple compute resources than with a single compute resource.

The compute resources are typically:
- A single computer with multiple processors/cores
- An arbitrary number of such computers connected by a network

Parallel Computers

Virtually all stand-alone computers today are parallel from a hardware perspective:
- Multiple functional units (L1 cache, L2 cache, branch, prefetch, decode, floating-point, graphics processing (GPU), integer, etc.)
- Multiple execution units/cores
- Multiple hardware threads
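As a quick check of the claim above, you can query the hardware parallelism of an ordinary machine from the standard library (a small sketch; `sched_getaffinity` is Linux-specific):

```python
# Even an ordinary workstation is a parallel computer: the OS exposes
# multiple logical processors (cores x hardware threads).
import os

logical_cpus = os.cpu_count()          # logical processors visible to the OS
usable = len(os.sched_getaffinity(0))  # processors this process may run on (Linux only)
print(f"{logical_cpus} logical CPUs, {usable} usable by this process")
```

On a laptop this typically reports a handful of logical CPUs; on a cluster compute node, dozens or more.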


IBM BG/Q Compute Chip with 18 cores (PU) and 16 L2 Cache units (L2)


Networks connect multiple stand-alone computers (nodes) to make larger parallel computer clusters.



For example, the schematic below shows a typical parallel computer cluster:
- Each compute node is a multi-processor parallel computer in itself
- Multiple compute nodes are networked together with an InfiniBand network
- Special purpose nodes, also multi-processor, are used for other purposes

The majority of the world's large parallel computers (supercomputers) are clusters of hardware produced by a handful of (mostly) well-known vendors.




Why use Parallel Computing?

The Real World is Massively Complex

In the natural world, many complex, interrelated events are happening at the same time, yet within a temporal sequence. Compared to serial computing, parallel computing is much better suited for modeling, simulating and understanding complex, real-world phenomena. For example, imagine modeling these serially:


Main Reasons for Using Parallel Programming

SAVE TIME AND/OR MONEY
- In theory, throwing more resources at a task will shorten its time to completion, with potential cost savings.
- Parallel computers can be built from cheap, commodity components.

SOLVE LARGER / MORE COMPLEX PROBLEMS
- Many problems are so large and/or complex that it is impractical or impossible to solve them using a serial program, especially given limited computer memory.
- Example: web search engines/databases processing millions of transactions every second

PROVIDE CONCURRENCY
- A single compute resource can only do one thing at a time, whereas multiple compute resources can do many things simultaneously.
- Example: Collaborative Networks provide a global venue where people from around the world can meet and conduct work "virtually".


TAKE ADVANTAGE OF NON-LOCAL RESOURCES
- Use compute resources on a wide area network, or even the Internet, when local compute resources are scarce or insufficient.