Data parallelism


Data parallelism is parallelization across multiple processors in parallel computing environments. It focuses on distributing the data across different nodes, which operate on the data in parallel. It can be applied to regular data structures like arrays and matrices by working on each element in parallel. It contrasts with task parallelism as another form of parallelism.
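As a concrete illustration of working on each element in parallel, here is a minimal sketch in C using OpenMP (one of the data-parallel programming models linked below). The arrays, their size, and their contents are invented for the example; the point is that every iteration applies the same operation to a different element, so the loop can be split across threads.

    #include <stdio.h>
    #include <omp.h>

    #define N 8   /* array size, chosen arbitrarily for the example */

    int main(void) {
        double a[N], b[N], c[N];

        for (int i = 0; i < N; i++) {
            a[i] = i;        /* arbitrary example data */
            b[i] = 2.0 * i;
        }

        /* Data parallelism: the same operation runs on every element,
           and the iterations are independent, so OpenMP can distribute
           the index range across the available threads. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        for (int i = 0; i < N; i++)
            printf("c[%d] = %.1f\n", i, c[i]);
        return 0;
    }

Compiled with an OpenMP-capable compiler (e.g., gcc -fopenmp), each thread handles a disjoint slice of the indices; a task-parallel program would instead assign different operations, not different slices of the data, to each thread.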

Depiction
Data Parallelism in matrix multiplication.png
Sequential vs. Data Parallel job execution.png
Has abstract
Data parallelism is parallelization across multiple processors in parallel computing environments. It focuses on distributing the data across different nodes, which operate on the data in parallel. It can be applied to regular data structures like arrays and matrices by working on each element in parallel. It contrasts with task parallelism as another form of parallelism. A data-parallel job on an array of n elements can be divided equally among all the processors. Suppose we want to sum all the elements of a given array and that a single addition takes Ta time units. Sequential execution takes n × Ta time units, since the elements are summed one after another. If we instead run the job as a data-parallel job on 4 processors, the time drops to (n/4) × Ta plus the overhead of merging the partial results, a speedup of roughly 4 over sequential execution (the merging overhead keeps it from being exactly 4). Note that the locality of data references plays an important part in evaluating the performance of a data-parallel programming model; it depends on the memory accesses performed by the program as well as on the size of the cache.
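A minimal sketch of that worked example follows, again in C with OpenMP; the 4 threads stand in for the 4 processors, and n and the array contents are invented. Each thread sums an n/4 slice in parallel, and the partial results are then merged, which is the merging overhead mentioned above.

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000   /* n, chosen arbitrarily for the example */
    #define P 4         /* the 4 processors from the worked example */

    int main(void) {
        static double a[N];
        double partial[P] = {0.0};

        for (int i = 0; i < N; i++)
            a[i] = 1.0;  /* arbitrary data; the values don't matter */

        /* Each of the P threads sums its own slice of roughly n/P
           elements into a private accumulator. */
        #pragma omp parallel num_threads(P)
        {
            int t = omp_get_thread_num();
            double s = 0.0;
            #pragma omp for
            for (int i = 0; i < N; i++)
                s += a[i];
            partial[t] = s;
        }

        /* Merging overhead: combine the P partial sums sequentially. */
        double sum = 0.0;
        for (int t = 0; t < P; t++)
            sum += partial[t];

        printf("sum = %.0f\n", sum);
        return 0;
    }

An OpenMP reduction(+:sum) clause would express the same pattern more compactly; the explicit partial sums are used here only to make the merge step visible.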
Hypernym
Form
Is primary topic of
Data parallelism
Label
Data parallelism
Link from a Wikipage to an external page
dx.doi.org/10.1145/7902.7903
Link from a Wikipage to another Wikipage
Active message
C*
Category:Articles with example pseudocode
Category:Parallel computing
Central processing unit
Circuit simulation
Communications of the ACM
Concurrency (computer science)
Connection Machine
CUDA
Daniel Hillis
File:Data Parallelism in matrix multiplication.png
File:Sequential vs. Data Parallel job execution.png
Graphics processing unit
Guy Steele
Instruction level parallelism
Load balancing (computing)
Locality of reference
Matrix multiplication
Message Passing Interface
OpenACC
OpenMP
Parallel computing
Parallel programming model
Pseudocode
RaftLib
Scalable parallelism
Shared memory
SIMD
SPMD
Task parallelism
Threading Building Blocks
Thread level parallelism
Vector processor
SameAs
2u8Po
Data parallelism
Data parallelism
m.028b1y4
Memorie paralelă
Paral·lelisme de Dades
Paralelismo de datos
Parallélisme de donnée
Q3124522
Паралелизам података
Паралелізм даних
දත්ත සමාන්තරතාව
データ並列性
資料平行
Subject
Category:Articles with example pseudocode
Category:Parallel computing
Thumbnail
Sequential vs. Data Parallel job execution.png?width=300
WasDerivedFrom
Data parallelism?oldid=1115614339&ns=0
WikiPageLength
15144
Wikipage page ID
9467420
Wikipage revision ID
1115614339
WikiPageUsesTemplate
Template:ISBN
Template:Parallel Computing
Template:Reflist
Template:Short description
Template:Var