Deep learning processor
A deep learning processor (DLP), or a deep learning accelerator, is an electronic circuit designed for deep learning algorithms, usually with separate data memory and a dedicated instruction set architecture. Deep learning processors range from mobile devices, such as neural processing units (NPUs) in Huawei cellphones, to cloud computing servers such as tensor processing units (TPUs) in the Google Cloud Platform.
- Has abstract
- A deep learning processor (DLP), or a deep learning accelerator, is an electronic circuit designed for deep learning algorithms, usually with separate data memory and a dedicated instruction set architecture. Deep learning processors range from mobile devices, such as neural processing units (NPUs) in Huawei cellphones, to cloud computing servers such as tensor processing units (TPUs) in the Google Cloud Platform. The goal of DLPs is to provide higher efficiency and performance for deep learning algorithms than general-purpose central processing units (CPUs) and graphics processing units (GPUs) would. Most DLPs employ a large number of computing components to leverage high data-level parallelism, a relatively large on-chip buffer/memory to leverage data reuse patterns, and limited data-width operators that exploit the error resilience of deep learning. Deep learning processors differ from AI accelerators in that they are specialized for running learning algorithms, while AI accelerators are typically more specialized for inference. However, the two terms (DLP vs AI accelerator) are not used rigorously, and there is often overlap between the two.
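The limited data-width operators mentioned in the abstract can be illustrated with a software sketch: deep learning tolerates reduced numeric precision, so DLPs often compute multiply–accumulate operations on narrow integers (e.g. int8) with a wider accumulator. The sketch below is a hypothetical illustration in NumPy, not the method of any particular processor; the helper names and scaling scheme (simple symmetric max-abs quantization) are assumptions for the example.

```python
import numpy as np

def quantize_int8(x, scale):
    """Map float values onto the limited int8 data width
    (hypothetical helper, symmetric quantization)."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def int8_dot(qa, qb, scale_a, scale_b):
    """Multiply-accumulate on int8 inputs with a wide int32
    accumulator, then rescale the result back to float."""
    acc = np.dot(qa.astype(np.int32), qb.astype(np.int32))
    return float(acc) * (scale_a * scale_b)

rng = np.random.default_rng(0)
x = rng.standard_normal(64).astype(np.float32)
w = rng.standard_normal(64).astype(np.float32)

# Per-vector scales chosen so the largest magnitude maps to 127.
sx = float(np.abs(x).max()) / 127
sw = float(np.abs(w).max()) / 127
qx, qw = quantize_int8(x, sx), quantize_int8(w, sw)

approx = int8_dot(qx, qw, sx, sw)
exact = float(np.dot(x, w))
print(exact, approx)
```

Because the int8 products are summed in int32, the narrow inputs lose some precision but the accumulation does not overflow, and the rescaled result stays close to the full-precision dot product, which is the error-resilience property DLPs rely on.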
- Is primary topic of
- Deep learning processor
- Label
- Deep learning processor
- Link from a Wikipage to an external page
- github.com/baidu-research/DeepBench
- dawn.cs.stanford.edu/benchmark/
- www.benchcouncil.org/AIBench/index.html
- mlperf.org
- Link from a Wikipage to another Wikipage
- AI accelerator
- Category:Computer optimization
- Category:Deep learning
- Central processing unit
- Cerebras
- Cloud computing
- Complex instruction set computer
- Computer memory
- Deep learning
- Electronic circuit
- Field-effect transistor
- Field-programmable gate array
- Floating-gate
- Frequency comb
- Google Cloud Platform
- Graphics processing unit
- Hardware accelerator
- Huawei
- In-memory processing
- Instruction set architecture
- Molybdenum disulphide
- Multiplexing
- Multiply–accumulate operation
- Photonic
- Photonic integrated circuit
- Photonics
- Semiconductors
- Single instruction, multiple data
- Single instruction, multiple threads
- Tensor processing unit
- Very long instruction word
- Wavelength
- SameAs
- C5SQi
- Q96376128
- Процессор глубокого обучения
- 深度学习处理器
- Subject
- Category:Computer optimization
- Category:Deep learning
- WasDerivedFrom
- Deep learning processor?oldid=1103096895&ns=0
- WikiPageLength
- 22772
- Wikipage page ID
- 64152617
- Wikipage revision ID
- 1103096895
- WikiPageUsesTemplate
- Template:Hardware acceleration
- Template:Reflist
- Template:Short description