Convolutional neural network
In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of artificial neural network (ANN), most commonly applied to analyze visual imagery. CNNs are also known as Shift Invariant or Space Invariant Artificial Neural Networks (SIANN), based on the shared-weight architecture of the convolution kernels or filters that slide along input features and provide translation-equivariant responses known as feature maps. Counter-intuitively, most convolutional neural networks are not invariant to translation, due to the downsampling operation they apply to the input. They have applications in image and video recognition, recommender systems, image classification, image segmentation, medical image analysis, natural language processing, brain–computer interfaces, and financial time series.
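The equivariance claim and the caveat about downsampling are easy to see in a toy computation. Below is a minimal NumPy sketch (the signal `x`, kernel `k`, and helper `conv1d_valid` are illustrative, not from the article): sliding a shared kernel along the input shifts the feature map exactly as the input shifts, while stride-2 subsampling of that feature map does not.

```python
import numpy as np

def conv1d_valid(x, k):
    """Slide kernel k along input x ('valid' cross-correlation)."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

x = np.array([0., 0., 3., 1., 0., 0., 0., 0.])    # toy 1-D input feature
k = np.array([1., -1.])                           # toy edge-like filter

y = conv1d_valid(x, k)                    # feature map of x
y_shift = conv1d_valid(np.roll(x, 1), k)  # feature map of x shifted right by 1

# Equivariance: shifting the input shifts the feature map by the same amount.
print(np.allclose(y[:-1], y_shift[1:]))   # True

# Stride-2 downsampling breaks exact invariance: the subsampled maps differ.
print(y[::2])        # [ 0.  2.  0.  0.]
print(y_shift[::2])  # [ 0. -3.  1.  0.]
```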
- Date
- December 2018
- Has abstract
- In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of artificial neural network (ANN), most commonly applied to analyze visual imagery. CNNs are also known as Shift Invariant or Space Invariant Artificial Neural Networks (SIANN), based on the shared-weight architecture of the convolution kernels or filters that slide along input features and provide translation-equivariant responses known as feature maps. Counter-intuitively, most convolutional neural networks are not invariant to translation, due to the downsampling operation they apply to the input. They have applications in image and video recognition, recommender systems, image classification, image segmentation, medical image analysis, natural language processing, brain–computer interfaces, and financial time series. CNNs are regularized versions of multilayer perceptrons. Multilayer perceptrons usually mean fully connected networks, that is, each neuron in one layer is connected to all neurons in the next layer. The "full connectivity" of these networks makes them prone to overfitting data. Typical ways of regularization, or preventing overfitting, include penalizing parameters during training (such as weight decay) or trimming connectivity (skipped connections, dropout, etc.). CNNs take a different approach towards regularization: they take advantage of the hierarchical pattern in data and assemble patterns of increasing complexity using smaller and simpler patterns embossed in their filters. Therefore, on a scale of connectivity and complexity, CNNs are on the lower extreme. Convolutional networks were inspired by biological processes in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field. CNNs use relatively little pre-processing compared to other image classification algorithms. This means that the network learns to optimize the filters (or kernels) through automated learning, whereas in traditional algorithms these filters are hand-engineered. This independence from prior knowledge and human intervention in feature extraction is a major advantage.
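The regularization-by-architecture point can be made concrete with a parameter count. A minimal sketch, assuming a CIFAR-10-sized 32×32 RGB input and 16 output channels (sizes are hypothetical, chosen only for illustration): a fully connected layer needs tens of millions of weights, while a 3×3 convolutional layer shares a few hundred.

```python
# Parameter count: fully connected layer vs. convolutional layer on the
# same input. Sizes are hypothetical (a CIFAR-10-like 32x32 RGB image).
H, W, C_in, C_out = 32, 32, 3, 16
k = 3                                            # 3x3 shared kernel

# Fully connected: every input unit is wired to every output unit.
fc_params = (H * W * C_in) * (H * W * C_out)     # 50,331,648 weights

# Convolutional: one small kernel per (input, output) channel pair,
# reused at every spatial position (weight sharing).
conv_params = k * k * C_in * C_out + C_out       # 432 weights + 16 biases

print(f"fully connected: {fc_params:,}")         # 50,331,648
print(f"convolutional:   {conv_params:,}")       # 448
```

The reduction comes from connecting each output unit only to a small local receptive field and reusing the same weights everywhere, which is the sparse, hierarchical connectivity the abstract describes.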
- Hypernym
- Network
- Is primary topic of
- Convolutional neural network
- Label
- Convolutional neural network
- Link from a Wikipage to an external page
- www.completegate.com/2017022864/blog/deep-machine-learning-images-lenet-alexnet-cnn/all-pages
- cs231n.github.io/
- ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/
- Link from a Wikipage to another Wikipage
- 3D scanner
- Activation function
- Affine transformation
- AlexNet
- Alex Waibel
- Aliasing
- AlphaGo
- Amos Storkey
- Andrej Karpathy
- Anti-aliasing filter
- Apache License
- Apache Spark
- Aromaticity
- Artificial neural network
- Artificial neuron
- Atari 2600
- Attention (machine learning)
- Average
- Backpropagation
- Biological
- Biomarkers of aging
- Biomolecule
- Blondie24
- Boltzmann machine
- Brain–computer interface
- C (programming language)
- C++
- Caffe (software)
- Capsule neural network
- Category:Computational neuroscience
- Category:Computer vision
- Category:Neural network architectures
- Channel (digital image)
- Chinook (draughts player)
- CIFAR-10
- Clay tablet
- Complex cell
- Compute kernel
- Computer Go
- Computer vision
- Conformal prediction
- Convolution
- Coprocessor
- Cortical neuron
- CPU
- Cross entropy
- Cross-validation (statistics)
- C Sharp (programming language)
- CUDA
- Cuneiform
- Curse of dimensionality
- Curvature
- Data augmentation
- Database
- Data loss
- David B. Fogel
- David H. Hubel
- Decision boundary
- Deep belief network
- DeepDream
- Deep learning
- Deeplearning4j
- Deep neural network
- Deformation theory
- Deterministic algorithm
- Dimensionality reduction
- Dlib
- Dot product
- Downsampling (signal processing)
- Draughts
- Dropout (neural networks)
- Drug discovery
- Ebola virus
- Elastic deformation
- Elastic net regularization
- Electromyography
- Equivariant map
- Euclidean distance
- Euclidean norm
- Expected value
- Facial recognition system
- Feature (machine learning)
- Feature engineering
- File:Comparison image neural networks.svg
- File:Conv layer.png
- File:Conv layers.png
- File:Max pooling.png
- File:Neural Abstraction Pyramid.jpg
- File:RoI pooling animated.gif
- File:Typical cnn.png
- Filter (signal processing)
- Free parameter
- Frobenius inner product
- Generalization (learning)
- GigaMesh Software Framework
- GNU Go
- GoogLeNet
- Go ranks and ratings
- GPGPU
- GPU
- Graphics processing unit
- Ground truth
- Hydrogen bond
- Hyperbolic tangent
- Hyperparameter (machine learning)
- Hyperparameter optimization
- IDSIA
- Ill-posed problem
- Image classification
- ImageNet Large Scale Visual Recognition Challenge
- Image recognition
- Image segmentation
- Integer
- Intel Xeon Phi
- Intersection (set theory)
- Java (programming language)
- Kernel (image processing)
- Kunihiko Fukushima
- L1-norm
- L2 norm
- Layer (deep learning)
- Locality of reference
- Long short-term memory
- Loss function
- Lua (programming language)
- Machine learning
- Mammography
- Mathematical biology
- MATLAB
- Matrix multiplication
- Maximum
- Max pooling
- Medical image computing
- Memory footprint
- Microsoft Cognitive Toolkit
- MNIST
- MNIST database
- Monte Carlo tree search
- Multilayer perceptron
- Multinomial distribution
- Multiple sclerosis
- National Health and Nutrition Examination Survey
- Natural language processing
- Natural-language processing
- Neocognitron
- Nonlinear filter
- Nonlinearity (journal)
- NumPy
- Nyquist–Shannon sampling theorem
- Object detection
- Orbital hybridisation
- Organisms
- Overfitting
- Partition of a set
- Per-comparison error rate
- Precision and recall
- Proportional hazards model
- Protein
- Python (programming language)
- Q-learning
- Real number
- Receptive field
- Recommender system
- Rectifier (neural networks)
- Recurrent neural network
- Recurrent neural networks
- Region of interest
- Regression (machine learning)
- Regularization (mathematics)
- Reinforcement learning
- Retina
- RGB color model
- RGB images
- Root mean square error
- Safety-critical system
- Salience (neuroscience)
- Scala (programming language)
- Scale-invariant feature transform
- Scientific computing
- Self-driving car
- Semantic parsing
- Sigmoid function
- SIMD
- Simple cell
- Softmax function
- Sparse approximation
- Sparse network
- Spatial locality
- Stanford University
- Stride of an array
- Structure-based drug design
- Syllable
- Symmetry
- Tensor
- TensorFlow
- Tensor processing unit
- Text-to-Video model
- Theano (software)
- Three-dimensional space
- Time delay neural network
- Time series
- Torch (machine learning)
- Torsten Wiesel
- Training
- Transfer learning
- Translational symmetry
- Translation invariance
- Translation invariant
- Unsupervised learning
- Vector addition
- Video quality
- Vision processing unit
- Visual cortex
- Visual field
- Visual spatial attention
- Visual system
- Visual temporal attention
- Yann LeCun
- Zero norm
- ZIP Code
- SameAs
- Convolutional neural network
- Convolutional neural network
- Convolutional Neural Network
- Evrişimli sinir ağları
- f9QB
- Konvoliucinis neuroninis tinklas
- Konvolutsiooniline närvivõrk
- m.0x2dbhq
- Mạng thần kinh tích chập
- Q17084460
- Rede neural convolucional
- Red neuronal convolucional
- Réseau neuronal convolutif
- Rete neurale convoluzionale
- Xarxa neuronal convolucional
- Згорткова нейронна мережа
- Конволуцијске неуронске мреже
- Свёрточная нейронная сеть
- רשת קונבולוציה
- شبكة عصبونية التفافية
- شبکه عصبی پیچشی
- 卷积神经网络
- 畳み込みニューラルネットワーク
- 합성곱 신경망
- Subject
- Category:Computational neuroscience
- Category:Computer vision
- Category:Neural network architectures
- WasDerivedFrom
- Convolutional neural network?oldid=1123805333&ns=0
- WikiPageLength
- 117574
- Wikipage page ID
- 40409788
- Wikipage revision ID
- 1123805333
- WikiPageUsesTemplate
- Template:Citation needed
- Template:Clarify
- Template:Example needed
- Template:Lang-en-GB
- Template:Machine learning
- Template:Main
- Template:More citations needed
- Template:More citations needed section
- Template:Other uses
- Template:Reflist
- Template:Rp
- Template:Short description
- Template:TOC limit
- Template:When
- Template:Which