
Science-fiction authors and modern engineering mega-corporations agree on one thing: artificial intelligence is the future. Everyone from Google to Facebook is designing artificial neural networks to tackle big problems like computer vision and speech synthesis. Most of these projects use existing computer hardware, but Intel has something big on the way. The chip maker has announced the first dedicated neural network processor, the Intel Nervana Neural Network Processor (NNP).

A neural network is designed to process data and solve problems in a manner that's more like a brain. It consists of layers of artificial neurons that process inputs and pass the data down the line to the next neurons in the network. At the end, you have an output that's informed by all the transformations applied by the network, which is more efficient than brute-force computation. These systems can learn over time by training with large batches of data. This is how Google perfected the AlphaGo network that managed to defeat the best human Go players in the world.
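To make the "layers of neurons passing data down the line" idea concrete, here is a minimal sketch (illustrative only, not Intel's or Google's code) of one fully connected layer: each neuron weights its inputs, sums them, and applies a nonlinearity before handing the result to the next layer.

```python
def relu(x):
    # A common activation function: keeps positive signals, zeroes out the rest.
    return max(0.0, x)

def layer_forward(inputs, weights, biases):
    """Compute the outputs of one fully connected layer.

    weights is a list of per-neuron weight lists; biases has one entry
    per neuron. All values here are made up for illustration.
    """
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(relu(total))
    return outputs

# Two inputs feeding a layer of two neurons.
print(layer_forward([1.0, 2.0],
                    [[0.5, -0.25], [1.0, 1.0]],
                    [0.0, -0.5]))
# prints [0.0, 2.5]
```

In a real network, training adjusts the weights and biases over many batches of data until the final layer's output matches the desired answer.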

The Nervana NNP is designed from the ground up with this type of computing in mind. It is what's known as an application-specific integrated circuit (ASIC), so it's not useful for general computing tasks. However, if you're trying to run or train a neural network, Nervana could be many times faster than existing hardware. Nervana will be good at matrix multiplication, convolutions, and other mathematical operations used in neural networks.
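Matrix multiplication is simple to state but dominates a neural network's compute budget, which is why an ASIC targets it. A plain-Python sketch of the operation that hardware like the NNP would perform in parallel silicon:

```python
def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p), given as nested lists.

    Each output cell is the dot product of a row of a with a column of b;
    on dedicated hardware, these dot products run in parallel.
    """
    n = len(b)
    p = len(b[0])
    return [[sum(row[k] * b[k][j] for k in range(n)) for j in range(p)]
            for row in a]

# A small 2x2 example.
print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# prints [[19, 22], [43, 50]]
```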

Interestingly, there's no cache on the chip like you'd find on a CPU. Instead, Nervana will use a software-defined memory management system, which can adjust performance based on the needs of the neural network. Intel has also implemented its own numerical format called Flexpoint. This is less precise than regular integer math, but Intel says that's no problem for neural networks. They're naturally resistant to noise, and in some cases noise in the data can even help in training neurons. The lower precision also helps make the chip better at parallel computing, so the overall network can have higher bandwidth and lower latency.
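The general idea behind a reduced-precision format like this can be sketched as follows. This is a toy illustration, not Intel's actual Flexpoint specification: a block of values shares one exponent, and each value stores only a small integer mantissa, trading precision for cheaper arithmetic.

```python
import math

def quantize_block(values, mantissa_bits=8):
    """Encode a list of floats as integer mantissas plus one shared exponent.

    Toy scheme for illustration: pick the exponent so the largest value
    fits in the mantissa range, then round everything to that grid.
    """
    max_mag = max(abs(v) for v in values)
    exponent = math.ceil(math.log2(max_mag)) - (mantissa_bits - 1)
    scale = 2.0 ** exponent
    mantissas = [round(v / scale) for v in values]
    return mantissas, exponent

def dequantize_block(mantissas, exponent):
    """Recover approximate floats from the shared-exponent encoding."""
    scale = 2.0 ** exponent
    return [m * scale for m in mantissas]

vals = [0.5, -1.25, 3.0]
m, e = quantize_block(vals)
# The round trip is close to the originals, within quantization error.
print(dequantize_block(m, e))
```

Values that don't land exactly on the quantization grid come back slightly off; that rounding error is the "noise" the article says neural networks tolerate well.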

Intel is not alone in its quest to speed up neural networks. Google has developed cloud-based silicon called Tensor Processing Units, and Nvidia is pushing its GPUs as a solution to neural network processing. Facebook has gotten on board with Intel's hardware and made some contributions to the design. Intel says the Nervana NNP will ship by the end of 2022.
