Intel shows the first Neural Network Processor (NNP)

Posted on Wednesday, October 18 2017 @ 11:09 CEST by Thomas De Maesschalck
Intel doesn't want NVIDIA to capture the entire AI market, so the chip giant is rolling out a steady flow of AI news. Today, Intel revealed what it claims is the world's first neural network processor (NNP). These chips come from Intel's Nervana unit and are designed specifically for matrix multiplication and convolutions. An overview of what makes the NNP special can be read over here.
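
To give a rough sense of the two operations mentioned above, here is a minimal NumPy sketch of a dense matrix multiply and a naive 2D convolution. The function names, shapes, and CPU-bound implementation are purely illustrative assumptions; they say nothing about Intel's actual hardware or software interfaces.

# Illustrative sketch (not Intel code): the two primitives the NNP targets,
# expressed with NumPy. A real deep learning stack would dispatch these to
# dedicated hardware instead of running them on the CPU.
import numpy as np

def matmul(a, b):
    """Dense matrix multiplication: the core of fully connected layers."""
    return a @ b

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution: the core of convolutional layers."""
    h, w = kernel.shape
    out_h = image.shape[0] - h + 1
    out_w = image.shape[1] - w + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Multiply-accumulate over one kernel-sized window of the image.
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

# Tiny usage example with hypothetical shapes.
activations = np.random.rand(4, 8)
weights = np.random.rand(8, 3)
print(matmul(activations, weights).shape)   # (4, 3)

img = np.random.rand(28, 28)
k = np.random.rand(3, 3)
print(conv2d(img, k).shape)                 # (26, 26)

Both operations boil down to long chains of multiply-accumulate work, which is why a chip built around dense linear algebra can outrun general-purpose hardware on these workloads.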

The Intel Nervana Neural Network Processor is expected to ship before the end of the year. Multiple generations of this chip are in the pipeline; Intel says the goal is to improve AI performance by more than a factor of 100 by 2020. The NNP was designed from the ground up and, unlike GPUs, is free from limitations imposed by existing hardware that wasn't explicitly designed for AI in the first place.
Intel's announcement explains the motivation behind the new chip:

As our Intel CEO Brian Krzanich discussed earlier today at Wall Street Journal’s D.Live event, Intel will soon be shipping the world’s first family of processors designed from the ground up for artificial intelligence (AI): the Intel® Nervana™ Neural Network Processor family (formerly known as “Lake Crest”). This family of processors is over 3 years in the making, and on behalf of the team building it, I’d like to share a bit more insight on the motivation and design behind the world’s first neural network processor.

Machine Learning and Deep Learning are quickly emerging as the most important computational workloads of our time. These methods allow us to extract meaningful insights from data. We’ve been listening to our customers and applying changes to Intel’s silicon portfolio to deliver superior Machine Learning performance. Intel® Xeon® Scalable Processors and Intel data center accelerators are powering the vast majority of general purpose Machine Learning and inference workloads for businesses today. We continue to optimize these product lines to support our customers’ evolving data processing needs. The computational needs of Deep Learning have uncovered the need for new thinking around the hardware required to support AI computations. We have responded to this by listening to the silicon and designing a new chip for Deep Learning called the Intel® Nervana™ Neural Network Processor (Intel® Nervana™ NNP).

The Intel Nervana NNP is a purpose built architecture for deep learning. The goal of this new architecture is to provide the needed flexibility to support all deep learning primitives while making core hardware components as efficient as possible.

We designed the Intel Nervana NNP to free us from the limitations imposed by existing hardware, which wasn’t explicitly designed for AI.
This is not the only horse Intel is betting on. While much of the market is turning to GPUs, Intel is pitching not only this NNP but also its Myriad X, Xeon Phi, and FPGAs for certain use cases.

Intel Neural Network Processor


About the Author

Thomas De Maesschalck

Thomas has been messing with computers since early childhood and firmly believes the Internet is the best thing since sliced bread. Enjoys playing with new tech, is fascinated by science, and passionate about financial markets. When not behind a computer, he can be found with running shoes on or lifting heavy weights in the weight room.


