Intel has unveiled its Nervana Neural Network Processor (NNP) family at the WSJDLive global technology conference, with the first chips shipping by the end of 2017. The NNP family is not aimed at the average PC but is dedicated to Artificial Intelligence workloads.
Tech giants including Google, Facebook, and Apple are shifting their focus toward staying ahead of the curve in machine learning. Intel does not plan to be left behind in this AI race, and aims to pull level with Nvidia. “We are thrilled to have Facebook in close collaboration sharing its technical insights as we bring this new generation of AI hardware to market,” said Intel CEO Brian Krzanich.
According to Intel, the NNP uses high-capacity, high-speed, high-bandwidth memory that provides the maximum level of on-chip storage and hyper-fast memory access. Nervana also includes bi-directional high-bandwidth links that enable interconnection between ASICs. The hardware features separate pipelines for computation and data management, so new data is readily available for computation.
Intel has also adopted a different memory subsystem: it eliminates the traditional cache hierarchy and manages all on-chip memory through software, which determines how memory is allocated. Intel says the NNP will bring significant changes to the parallelization of neural networks while reducing the power required for computation.
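To make the idea of software-managed on-chip memory concrete, here is a toy sketch of the difference from a hardware cache: instead of the hardware deciding what stays on chip, a compiler or runtime plans exactly where each tensor lives before the computation runs. Every name and size below is illustrative; this is not Intel's actual software stack or API.

```python
# Hypothetical sketch: software-managed on-chip memory (a "scratchpad"),
# as opposed to a hardware-managed cache. All names/sizes are illustrative.

class ScratchpadAllocator:
    """Toy bump allocator: software decides exactly where each tensor lives."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.offset = 0            # next free byte on the scratchpad
        self.allocations = {}      # tensor name -> (offset, size)

    def allocate(self, name, size_bytes):
        # Unlike a cache, nothing is evicted implicitly: if the plan does
        # not fit, the software must restructure the computation.
        if self.offset + size_bytes > self.capacity:
            raise MemoryError(f"scratchpad full: cannot place {name}")
        self.allocations[name] = (self.offset, size_bytes)
        self.offset += size_bytes
        return self.allocations[name]

    def free_all(self):
        # Software also decides when regions are reclaimed.
        self.offset = 0
        self.allocations.clear()

# A compiler pass would plan placement ahead of time, so operands are
# already resident on-chip when the compute pipeline needs them.
pad = ScratchpadAllocator(capacity_bytes=1024)
weights = pad.allocate("weights", 512)
activations = pad.allocate("activations", 256)
print(weights)       # (0, 512)
print(activations)   # (512, 256)
```

The point of the sketch is that placement decisions, which a cache makes reactively at runtime, become explicit and predictable, which is what makes it possible to keep data ready ahead of the computation.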
“We have multiple generations of Intel Nervana NNP products in the pipeline that will deliver higher performance and enable new levels of scalability for AI models,” said Naveen Rao, the CEO and co-founder of Nervana, which was acquired by Intel in August 2016. “We are gonna look back in 10 years and really see this was a pivotal point in the history of computation that processors change to focus on neural networks,” he added.