Intel Nervana AI Chip Takes on NVIDIA for Machine Learning
In an attempt to regain relevance in the quickly growing artificial intelligence and machine learning markets, Intel has just announced a new product family called Nervana, centered on a unique architectural design. The Nervana Neural Network Processor, or NNP for short, targets data center and enterprise customers looking to accelerate AI training times while offering superior efficiency and lower power consumption.
Intel's competition in this market has been able to take advantage of high demand and incredible marketability to build brands and establish leadership. NVIDIA is the obvious winner in this space, using its graphics processor technology to make the initial leap into machine learning and steadily optimizing its hardware and designs to target AI and deep learning more directly. Most recently, NVIDIA has added Tensor Cores (specialized units that are even more efficient for some machine learning algorithms) directly onto its graphics chips.
Other hardware providers like Apple, ARM, Qualcomm, and even AMD have been taking aim at the machine learning markets, from both consumer and enterprise perspectives, and have put forward products and roadmaps to address the future of workload demands. Google has gone as far as designing its own chip, the TPU (Tensor Processing Unit), which will be at the heart of some of the organization's most important artificial intelligence efforts.
Though Intel has often talked up the machine learning capability of its current server processor family, known as Xeon Scalable, most analysts understood that the existing x86 processor architecture was not well suited to those tasks. You can model and test AI and machine learning workloads on x86, but the most power- and cost-efficient systems for high-performance demands require parallel computing, a task that closely mirrors graphics and gaming workloads, giving NVIDIA the early advantage it now holds.
Intel purchased Nervana in August 2016 with the promise of a new architecture targeting AI workloads. This architecture includes some impressive technical changes, such as a software-defined memory design (allowing flexibility as algorithms and workload demands change), scalability enhancements to improve total calculation throughput, and even a new numeric format called Flexpoint (distinct from standard integer and floating point) that accelerates processing.
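Intel has described Flexpoint publicly as a block floating-point scheme: a tensor of integer mantissas that share a single exponent, so the hardware can do cheap integer arithmetic while retaining a float-like dynamic range. The sketch below illustrates that general idea in Python; it is an assumption-laden toy (the encoding choices, bit width, and function names are mine), not Intel's actual format or implementation.

```python
import numpy as np

def to_flex(values, mantissa_bits=16):
    """Encode a tensor as integer mantissas plus one shared exponent.
    Illustrative block floating point in the spirit of Flexpoint;
    NOT Intel's actual encoding."""
    max_mag = np.max(np.abs(values))
    if max_mag == 0:
        return np.zeros(values.shape, dtype=np.int32), 0
    # Pick the shared exponent so the largest magnitude fits in the
    # signed mantissa range.
    limit = 2 ** (mantissa_bits - 1) - 1
    exp = int(np.ceil(np.log2(max_mag / limit)))
    mantissas = np.round(values / 2.0 ** exp).astype(np.int32)
    return mantissas, exp

def from_flex(mantissas, exp):
    """Decode the integer mantissas back to floating point."""
    return mantissas.astype(np.float64) * 2.0 ** exp

x = np.array([0.5, -1.25, 3.0, 0.001])
m, e = to_flex(x)          # integer mantissas + one shared exponent
x_hat = from_flex(m, e)    # close to x; small values lose precision
```

The appeal of such a scheme is that the multiply-accumulate hardware only ever sees integers, with the exponent handled once per tensor rather than once per element, which is where the claimed efficiency comes from.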
Previously known as "Lake Crest", the Nervana NNP will not be a simple drop-in solution for current software developers and systems, as the new architecture requires software to be specifically written and compiled for it. Intel claims to have been working with Facebook during the development of Nervana to "share their technical insights", though details are sparse. In all likelihood, Facebook's AI and machine learning expertise, built through years of software development, helped guide Intel's hardware engineering. It also suggests that Facebook will be among the earliest adopters of the new Intel hardware when it becomes widely available.
Without performance claims or metrics from Intel or third parties, it is impossible to know just how much of an improvement Nervana offers for machine learning workloads over any existing hardware from NVIDIA, Google, or others. I would have thought if Intel had specific claims for the NNP over existing hardware, it would have made them during this announcement. Without that information, we have to assume that Intel is still in a developmental stage in its AI processor portfolio and that it expects the Nervana acquisition to pay dividends further down the roadmap.
Intel has not fared well when extending outside its core architecture and product lines in recent history. It attempted to build a discrete GPU called Larrabee that never saw production and was instead rolled into a parallel compute add-in product. Moves to displace Qualcomm with mobile-friendly chips resulted in only a handful of design wins before Intel bowed out of the market. Its push into IoT to compete with low-power options from ARM was met with tepid interest and has since been canceled. The current drive into artificial intelligence compute, however, shares more in common with Intel's established market segments, and the financial stakes are high enough that Intel is likely to persist and make an impact in the field.
To stay relevant, Intel must continue to iterate on and discuss its AI and machine learning compute plans, but it needs to prove performance advantages over competing giants like NVIDIA to make inroads.