PC World has published an interview with Diane Bryant, the head of Intel's data center business; you can read the full piece over here. Part of the conversation focused on GPUs and how Intel can compete in the machine learning market without one. Bryant replied that a GPU is just another type of accelerator and pointed out that Intel's Xeon Phi (Knights Landing) co-processor is well suited for this work. Intel has no major clients in the machine learning market, but Bryant noted that this market is still very small, representing less than 1 percent of all servers shipped in 2015:
She concedes that Nvidia gained an early lead in the market for accelerated HPC workloads when it positioned its GPUs for that task several years ago. But since the release of the first Xeon Phi in 2014, she says, Intel now has 33 percent of the market for HPC workloads that use a floating point accelerator.
“So we’ve won share against Nvidia, and we’ll continue to win share,” she said.
Intel’s share of the machine learning business may be much smaller, but Bryant is quick to note that the market is still young.
“Less than 1 percent of all the servers that shipped last year were applied to machine learning, so to hear [Nvidia is] beating us in a market that barely exists yet makes me a little crazy,” she says.
There are some differences between Knights Landing and NVIDIA's GP100, though. NVIDIA's GPUs are harder to program and still need to be paired with a regular Xeon processor to boot an OS, as they are not self-booting like Knights Landing. On the other hand, NVIDIA's GP100 delivers about 5 teraflops of double-precision compute performance, versus roughly 3 teraflops for Intel's newest Xeon Phi.