Intel takes a jab at AMD's patching together of small dies

Posted on Wednesday, October 03 2018 @ 15:18 CEST by Thomas De Maesschalck
In a new blog post, Intel fellow and chief architect Guy Therien reveals the chip giant will stick with monolithic dies for its high-core-count processors. While AMD isn't specifically named, Therien suggests the monolithic approach is better than "patching together a group of small dies", as AMD does with its Threadripper and EPYC CPUs. He notes monolithic dies have lower latency and offer less performance variability across workloads:
Intel Innovator: Guy Therien, Intel fellow and chief architect for performance segmentation in Intel’s Client Computing Group, leads a team focused on increasing the performance of the company’s client processors through design and manufacturing enhancements as well as new software optimizations.

How he’d describe his job to a 10-year-old: “I come up with ways to make new computers work noticeably better than older computers such that when you try a new one, you really want to take it home.”

The never-ending quest: A big part of Guy’s job — to increase the performance of Intel’s PC processors — is never done. He and his team are constantly “coming up with new things and squeezing the most out of current technologies in order to get maximum performance,” Guy explains. The many variables that govern the addition of a new feature — the time and cost to develop, the cost and layout of area on the chip, and overall desirability and longevity — make for “a fun puzzle,” Guy says.

Just add cores? Only two years ago, the top end of the consumer desktop processor market was a then-astonishing 10 cores, and it has increased since then. Which raises the question: Are more cores always better? For most consumers today, Guy says, the answer is no, for the simple reason that most applications, including many gaming titles and the everyday office and productivity apps that the vast majority of people use, are not programmed, or “threaded,” to make use of a high number of cores to deliver benefits to consumers. Analysis of client workloads shows that the lion’s share of today’s applications are not programmed to thread across more than 10 cores. “We make products that aim at delivering the best experience and performance across PC segments, whatever computing needs people have – from gaming and content creation to high-end workstation needs,” Guy says. “So whatever number of cores are needed for the different workloads consumers use, we’re going to provide them with the best, fan-freaking-tastic cores to meet their needs.”
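The diminishing returns Guy describes follow directly from Amdahl's law: once an application's serial portion dominates, extra cores add little. As a rough illustration only (the 90% parallel fraction below is a made-up figure, not one from Intel's workload analysis):

```python
# Illustrative sketch of Amdahl's law, showing why an application that
# isn't heavily threaded stops benefiting from additional cores.
# The 0.9 parallel fraction is a hypothetical assumption.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup of a workload run on `cores` cores."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for n in (1, 2, 4, 10, 20, 40):
    print(f"{n:2d} cores -> {amdahl_speedup(0.9, n):.2f}x")
```

Even with 90% of the work parallelizable, going from 10 to 40 cores only lifts the theoretical speedup from about 5.3x to about 8.2x, which is why core count alone is a poor headline number for typical desktop software.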

If more cores are not needed, “then actually it’s a negative to have a large number of cores.” More cores means more heat, Guy points out. At a set thermal envelope, or TDP, the performance ceiling of each core is lowered to keep total heat in check, slowing those typical applications.
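The trade-off Guy points out can be shown with a toy model (this is not Intel's actual power-management algorithm, and the wattages are invented): when the package power limit is fixed, each additional active core shrinks the budget left for every core, which in practice caps their sustained frequency.

```python
# Toy model only: divide a fixed, hypothetical TDP evenly across active
# cores to show why per-core power headroom shrinks as core count grows.

TDP_WATTS = 95.0  # hypothetical package power limit

def per_core_budget(tdp_watts: float, active_cores: int) -> float:
    """Even share of the package power budget per active core."""
    return tdp_watts / active_cores

for cores in (2, 4, 8, 16):
    print(f"{cores:2d} active cores -> {per_core_budget(TDP_WATTS, cores):.2f} W each")
```

Real processors budget power dynamically rather than evenly, but the direction of the effect is the same: more simultaneously active cores means less thermal headroom, and thus lower clocks, for each one.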

Not all cores are created equal: Guy and his team invented an ingenious way of delivering high-quality cores and realizing the full performance of each one. “Some cores have higher frequencies at the same voltages as others,” he explains, due to natural variability in the manufacturing process. Intel® Turbo Boost Max Technology 3.0, a feature within Intel® Core™ X-series processors, simply prioritizes applications that only need one or two cores onto those best-performing cores. That means the machine can handle demanding, multi-threaded applications, like editing a 360-degree video or rendering 3D effects, as well as lightly threaded applications, like everyday office apps, with equal dexterity.
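The favored-core idea can be sketched as follows. This is a simplification of what Turbo Boost Max 3.0 does (in reality the firmware reports per-core maximum turbo frequencies and the OS scheduler steers threads accordingly), and the frequencies below are invented for illustration:

```python
# Sketch of favored-core selection: given each core's rated maximum
# turbo frequency, steer a lightly threaded app to the fastest cores.
# Core IDs and MHz values are hypothetical.

core_max_mhz = {0: 4300, 1: 4500, 2: 4400, 3: 4500, 4: 4200, 5: 4300}

def favored_cores(max_mhz: dict, threads: int) -> list:
    """Return the IDs of the `threads` cores with the highest rated frequency."""
    ranked = sorted(max_mhz, key=max_mhz.get, reverse=True)
    return ranked[:threads]

# A two-thread app would be affinitized to the two fastest cores.
print(favored_cores(core_max_mhz, 2))  # -> [1, 3]
```

The point of the feature is that the same silicon serves both cases: heavy multi-threaded work spreads across all cores, while a single-threaded burst lands on the bin's best core instead of an arbitrary one.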

Go with the flow: Looking at the big picture, “We’re looking to provide people with optimal flow,” Guy says. The idea is that you should be able to run a constellation of different kinds of applications on your PC, and switch between them “without any hiccups. Anything that doesn’t work as expected is an interruption of flow.” User experience researchers and anthropologists at Intel are studying this “to understand the pain points that people experience,” Guy notes. “We’ll add features and capabilities to address those pain points over time.”

Up for the high-core chore: “There is a relatively small but important segment of the client computing market that can utilize more cores to build advanced workstations for optimal performance and efficiency in workloads such as 3D rendering, simulation or 360-degree video. We will offer higher core count products to meet their needs, and we’ll continue to strive to be the overall performance leader. Our approach to deliver high core counts is to use monolithic dies instead of patching together a group of small dies. This approach has the advantage of avoiding the latency you may have heard about in other high-core-count approaches. It also reduces the performance variability of workloads, as this group of consumers won’t accept any compromise and really cares about the consistent execution of workloads.”

Winning, and helping users win: What keeps Guy going on the performance quest? “I am gratified to be part of a team that comes up with things that allow us to deliver the best possible performance, amazing platforms in every PC segment, giving people experiences that they didn’t have before,” he says. “It’s always fun to develop those great new features and capabilities that are high-performance, that allow you to set records and get great scores on benchmarks, but also have real-world performance for the workloads and applications that ultimately benefit people across all types of segments.”


About the Author

Thomas De Maesschalck

Thomas has been messing with computers since early childhood and firmly believes the Internet is the best thing since sliced bread. Enjoys playing with new tech, is fascinated by science, and passionate about financial markets. When not behind a computer, he can be found with running shoes on or lifting heavy weights in the weight room.


