Researchers at the Center for Turbulence Research at Stanford University set a new record in computational science by harnessing, for the first time, a supercomputer with over 1 million processor cores. The
Sequoia IBM Bluegene/Q system, installed at Lawrence Livermore National Laboratory, employs a whopping 1,572,864 compute cores (processors) and 1.6 petabytes of memory connected by a high-speed five-dimensional torus interconnect, and was successfully used to solve a complex fluid dynamics problem. Full details at Stanford.
“Computational fluid dynamics (CFD) simulations, like the one Nichols solved, are incredibly complex. Only recently, with the advent of massive supercomputers boasting hundreds of thousands of computing cores, have engineers been able to model jet engines and the noise they produce with accuracy and speed,” said Parviz Moin, the Franklin M. and Caroline P. Johnson Professor in the School of Engineering and Director of CTR.
CFD simulations test all aspects of a supercomputer. The waves propagating throughout the simulation require a carefully orchestrated balance between computation, memory and communication. Supercomputers like Sequoia divvy up the complex math into smaller parts so they can be computed simultaneously. The more cores you have, the faster and more complex the calculations can be.
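The "divvy up" step described above is known as domain decomposition. The following minimal Python sketch (an illustration only, not the CTR solver) shows the idea: the simulation grid is split into chunks, each chunk is updated independently by a separate worker process, and the results are stitched back together.

```python
# Illustrative sketch of domain decomposition: split a 1-D "domain"
# into chunks and update each chunk in parallel, the way a CFD solver
# divides its grid among compute cores. The physics is replaced by a
# trivial stand-in update.
from concurrent.futures import ProcessPoolExecutor

def update_chunk(chunk):
    # Stand-in for the real physics: each worker advances its own
    # piece of the domain independently.
    return [x * 2 for x in chunk]

def split(domain, n_parts):
    # Divide the grid points as evenly as possible among the workers.
    size, rem = divmod(len(domain), n_parts)
    chunks, start = [], 0
    for i in range(n_parts):
        end = start + size + (1 if i < rem else 0)
        chunks.append(domain[start:end])
        start = end
    return chunks

if __name__ == "__main__":
    domain = list(range(12))
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = pool.map(update_chunk, split(domain, 4))
    # Stitch the updated chunks back into one domain.
    updated = [x for chunk in results for x in chunk]
    print(updated)
```

In a real CFD code the chunks are not independent: neighboring chunks must exchange boundary values every time step, which is exactly the computation/communication balance the article describes.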
And yet, despite the additional computing horsepower, coordinating the calculation only grows more challenging as cores are added. At the one-million-core level, previously innocuous parts of the computer code can suddenly become bottlenecks.
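Amdahl's law gives a rough sense of why these bottlenecks emerge (a back-of-the-envelope illustration, not an analysis of the Sequoia run): if a fraction p of the work parallelizes and the rest is serial, the speedup on n cores is capped at 1 / ((1 - p) + p / n).

```python
# Why "innocuous" serial code dominates at a million cores:
# Amdahl's law caps the speedup at 1 / ((1 - p) + p / n)
# for a parallel fraction p running on n cores.
def amdahl_speedup(parallel_fraction, n_cores):
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_cores)

# Even if 99.99% of the work parallelizes perfectly, the 0.01%
# serial remainder limits all 1,572,864 cores to under a
# 10,000x speedup over a single core.
print(round(amdahl_speedup(0.9999, 1_572_864)))
```

In other words, a serial section that cost nothing at a few thousand cores can cap the entire machine's performance at the million-core scale.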