
Redefining how supercomputers are ranked

Times have changed and the problems have changed, and in response, Sandia National Laboratories has developed a new benchmark for supercomputers called high-performance conjugate gradients, or HPCG.

[Image: the Trinity supercomputer]


By Brian Santo, contributing writer

There’s a new standard for measuring supercomputers that has scrambled the order of entries on the list of the fastest machines in the world.

If you don’t want to relent and agree that Tom Brady is the greatest of all time among quarterbacks (and who outside of New England wants to do that?), you can invoke a half-dozen different statistical approaches to argue that the real GOAT is Joe Montana, Johnny Unitas, Otto Graham, or any of a dozen other QBs. The GOAT among supercomputers, though? People get intense about that. The LeBron-versus-Jordan argument for the NBA GOAT is a lovefest in comparison.

For decades, supercomputer fans have used LINPACK to figure out which supercomputer was the fastest at any given point in time. But times have changed, the problems have changed, and in response, Sandia National Laboratories has developed a new benchmark for supercomputers called high-performance conjugate gradients, or HPCG.

LINPACK is a package of linear-algebra routines used to solve dense matrix problems. The aim is to measure a computer’s floating-point rate of execution, and the figure of merit is floating-point operations per second (flops). Supercomputers are measured in petaflops.
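To make that figure of merit concrete, here is a minimal sketch in Python of the kind of measurement LINPACK performs. This is not HPL itself; the matrix size and the (2/3)n³ flop count for LU factorization are textbook values, not HPL's exact accounting.

```python
import time
import numpy as np

# Illustrative sketch of what LINPACK measures, not HPL itself:
# time a dense solve and estimate flops from the standard
# (2/3)*n^3 leading-order cost of LU factorization.
n = 2000                              # matrix size (an arbitrary choice)
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))       # dense: every entry is stored
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)             # LU factorization + triangular solves
elapsed = time.perf_counter() - start

flops = (2 / 3) * n**3
print(f"~{flops / elapsed / 1e9:.1f} Gflops")
```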

The test has been tweaked several times over the years. The most recent version, High-Performance LINPACK (often referred to as HPL), was designed to accommodate distributed-memory (DM) computing. Through multiple iterations of LINPACK over the years, the implementation language switched from Fortran to C, but the basic nature of the test remained constant — like previous versions of LINPACK, HPL relies on linear equations to solve dense matrix problems.

Because dense matrix problems are both difficult and common in the real world, LINPACK and HPL served as a good common benchmark for supercomputers for years. But “good” isn’t “perfect.” There are many variables in supercomputer performance, and LINPACK doesn’t weight them all equally. The fantasy football version of the argument would be using total quarterback rating (total QBR) versus defense-adjusted yards above replacement (DYAR). When it comes to career total QBR, Aaron Rodgers crushes Brady, by the way.

It’s natural that the manufacturer of any system would tweak its product to do well in benchmark testing, and that’s what supercomputer makers have been doing. But some in the supercomputing industry complain that there are only a few people alive capable of continuing to optimize supercomputers for LINPACK benchmarking.

That may or may not be so, but the bigger issue is that LINPACK/HPL is no longer fully reflective of real-world problems.

In the last decade or so, there has been a significant change in the nature of the matrices — specifically the density of the matrices or, to be more precise, the lack of it. Until recently, most matrices were packed with nonzero values — they were dense. In the era of big data, the opposite is increasingly the case: many matrices are now filled mostly with zeros — they are sparse.

“The world is really sparse at large sizes,” said Mike Heroux, who wrote the original version of HPCG. Heroux is now director of software technology for the Department of Energy’s Exascale Computing Project. “Think about your social media connections: There may be millions of people represented in a matrix, but your row — the people who influence you — are few. So the effective matrix is sparse. Do other people on the planet still influence you? Yes, but through people close to you.”

A non-mathematician might assume that a sparse matrix is just a special case of a dense matrix (or vice versa), but that’s not necessarily so. From a computational perspective, using the same tools to solve the former as the latter isn’t the most efficient approach.

There is a high cost to both storing and calculating millions or even billions of matrix values, explained Sandia, and it shouldn’t be necessary to pay that cost when solving a matrix filled mostly with zeros.
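A back-of-the-envelope comparison shows the scale of that cost. The sizes below are illustrative assumptions (a million rows with roughly 50 nonzeros each, loosely in the spirit of Heroux’s social-media example), not figures from Sandia:

```python
import numpy as np
from scipy import sparse

# Assumed sizes for illustration: a million rows ("people"),
# ~50 nonzeros per row ("connections"). Not figures from the article.
n, per_row = 1_000_000, 50

dense_bytes = n * n * 8                          # float64 for every entry
csr_bytes = n * per_row * (8 + 4) + (n + 1) * 4  # CSR: values + column indices + row pointers

print(f"dense storage: {dense_bytes / 1e12:.0f} TB")
print(f"CSR storage:   {csr_bytes / 1e9:.2f} GB")

# A tiny concrete case: CSR keeps only the three nonzeros of this matrix.
A = sparse.csr_matrix(np.array([[4.0, 0.0, 0.0],
                                [0.0, 3.0, 0.0],
                                [0.0, 0.0, 2.0]]))
print(A.data, A.indices, A.indptr)
```

Under these assumptions, the dense version needs about 8 TB of memory while the sparse version fits in well under a gigabyte.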

HPCG takes a different approach, one suited to sparse systems. Sandia describes it as a preconditioned iterative method for solving systems containing billions of linear equations and billions of unknowns. HPCG is iterative in that the program starts with an initial guess at the solution and then computes a sequence of improved answers. Preconditioning refers to the use of other properties of the problem to converge quickly to an acceptably close answer, according to Sandia.
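Below is a minimal sketch of such a preconditioned iterative solve in Python. It is not the HPCG benchmark itself (HPCG pairs conjugate gradients with a more elaborate preconditioner on a 3-D grid problem); this sketch substitutes a simple 1-D Poisson-like matrix and a Jacobi (diagonal) preconditioner just to show the start-with-a-guess, iterate-to-convergence structure.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg, LinearOperator

# Sketch of a preconditioned conjugate gradient (CG) solve.
# Assumptions: a 1-D Poisson-like SPD matrix and a Jacobi (diagonal)
# preconditioner stand in for HPCG's actual problem and preconditioner.
n = 10_000
A = sparse.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1],
                 shape=(n, n), format="csr")     # sparse: ~3 nonzeros/row
b = np.ones(n)

inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: inv_diag * r)  # preconditioner

# CG starts from an initial guess and computes a sequence of improved
# answers until the residual is acceptably small.
x, info = cg(A, b, x0=np.zeros(n), M=M)
print("converged" if info == 0 else f"stopped early (info={info})",
      "| residual:", np.linalg.norm(b - A @ x))
```

On this constant-diagonal toy matrix, Jacobi scaling barely helps; in realistic problems, the choice of preconditioner is what lets the solver converge quickly, which is exactly the behavior HPCG is built to exercise.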

Because of its relevance to these modern workloads, HPCG is becoming increasingly popular. Even some of the developers of LINPACK have contributed to the latest version of HPCG.

At the time this article was written, the most recent LINPACK Top 500 list and the most recent HPCG list (only the seventh thus far) were both from November 2017. Fast is fast: from year to year, the supercomputers that appear in the Top 10 on one list tend to be in the Top 10 of the other. The order is almost always different, however.

On the most recent lists, for example, the Sunway TaihuLight computer operated by China’s National Supercomputer Center is first on the LINPACK list, but the K Computer in Japan tops the HPCG ratings (the Sunway machine ranks fifth on the HPCG list). Four U.S. supercomputers rank in both Top 10s, but the top-rated U.S. machine on the LINPACK list is the Titan machine at Oak Ridge in the fifth slot, while on the HPCG list, the Trinity supercomputer at Los Alamos ranks third.

Which is best? Sunway TaihuLight or the K Computer? Might be easier to decide whether to draft Drew Brees or Carson Wentz for your 2018 fantasy football team.
