The World’s Top Supercomputers
A new supercomputer recently took first place on the list of the world's top supercomputers. Zone Leader Jo Stichbury explains HPC and the Top500 listings.
I recently wrote about the distributed volunteer computing network (BOINC), which can legitimately be called a supercomputer based on the combined potential of the number of computers powering it. In verifying where BOINC is placed in the supercomputer rankings, I became intrigued by the data and details of today’s most powerful computer systems.
This article describes the world’s top supercomputers at the time of writing (September 2018), based on data published earlier this year (June 2018). I will also explain a little about how they work, what they are used for, and how their performance is measured.
Within a few months, it is likely that some of the world’s best supercomputers will be superseded as others come online. Computer performance has grown exponentially over the past few decades, and the most powerful computers in the world are no exception. To put it into context, the world’s most powerful supercomputer during the 1990s, used by the Human Genome Project, possessed significantly less power than a typical smartphone has today. It doesn’t seem that long ago (it was when I was a student, so it can’t be!) but it is many generations away in supercomputing terms.
What Is a Supercomputer?
A supercomputer is defined by Wikipedia as a “computer with a high-level of performance compared to a general-purpose computer.” So far, so tautological. A supercomputer is simply whatever sits in the top tier of high-performance computing at that particular point in time. For example, the first supercomputer, released in 1964, was called the CDC 6600. It used a single processor to achieve 3 million calculations per second, which was powerful at the time but is thousands of times slower than today’s iPhone.
Supercomputers consist of tens of thousands of processors working in parallel on different parts of a single larger calculation (a design known as “massively parallel”). The processors may be organized in a grid, where many distributed computers work together (as in the BOINC project); more commonly, they co-exist in a centralized “computer cluster.”
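To make the “massively parallel” idea concrete, here is a toy sketch in Python: one large calculation (summing a range of integers) is split into chunks that independent worker processes solve simultaneously, then the partial results are combined. The chunking scheme and worker count are illustrative choices, not how any real supercomputer schedules work.

```python
# Toy sketch of massively parallel decomposition: split one big
# calculation into chunks solved by independent worker processes.
from multiprocessing import Pool

def partial_sum(bounds):
    """Each worker sums its own slice of the range."""
    start, end = bounds
    return sum(range(start, end))

def parallel_sum(n, workers=4):
    # Divide the problem into one chunk per worker.
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs the remainder
    with Pool(workers) as pool:
        # Combine the partial results into the final answer.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 1_000_000
    # Cross-check against the closed form for 0 + 1 + ... + (n - 1).
    assert parallel_sum(n) == n * (n - 1) // 2
```

Real supercomputers apply the same divide-and-combine pattern, but across thousands of nodes communicating over a dedicated interconnect rather than a handful of processes on one machine.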
Supercomputers are used for a wide range of tasks that need intensive computation; they can crunch data far faster and more easily than regular computers and are useful in fields as varied as quantum mechanics, cybersecurity, medical research, climate science, oil and gas exploration, and molecular modeling.
The World’s Top 5 Supercomputers
TOP500 has ranked the world’s supercomputers since 1993. The list started out informally, as a ranking presented at a conference in June 1993, but it has been published twice a year ever since. The authors are Erich Strohmaier and Horst Simon of Lawrence Berkeley National Laboratory; Jack Dongarra of the University of Tennessee, Knoxville; and Martin Meuer of ISC Group, Germany.
The data I quote below is taken from the most recent list, compiled in June 2018, in which, for the first time since November 2012, the US reclaimed the title of hosting the world’s most powerful supercomputer. Besides the top spot for Summit, the US also claimed the number 3 spot with Sierra, interleaving with China at numbers 2 and 4. The top supercomputer in Japan takes position number 5, and just outside the top 5, at number 6, sits Europe’s top supercomputer, Piz Daint, installed at the Swiss National Supercomputing Centre.
How Are They Measured?
The performance of a supercomputer is measured in floating-point operations per second, or FLOPS. Modern supercomputers are sufficiently powerful that their performance is measured in teraFLOPS (one trillion, or 10^12, FLOPS) and even petaFLOPS (one quadrillion, or 10^15, FLOPS). Exascale computing, which is on the horizon, is where the performance of the supercomputer reaches the exaFLOPS range (one quintillion, or 10^18, FLOPS, i.e. one million teraFLOPS).
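These prefixes are easy to mix up, so here is the scale written out in Python, using Summit’s Rmax figure (quoted later in this article) as a worked conversion:

```python
# The FLOPS prefixes used in this article, as powers of ten.
FLOPS_SCALE = {
    "teraFLOPS": 10**12,  # one trillion
    "petaFLOPS": 10**15,  # one quadrillion
    "exaFLOPS":  10**18,  # one quintillion
}

# One exaFLOPS is a million teraFLOPS.
assert FLOPS_SCALE["exaFLOPS"] // FLOPS_SCALE["teraFLOPS"] == 1_000_000

# Summit's Rmax of 122,300 TFLOPS expressed in petaFLOPS:
rmax_tflops = 122_300
rmax_pflops = rmax_tflops * FLOPS_SCALE["teraFLOPS"] / FLOPS_SCALE["petaFLOPS"]
print(rmax_pflops)  # 122.3
```

In other words, today’s top machines sit at roughly a tenth of an exaFLOPS.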
In reporting the world’s top 5 supercomputers, I have quoted the following reported data:
Rmax - The maximum LINPACK performance achieved
Rpeak - Theoretical peak performance
Number of cores
The ordering of the supercomputers is based on their Rmax value, but in the case of equal performances for different computers, they are ordered by Rpeak.
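The ordering rule can be sketched in a few lines of Python. The figures below are the June 2018 Rmax/Rpeak values (in TFLOPS) for the top three machines, quoted later in this article; the dictionary layout is just an illustrative choice.

```python
# Top500 ordering rule: sort by Rmax descending, break ties on Rpeak.
machines = [
    {"name": "Sierra",            "rmax": 71_610.0,  "rpeak": 119_194.0},
    {"name": "Summit",            "rmax": 122_300.0, "rpeak": 187_659.0},
    {"name": "Sunway TaihuLight", "rmax": 93_014.6,  "rpeak": 125_436.0},
]

ranked = sorted(machines, key=lambda m: (m["rmax"], m["rpeak"]), reverse=True)
print([m["name"] for m in ranked])
# → ['Summit', 'Sunway TaihuLight', 'Sierra']
```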
The LINPACK benchmarks measure the number of 64-bit floating-point operations, generally additions and multiplications, that a computer can perform per second. These benchmarks are only an approximation of real workloads, since actual applications typically achieve far lower performance than a LINPACK run. Even so, the benchmark values are thought to be a closer reflection of typical performance than the peak values provided by the supercomputer’s manufacturer. The peak performance value is the maximum a machine could theoretically achieve: its clock frequency, in cycles per second, multiplied by the number of floating-point operations it can perform per cycle, summed across all of its cores.
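That Rpeak formula is simple enough to compute directly. The core count, clock speed, and FLOPs-per-cycle figures below are illustrative assumptions, not the specifications of any machine on the list:

```python
# Theoretical peak (Rpeak) for a hypothetical homogeneous machine:
# frequency x floating-point operations per cycle x number of cores.
def rpeak_tflops(cores, ghz, flops_per_cycle):
    """Return peak performance in TFLOPS."""
    flops = cores * ghz * 1e9 * flops_per_cycle  # total FLOPS
    return flops / 1e12                          # convert to TFLOPS

# e.g. 100,000 cores at 2.5 GHz, each doing 16 double-precision
# floating-point operations per cycle:
print(rpeak_tflops(100_000, 2.5, 16))  # 4000.0
```

The gap between a machine’s Rpeak and its measured Rmax is exactly why the LINPACK figure is used for the rankings: no real workload keeps every core doing its maximum number of operations on every cycle.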
In order of supercomputing magnitude, here are the top five (actually, the top five and a half):
Summit is located at Oak Ridge National Laboratory in Tennessee. It came online this year, and took the top spot on the Top500 supercomputer list in June 2018, despite having one-fifth as many cores and using half the power of its predecessor. Summit will be used in machine learning calculations for high-energy physics and human health.
Linpack performance (Rmax) 122,300 TFLOPS
Theoretical peak (Rpeak) 187,659 TFLOPS
Number of cores 2,282,544
Memory 2,801,664 GB
Sunway TaihuLight, located at the National Supercomputing Center in Wuxi, China, was the world’s most powerful supercomputer until it was knocked from the top spot by Summit.
Linpack performance (Rmax) 93,014.6 TFLOPS
Theoretical peak (Rpeak) 125,436 TFLOPS
Number of cores 10,649,600
Memory 1,310,720 GB
Sierra is situated at the Lawrence Livermore National Laboratory in California.
Linpack performance (Rmax) 71,610 TFLOPS
Theoretical peak (Rpeak) 119,194 TFLOPS
Number of cores 1,572,480
Memory 1,382,400 GB
Tianhe-2A is located at the National Super Computer Center in Guangzhou, China.
Linpack performance (Rmax) 61,444.5 TFLOPS
Theoretical peak (Rpeak) 100,679 TFLOPS
Number of cores 4,981,760
Memory 2,277,376 GB
While not on the official Top500 list, I have positioned the distributed volunteer network, BOINC, in the list at this point. It reports a 24-hour average of 26,234 TFLOPS, averaged over 650,342 computers, which places it between 4th and 5th place in the supercomputer hall of fame.
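A quick back-of-envelope check on those BOINC figures shows both where the network would sit in the rankings and how modest each individual volunteer machine is:

```python
# BOINC's reported 24-hour average, spread over its volunteer hosts.
total_tflops = 26_234
hosts = 650_342

# Between the Rmax of the number 4 (61,444.5 TFLOPS) and
# number 5 (19,880 TFLOPS) machines on the June 2018 list.
assert 19_880 < total_tflops < 61_444.5

# Average contribution per volunteer computer, in GFLOPS.
per_host_gflops = total_tflops * 1e12 / hosts / 1e9
print(round(per_host_gflops, 1))  # 40.3
```

So each volunteer machine contributes around 40 GFLOPS on average; the aggregate is impressive purely through scale.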
The AI Bridging Cloud Infrastructure (ABCI) is located at the National Institute of Advanced Industrial Science and Technology (AIST) in Japan.
Linpack performance (Rmax) 19,880 TFLOPS
Theoretical peak (Rpeak) 32,576.6 TFLOPS
Number of cores 391,680
Memory 417,792 GB
If you’re interested in finding out more, check out the Top500 website, and stay tuned for the updated listings, coming up in November 2018.
Also in November is the SC18 conference, the International Conference for High Performance Computing, Networking, Storage, and Analysis.
If you’re in the UK, you may want to find out more about the different tiers of supercomputing available. I recently wrote about a presentation that covered these, and about the first ARM-based supercomputer, which is coming to the UK later this year. You can find out more here, and you may be interested in the HPC and Quantum Summit in London in February 2019.