Mar 31, 2020 By Team YoungWonks *
Even as the world scrambles to fight the dreaded Coronavirus, all available resources are being pooled; it is perhaps not surprising, then, that the IBM Summit, the world’s fastest supercomputer (at least as of November 2019), is at the forefront of this fight.
According to a CNBC report, the IBM Summit has zeroed in on 77 potential molecules that may prove useful in treating the novel coronavirus. It is worth noting that this supercomputer - based at the Oak Ridge National Laboratory in Tennessee - has a peak speed of 200 petaFLOPS.
What, then, is a supercomputer? And what does petaFLOPS stand for? This blog offers an introduction to supercomputers, explaining these concepts and more…
What is a Supercomputer?
A supercomputer is a computer with a high level of performance compared to a general-purpose computer. The performance of a supercomputer is typically measured in floating-point operations per second (FLOPS). FLOPS is a measure of computer performance that is useful in scientific computing, where calculations involve very small and very large real numbers (floating-point numbers) and usually need to be processed quickly. It is a more accurate measure than million instructions per second (MIPS).
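To make the FLOPS metric concrete, here is a minimal Python sketch that estimates the floating-point rate of whatever machine runs it, by timing a dense matrix multiplication. The matrix size N is an arbitrary choice for illustration, and this is a rough back-of-the-envelope estimate, not a rigorous benchmark like the LINPACK test used for official rankings.

```python
# Rough FLOPS estimate: time a dense matrix multiply, which performs
# roughly 2 * N**3 floating-point operations for N x N matrices.
import time
import numpy as np

N = 2000                         # matrix dimension (assumed; adjust to taste)
a = np.random.rand(N, N)
b = np.random.rand(N, N)

start = time.perf_counter()
np.dot(a, b)                     # the floating-point heavy step being timed
elapsed = time.perf_counter() - start

flops = 2 * N**3 / elapsed
print(f"~{flops:.2e} FLOPS ({flops / 1e9:.1f} gigaFLOPS)")
print(f"A 200-petaFLOPS machine like Summit is roughly {2e17 / flops:.0f}x faster.")
```

Running this on a typical laptop gives a figure in the gigaFLOPS range, which helps put a 200-petaFLOPS machine in perspective.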
Since 2017, there have been supercomputers that can carry out over a hundred quadrillion FLOPS; a quadrillion FLOPS is one petaFLOPS, so these machines exceed 100 petaFLOPS. It is also interesting to note that today, all of the world’s 500 fastest supercomputers run Linux-based operating systems.
History of Supercomputers
The US, China, European Union, Taiwan and Japan are already in the race to create faster, more powerful and technologically superior supercomputers.
The US’s first big strides in the field of supercomputing can perhaps be traced back to 1964, when the CDC 6600 was manufactured by Control Data Corporation (CDC). Designed by American electrical engineer and supercomputer architect Seymour Cray, it is generally considered the first successful supercomputer, clocking a performance of up to three megaFLOPS. Cray used silicon transistors in place of germanium ones, as silicon could run faster. Moreover, he tackled the overheating problem by incorporating refrigeration into the supercomputer’s design. The CDC 6600 was followed by the CDC 7600 in 1969.
In 1976, four years after he left CDC, Cray came up with the 80 MHz Cray-1, which went on to become one of the most successful supercomputers ever, with its performance clocking in at an impressive 160 megaFLOPS. Then came the Cray-2, delivered in 1985, which performed at 1.9 gigaFLOPS and was at the time the world’s second fastest supercomputer after Moscow’s M-13.
Supercomputers today
Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their floating-point computing power. With a whopping 117 systems on the TOP500 list, Lenovo became the world’s largest provider of supercomputers in 2018.
As mentioned earlier, the fastest supercomputer today on the TOP500 supercomputer list is the IBM Summit. It’s followed by the Sierra, another American supercomputer with a peak speed of 125 petaFLOPS. Sunway TaihuLight in Wuxi (China), Tianhe-2 in Guangzhou (China), Dell Frontera in Austin (USA), Piz Daint in Lugano (Switzerland) and AI Bridging Cloud Infrastructure (ABCI) in Tokyo (Japan) are some of the other examples of supercomputers today. The US also dominates the top 10 list with five supercomputers while China has two.
Types of Supercomputers
Supercomputers fall into two broad categories: general purpose supercomputers and special purpose supercomputers.
General purpose supercomputers can be further divided into three subcategories: vector processing supercomputers, tightly connected cluster computers, and commodity computers. Vector processing supercomputers rely on vector or array processors. A vector processor is essentially a CPU that can perform the same mathematical operation on a large number of data elements at once; it is thus the opposite of a scalar processor, which works on one element at a time. Common in scientific computing, vector processors formed the basis of most supercomputers in the 1980s and early ’90s but are less popular now. That said, the CPUs in today’s supercomputers do incorporate some vector processing instructions.
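The scalar-versus-vector distinction can be sketched in a few lines of Python. NumPy’s whole-array operation stands in as a loose analogy for vector processing (one operation applied to many elements), while the explicit loop mimics scalar processing; this is only an illustration, not a model of 1980s vector hardware.

```python
# Scalar-style vs vector-style processing of the same workload.
import time
import numpy as np

data = np.random.rand(10_000_000)

# Scalar style: handle one element at a time in an explicit loop.
start = time.perf_counter()
squared_scalar = [x * x for x in data]
scalar_time = time.perf_counter() - start

# Vector style: one operation applied to the whole array at once.
start = time.perf_counter()
squared_vector = data * data
vector_time = time.perf_counter() - start

print(f"scalar loop: {scalar_time:.2f} s, vectorized: {vector_time:.4f} s")
```

The vectorized version is typically orders of magnitude faster, which is the same intuition behind building hardware that operates on whole arrays of data.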
Cluster computing refers to groups of connected computers that work together as a unit. These could be director-based clusters, two-node clusters, multi-node clusters, or massively parallel clusters. A popular example is a cluster whose nodes run a Linux OS and use free software to implement the parallelism. Grid Engine by Sun Microsystems and OpenSSI are examples of software used in such clusters; OpenSSI, for instance, offers single-system image functionality.
Director-based clusters and massively parallel clusters are often used for high performance, while two-node clusters are used for fault tolerance. Massively parallel clusters make for supercomputers in which a huge number of processors work simultaneously on different parts of a single larger problem; they essentially perform a set of coordinated computations in parallel, as the sketch below illustrates. The first massively parallel computer was the ILLIAC IV of the 1970s; it had 64 processors and delivered over 200 megaFLOPS.
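Here is a toy sketch of that “coordinated computations in parallel” idea, using Python’s multiprocessing module on a single machine. The worker count and the problem (summing squares over a large range) are arbitrary choices for illustration; on a real cluster the workers would be separate nodes communicating over a network rather than local processes.

```python
# Split one large problem into chunks, hand each chunk to a worker
# process, then combine the partial results into a single answer.
from multiprocessing import Pool

def partial_sum_of_squares(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    workers = 8                            # assumed worker count
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)        # ensure the last chunk reaches n

    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum_of_squares, chunks))

    # Same answer as the serial computation, but the work was divided up.
    print(total == sum(i * i for i in range(n)))
```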
Commodity clusters, meanwhile, are large numbers of commodity computers (standard-issue PCs) connected by high-bandwidth, low-latency local area networks.
Special purpose supercomputers, on the other hand, are built with the explicit purpose of achieving a particular task or goal. They typically use Application-Specific Integrated Circuits (ASICs), which offer better performance for that task. Notable examples include Belle, Deep Blue and Hydra - all built to play chess - as well as Gravity Pipe for astrophysics and MDGRAPE-3 for molecular dynamics in protein structure computation.
Capability over capacity
Supercomputers are typically designed for capability computing rather than capacity computing. Capability computing means harnessing maximum computing power to solve a single big problem - say, a very complex weather simulation - in as short a time as possible. Capacity computing means using efficient, cost-effective computing power to solve a few rather large problems or many small ones. Computing architectures employed for such routine, everyday workloads are often not considered supercomputers, despite their huge capacity, because they are not being used to tackle one highly complex problem.
Heat management in supercomputers
A typical supercomputer draws large amounts of electrical power, almost all of which is converted into heat, so cooling is essential. Much as in our PCs, overheating interferes with the functioning of the supercomputer and reduces the lifespan of several of its components.
There have been many approaches to heat management, from pumping Fluorinert through the system to hybrid liquid-air cooling and air cooling at normal air-conditioning temperatures. Makers have also resorted to steps such as using low-power processors and hot-water cooling. And since copper wires can deliver energy into a supercomputer at power densities higher than the rate at which forced air or circulating refrigerants can remove the resulting heat, the cooling system’s ability to remove waste heat remains a limiting factor.
Uses/ Applications of Supercomputers
While supercomputers (such as the Cray-1) were used mainly for weather forecasting and aerodynamic research in the 1970s, the following decade saw them being used for probabilistic analysis and radiation shielding modeling. In the 1990s, supercomputers were used for brute-force code breaking, and their uses then shifted to 3D nuclear test simulations. In the last decade (starting 2010), supercomputers have been used for molecular dynamics simulation.
The applications of supercomputers today also include climate modelling, weather forecasting and life science research. For instance, the IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to around one percent of a human cerebral cortex - 1.6 billion neurons with approximately 9 trillion connections. Governments also rely on supercomputers; the Advanced Simulation and Computing Program - run by the US federal agency National Nuclear Security Administration (NNSA) - currently uses them to manage and simulate the United States’ nuclear stockpile.
The vast capabilities of supercomputers extend beyond scientific research and complex simulations; they also hold immense potential in the realm of game development. By harnessing the power of supercomputers, game developers can create more intricate, realistic, and expansive virtual worlds, pushing the boundaries of what's possible in interactive entertainment. Recognizing the convergence of these fields, YoungWonks is at the forefront of tech education. Our coding classes for kids lay the groundwork in programming, while our game development classes for kids delve into the intricacies of creating immersive gaming experiences. By integrating knowledge of supercomputers into our curriculum, we're preparing the next generation to innovate and excel in a rapidly evolving digital landscape.
*Contributors: Written by Vidya Prabhu; Lead image by: Leonel Cruz