Supercomputers used in science are the most powerful machines available. They rely on High Performance Computing (HPC) technologies: Graphics Processing Units (GPUs), computer clusters, and nodes that aggregate processor cores, memory, storage units, and so on.
The advantage of this type of architecture is that it offers massively parallel, modular systems whose workloads are distributed among specialized units, yielding an overall system that is more efficient than a single large equivalent machine, at a relatively lower price (approximately €100,000 for a machine of 1 PFLOPS).
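The divide-and-conquer idea behind such architectures can be illustrated on a single multicore machine. The sketch below (a simplified illustration, not an HPC framework; the function names and the choice of four workers are assumptions for the example) splits one large computation into independent chunks processed in parallel, the same principle a cluster applies across thousands of nodes:

```python
# Minimal sketch: distribute independent chunks of work across processor
# cores, then combine the partial results -- the same pattern a cluster
# applies at much larger scale across nodes.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Split [0, n) into equal chunks and process them in parallel."""
    chunks = [(i * n // workers, (i + 1) * n // workers)
              for i in range(workers)]
    with Pool(workers) as pool:
        # Each worker handles one chunk; the results are then reduced.
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(1_000_000))
```

The key property that makes this scale is that the chunks are independent: no worker needs another worker's data, so adding units adds throughput almost linearly.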
These supercomputers offer a computation speed currently expressed in trillions or quadrillions of floating-point operations per second, that is, in teraFLOPS (TFLOPS) or petaFLOPS (PFLOPS). Their RAM is measured in thousands of gigabytes (terabytes, or TB) and their storage capacity on disk or tape in multiples of a petabyte (PB)!
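These peak figures follow from simple arithmetic over the machine's components. The back-of-the-envelope estimate below uses entirely hypothetical numbers (node count, cores per node, clock speed, and FLOPs per cycle are all assumptions for illustration, not the specification of any real machine):

```python
# Back-of-the-envelope peak-performance estimate:
#   peak FLOPS = nodes * cores_per_node * clock_Hz * FLOPs_per_cycle
# All figures below are hypothetical, chosen only to show the arithmetic.
nodes = 1000              # hypothetical number of nodes in the cluster
cores_per_node = 64       # hypothetical core count per node
clock_hz = 2.0e9          # assumed 2 GHz clock frequency
flops_per_cycle = 16      # assumed per-core throughput (e.g. wide SIMD + FMA)

peak = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"Peak: {peak / 1e12:.0f} TFLOPS = {peak / 1e15:.3f} PFLOPS")
# This hypothetical cluster reaches roughly 2 PFLOPS of theoretical peak.
```

Real sustained performance is lower than this theoretical peak, since memory bandwidth and communication between nodes also limit throughput.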
As can be seen, the performance of a latest-generation supercomputer is equivalent to that of hundreds of thousands of PCs working in parallel.
Why such power? To perform simulations and process data in a reasonable time, or even in real time, a computer needs significant hardware resources: microprocessors (often thousands of them) to perform calculations, and memory space (RAM and disk) to hold both its program and all of its data, variables, and results. The fewer physical constraints it faces (CPU speed, memory or disk space, input-output bus), the less it will be limited by its resources.
The faster the computer can process large volumes of data, the easier it is for programmers to use it to develop complex numerical models that take many variables into account.
The more complete and precise the numerical models (more detailed and more granular), the more faithfully users can simulate processes close to reality. Ultimately, the faster the system delivers results, the faster projects and research progress.