It reads like a paragraph from a Philip K. Dick sci-fi novel: high-performance computing (HPC) can perform quadrillions of calculations per second. Quadrillions is a word we seldom hear, let alone fully comprehend. Yet here we are: HPC can achieve it, catapulting us into a world of groundbreaking inventions, innovations and complex calculations.
To place that into perspective, a laptop or desktop with a 3 GHz processor can perform around three billion calculations per second. While that is far faster than any human can manage, it pales in comparison to an HPC solution.
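To make that gap concrete, the rough back-of-the-envelope sum below compares the two, assuming about three billion operations per second for the laptop and one quadrillion (10^15) for a petascale HPC system; the figures are illustrative round numbers, not benchmarks of any particular machine.

# Back-of-the-envelope comparison (illustrative round numbers, not benchmarks)
laptop_ops_per_second = 3e9    # ~3 GHz machine, assuming one operation per cycle
hpc_ops_per_second = 1e15      # a petascale system: one quadrillion operations/s

ratio = hpc_ops_per_second / laptop_ops_per_second
print(f"The HPC system is roughly {ratio:,.0f} times faster")   # ~333,333x

# A workload that keeps the HPC system busy for one hour would occupy the laptop for:
laptop_hours = ratio * 1
print(f"about {laptop_hours / 24 / 365:.0f} years")             # ~38 years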
Supercomputers are probably the best known HPC solutions; they contain thousands of compute nodes that work together to complete one or more tasks. This is called parallel processing.
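As a loose, single-machine analogy for parallel processing, the sketch below splits one summation across several worker processes using Python's multiprocessing module. Real supercomputers coordinate thousands of nodes over high-speed interconnects (typically with frameworks such as MPI), so this is only a toy illustration of the divide-and-combine idea.

# Toy illustration of parallel processing: split one task across workers,
# then combine the partial results. HPC clusters do this across thousands
# of nodes rather than a handful of CPU cores on one machine.
from multiprocessing import Pool

def partial_sum(bounds):
    start, end = bounds
    return sum(i * i for i in range(start, end))

if __name__ == "__main__":
    n = 10_000_000
    workers = 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # ensure the last chunk reaches n

    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))

    print(f"Sum of squares below {n}: {total}")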
HPC is crucial across various domains, from scientific research to financial modelling and game development. In the financial sector, for example, HPC is used to predict market trends, processing vast datasets to identify patterns and insights.
In gaming, the demand for high-performance machines at home underscores the even greater need for robust HPC infrastructure for game development and rendering. The development of 4K and 8K content, whether for gaming or streaming services like Netflix, relies heavily on HPC to manage the enormous computational requirements.
A strong mind needs a body
Like Vision in Marvel’s Avengers saga, HPC needs a body, or rather a data centre, to function optimally. Building these data centres comes at quite a cost, and requires careful operational, financial and technical consideration.
This also makes a case for organisations turning to hyperscale providers like Amazon and Microsoft, which offer HPC-as-a-service, allowing organisations to rent computational power on demand. This enables them to expand their HPC capabilities without significant upfront investment.
But for those who intend to go the HPC data centre route, the following should be carefully considered:
• Computing: This is the processing power required to execute complex calculations. It not only demands powerful processors, but also efficient interconnectivity to ensure seamless communication between computing nodes.
• Storage: HPC applications generate and manipulate vast amounts of data. Storage solutions should therefore be capable of handling massive datasets and providing quick access to information.
• Network: The network infrastructure is the backbone of HPC, facilitating communication between various components of the system. High-speed, low-latency networks are crucial for ensuring data transfer efficiency and minimising bottlenecks.
• Cooling facilities: The intense computational activities in an HPC environment generate substantial heat, necessitating advanced solutions such as liquid cooling and precision air conditioning. HPC data centres are power intensive, often requiring triple the power of traditional data centres.
Liquid cooling in particular is gaining prominence for its ability to cool high-power components such as processors and GPUs, reducing the overall thermal load on the system. This not only enhances energy efficiency, but also allows for more densely packed computing clusters, which is ideal for HPC.
HPC and cooling in action
Schneider Electric, together with power and cooling expert Total Power Solutions, designed and delivered a new, highly efficient cooling system to help reduce the power usage effectiveness (PUE) of University College Dublin’s (UCD) main production data centre.
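For readers unfamiliar with the metric, PUE is the ratio of total facility power to the power consumed by the IT equipment itself, so a value of 1.0 would mean every watt goes to computing. The short calculation below uses hypothetical figures purely to illustrate how the ratio works; it does not reflect UCD's actual numbers.

# PUE = total facility power / IT equipment power (1.0 is the theoretical ideal).
# The numbers below are hypothetical, purely to illustrate the calculation.
it_load_kw = 500               # power drawn by servers, storage and network gear
cooling_and_overhead_kw = 250  # cooling, lighting, power distribution losses

pue = (it_load_kw + cooling_and_overhead_kw) / it_load_kw
print(f"PUE = {pue:.2f}")      # 1.50; more efficient cooling pushes this towards 1.0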
UCD’s data centre was originally designed to accommodate HPC clusters and provides a platform for research across the university campus. Total Power Solutions and Schneider Electric replaced the existing data centre cooling system with the Uniflair InRow Direct Expansion (DX) solution. Schneider Electric’s InRow DX cooling technology offers benefits such as modular design, more predictable cooling and variable speed fans, which help to reduce energy consumption.
The solution at UCD includes 10 independent InRow DX cooling units, which are sized to the server load to optimise efficiency. The system is scalable to enable UCD to add further HPC clusters and accommodate future innovations in technology. This includes the introduction of increasingly powerful central processing units (CPUs) and graphics processing units (GPUs).
The InRow DX cooling units work in conjunction with UCD’s existing Schneider Electric EcoStruxure Row Data Centre System, providing a highly efficient, close-coupled design that is suited to high-density loads.