The Critical Role of Cooling for Data Centers
Valves, actuators, controls and pumps are the unsung heroes of data center operation, particularly across the many types of cooling systems that keep servers running.
In today’s hyperconnected world, the digital infrastructure that supports everything from streaming services and financial transactions to industrial automation and artificial intelligence is anchored by a rapidly expanding network of data centers. And while we hear about all the cybersecurity threats lurking around every online corner, there are other physical threats to running these data centers, and one of the biggest is heat. With hundreds and often thousands of servers in one facility, the amount of heat they produce is significant and must be managed. Efficient cooling technologies are evolving all the time and the role that valves, actuators and controls play is crucial to their success.
Growth of data centers
Global data center demand is increasing at unprecedented rates, and the cooling systems designed to manage the heat generated haven't always kept up, but that is starting to shift. The International Energy Agency estimates that data centers consumed 460 terawatt-hours (TWh) of electricity in 2022, nearly 2% of global electricity demand. In North America, data centers are being built at hyper speed, growing in size and complexity.
As artificial intelligence continues to grow in both consumer and business applications, compute workloads are more power-intensive than ever, and designing effective thermal management into new facilities is critical. Even small inefficiencies can have a significant impact, not just on operation but on operating costs and carbon emissions. So owners of these massive data centers are always looking for ways to optimize their systems while decreasing overall spending, and cooling is an area that can have a major impact.
Types of data center cooling technologies
There is not a one-size-fits-all solution for cooling data centers. The location of the site, facility size, climate, energy costs and workload of processors all contribute to deciding which cooling technology or technologies to employ.
Air cooling
The most traditional and still commonly used systems are air cooled. These systems typically use computer room air conditioners (CRAC) or computer room air handlers (CRAH) to circulate chilled air around the server racks. CRACs are devices that monitor and maintain temperature, humidity and air distribution, and they are more efficient and more controllable than traditional air conditioning systems. Humidity control is very important: too little humidity can cause static electricity buildup that could damage the electronics, while too much can lead to condensation on the equipment.
CRAC units are often set up on perforated, raised floor sections that form “cold aisles” to pump the cool air through the racks. Heat is then blown through the rear side of the racks and forms “hot aisles” before the air is returned to the CRAC intake to be recooled and redistributed. These systems run on refrigerant and require a compressor, a pump system and a series of valves to consistently circulate both the air in the room and the refrigerant in the system.
CRAH units circulate air over cooling coils filled with chilled water. The warm air is returned through the CRAH unit and continuously recycled in the system. These systems do not require compressors and consume less energy than many CRAC units, so they are often selected for these properties. Butterfly and ball valves are used for shutoff, and control valves regulate water flow throughout the chilled water system. Actuators are often used to dynamically adjust flow rates based on conditions, optimizing cooling while minimizing energy consumption.
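As a rough illustration of how an actuator might modulate a chilled-water valve against a temperature setpoint, the sketch below uses simple proportional logic; the setpoint, gain and names such as `valve_position` are hypothetical and not taken from any particular controller.

```python
# Minimal sketch of proportional chilled-water valve modulation (hypothetical names).
# Assumes the actuator accepts a 0-100% position command and a supply-air
# temperature sensor reports degrees Celsius.

SETPOINT_C = 24.0        # target supply-air temperature (illustrative)
GAIN = 15.0              # % valve opening per degree C of error (illustrative)

def valve_position(supply_air_temp_c: float) -> float:
    """Return a 0-100% chilled-water valve position from the temperature error."""
    error = supply_air_temp_c - SETPOINT_C   # positive error -> air too warm
    position = GAIN * error                  # open further as temperature rises
    return max(0.0, min(100.0, position))    # clamp to the actuator's travel

# Example: at 27 C the controller would command the valve to 45% open.
print(valve_position(27.0))
```

In practice the controller would be a PI or PID loop tuned to the coil and building management system, but the idea of translating a temperature error into a valve position is the same.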
In both types of systems, room controls that measure and monitor temperature and humidity can also be supplemented with robots that move through the facility, collecting measurements from a variety of points.
Liquid cooling
In high-density data centers, air cooling can’t always maintain the required temperatures, so liquid cooling is often used to absorb heat directly from the servers. The two most common approaches are direct-to-chip cooling and rear-door heat exchangers.
Direct-to-chip cooling circulates coolant through cold plates installed within the server racks and attached to high-heat components. Small-diameter tubing is often used to carry the coolant, and it must be of the highest integrity to ensure there is no leakage.
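To see why flow integrity matters, the coolant flow a cold-plate loop must carry can be estimated from the heat load and the allowable coolant temperature rise. The back-of-the-envelope sketch below assumes water-like coolant properties and a 100 kW rack purely for illustration; real coolants, loop designs and safety margins differ.

```python
# Back-of-the-envelope coolant flow estimate for a direct-to-chip loop.
# Assumes water-like coolant properties; real coolants and design margins differ.

CP_J_PER_KG_K = 4186.0      # specific heat of water
DENSITY_KG_PER_M3 = 997.0   # density of water near room temperature

def required_flow_lpm(heat_load_w: float, delta_t_k: float) -> float:
    """Liters per minute needed to carry heat_load_w at a given coolant temperature rise."""
    mass_flow_kg_s = heat_load_w / (CP_J_PER_KG_K * delta_t_k)   # m_dot = Q / (cp * dT)
    volume_flow_m3_s = mass_flow_kg_s / DENSITY_KG_PER_M3
    return volume_flow_m3_s * 1000.0 * 60.0                      # m^3/s -> L/min

# Example: a 100 kW rack with a 10 K coolant temperature rise needs roughly 144 L/min.
print(round(required_flow_lpm(100_000, 10.0)))
```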
Rear-door heat exchangers (RDHx) are installed on the rear of the server racks and are often used alongside air cooling systems. They require a chilled water system that sends water to a coolant distribution unit (CDU). The racks’ airflow pushes heat into the RDHx, where it is transferred to the continuously circulating chilled water and carried away.
Passive heat exchangers have no moving parts, only the heat exchanger with water circulating through it, mounted directly to the server racks. Active systems have fans mounted to the back of the RDHx that pull heat from the server racks into the exchanger.
RDHx units tend to perform well at warmer chilled water setpoints, so they can be more energy efficient than CRAC units. They are also less complicated in design, so they require less maintenance than CRAC or CRAH units overall.
Liquid cooling systems require a variety of valves, including globe valves and control valves, often proportional control valves paired with smart controllers to work dynamically. Actuators are often used to ensure that the cooling system circuits open or close safely during unplanned power outages. Solenoid valves are also used for quick on/off responsiveness during emergencies or in backup systems.
Immersion cooling
The latest and most innovative cooling systems are immersion cooling systems, in which servers are submerged in nonconductive dielectric fluids and heat transfers directly from the components into the fluid. This approach is highly efficient and very useful for heavy computing applications such as artificial intelligence servers that require much more computing power. In single-phase systems, the fluid is pumped through heat exchangers as a liquid. In two-phase systems, the fluid boils as it absorbs heat, then condenses and is recirculated.
Because the electronics are submerged, these fluids must be of very high purity and remain uncontaminated, controlled and contained. Diaphragm valves are often used to control the fluid, along with compact ball valves that can be reliably operated and shut off. Magnetic drive actuators are often used to prevent contamination because the actuator mechanism can be isolated from the fluid.
The brains behind it all: controls
Mechanical components and systems cool and circulate the fluids and cooling air, but automation and precision controls are required to keep everything operating. Building management systems, programmable logic controllers and a variety of other control systems monitor temperature and flow through sensors that provide real-time data. Valves and actuators are controlled to meet temperature, flow and energy efficiency goals. All of these systems must also have redundancies and alert systems to flag failures and readings outside set parameters for temperature, humidity and other variables. Many data centers today are designed using AI and computational fluid dynamics (CFD) tools to predict future needs for cooling, flow, energy requirements and more.
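As a simplified illustration of the limit checking such a control system performs, the sketch below compares sensor readings against configured bands and raises alerts; the point names and bands are hypothetical examples, not values from any specific building management system.

```python
# Simplified sketch of BMS-style limit checking (hypothetical point names and bands).
# A real building management system polls many points, logs trends and escalates
# alarms through redundant paths; this shows only the comparison step.

LIMITS = {
    "supply_air_temp_c": (18.0, 27.0),
    "relative_humidity_pct": (40.0, 60.0),
    "chilled_water_flow_lpm": (100.0, 400.0),
}

def check_readings(readings: dict) -> list:
    """Return alert messages for any reading outside its configured band."""
    alerts = []
    for point, value in readings.items():
        low, high = LIMITS[point]
        if not (low <= value <= high):
            alerts.append(f"ALERT: {point} = {value} outside {low}-{high}")
    return alerts

# Example: humidity below the configured band triggers an alert.
print(check_readings({"supply_air_temp_c": 24.0,
                      "relative_humidity_pct": 32.0,
                      "chilled_water_flow_lpm": 250.0}))
```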
In addition to the valves and actuators in each system, temperature and pressure measurement devices such as transducers provide constant feedback. Variable frequency drives on the pumps behind all of these systems, whether air or liquid, adjust coolant flow rates to match actual demand.
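A rough sketch of how a drive’s speed command might be scaled to the demanded flow is shown below, using the pump affinity relationship; the rated values and function names are assumptions, and real systems typically close the loop on measured differential pressure or temperature rather than open-loop math.

```python
# Illustrative use of the pump affinity law to set a VFD speed command.
# Assumes flow scales roughly linearly with pump speed (Q ~ N); actual systems
# close the loop on a differential-pressure or temperature sensor.

RATED_SPEED_HZ = 60.0
RATED_FLOW_LPM = 400.0
MIN_SPEED_HZ = 20.0          # keep the pump above a minimum safe speed

def vfd_speed_for_demand(demand_flow_lpm: float) -> float:
    """Scale drive frequency to the requested coolant flow."""
    speed = RATED_SPEED_HZ * (demand_flow_lpm / RATED_FLOW_LPM)
    return max(MIN_SPEED_HZ, min(RATED_SPEED_HZ, speed))

# Example: half of rated flow calls for roughly half of rated speed, 30 Hz.
print(vfd_speed_for_demand(200.0))
```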
Other considerations for design
The demand for new data centers is only increasing, and the density of these centers is growing exponentially. By some estimates, cooling alone accounts for up to 40% of a data center’s total energy usage. A recent webinar presented by Black & Veatch reported that the movement to high-density data centers is driven by several trends, including:
- The cost of land with access to power, fiber and cabling infrastructure, and water. Single-story data centers are being replaced by multistory buildings to accommodate more server racks.
- The increasing demand for computational power and the ability of individual computers to process more data than ever in a smaller footprint.
- Smaller-footprint data centers, due to higher density, will require even more cooling and power to support their operations, which is changing how data centers and cooling systems are designed.
- Traditional server racks were designed for power loads in the range of 5 kW to 15 kW. High-density, higher-powered racks today often require 100-150 kW, with leading-edge designs going as high as 1 MW per rack. This requires larger feeder systems for power distribution and makes it more challenging to fit the systems into smaller footprints. Black & Veatch is looking to utilize superconductors to reduce the size of feeders: traditionally, a 400-amp feed in conduit required 10-12 six-inch conduits, but with a superconductor this can be done in one six-inch pipe, says Luke Platte of Black & Veatch.
- This large energy need is part of the drive for companies to explore small modular nuclear reactors (SMRs) to run off-grid and power individual data centers. Amazon, Google and Meta are just a few of the tech companies that have recently announced they are exploring SMRs, both to power their growing energy demands independently from the public utility grid and to help meet internal carbon-reduction targets.
What’s needed next
As all these factors converge, cooling systems will need to be more adaptive and continually more efficient and effective. Digital twins are being employed, along with CFD, to better estimate and plan for the needs of future data centers. Cooling systems are essential to these facilities, and valve, actuator, control and pump manufacturers are critical suppliers for their operation. Ensuring product performance and reliability will be key to winning new business in this ever-expanding market.