By Cameron Wynne
Equipment and infrastructure modernization is critical to running an efficient data center. The IT industry and computing demands are evolving rapidly alongside efforts to achieve greater sustainability. Growth in the data center industry has also begun to outpace growth in power generation capacity – a convergence that means data center engineers need to do more with less, putting efficiency center stage. Businesses seeking to remain agile in this landscape must prioritize the operational performance of their data centers. These efforts include implementing technology that delivers measurable improvements in performance, cost, and energy utilization, as well as meeting the rising challenge of denser computing environments.
Data centers are among the highest consumers of electric power. IT systems – computer servers, data storage, and networking – consume the bulk of that power, and nearly all of it is converted into heat. The most power-hungry non-IT component of the data center is the cooling system, which keeps the IT equipment at its ideal operating temperature.
Data center power and design features are susceptible to aging – both as equipment loses its ability to function optimally and as emerging technology renders prior generations obsolete. An intelligent combination of design and innovative technology is needed to reduce power consumption, use the power that is drawn more efficiently, and eventually reach net zero emissions. Data center engineers can benefit significantly from updating their design architectures and evaluating how industry advancements can be leveraged to reduce energy consumption and improve cooling efficiency.
Power
Power Usage Effectiveness (PUE) is a helpful metric for determining how efficiently the data center uses its power resources. PUE is the ratio of the data center’s overall load to the IT equipment’s critical load. A lower PUE has a host of benefits: the data center is more efficient, makes the best use of its power without wasting resources, and costs less to operate. From a profitability standpoint, lowering the PUE also allows more IT equipment to run on the same utility power.
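To make the ratio concrete, the short Python sketch below computes PUE from hypothetical load figures (the 1,000 kW IT load and the cooling and overhead numbers are illustrative assumptions, not measurements from any particular facility) and shows how trimming overhead power lowers the ratio.

```python
# Minimal sketch of the PUE calculation described above.
# All load figures are hypothetical and exist only to illustrate the ratio.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT (critical) power."""
    return total_facility_kw / it_load_kw

if __name__ == "__main__":
    it_load_kw = 1_000          # critical IT load (servers, storage, network)
    cooling_kw = 350            # chillers, CRAH fans, pumps
    other_kw = 150              # UPS losses, lighting, controls
    total_kw = it_load_kw + cooling_kw + other_kw

    print(f"PUE = {pue(total_kw, it_load_kw):.2f}")        # 1.50

    # Trimming 100 kW of overhead lowers PUE and frees that power
    # for additional IT equipment on the same utility feed.
    print(f"PUE = {pue(total_kw - 100, it_load_kw):.2f}")  # 1.40
```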
Lowering PUE doesn’t come from a single approach. Data center engineers integrate numerous techniques while continually monitoring and analyzing data to make both large-scale and minute adjustments that improve their facility’s PUE. Balancing cooling reliance and improving containment strategies are examples of techniques that contribute to better energy management. Yet when it comes to older data centers, purchasing and installing newer, more modern equipment can remediate inefficiencies and decrease power waste in surprising ways.
Cooling Via Fans
Today’s IT equipment and servers run at very high temperatures, putting them at risk of overheating and failing. Data centers are designed to maintain an optimally cooled environment to minimize the IT equipment failure rate. Computer Room Air Handlers (CRAHs) blow cold air, pressurizing the space under raised data center access floors. Perforated tiles in the floor in front of each cabinet selectively direct the cold air to the front of the IT equipment.
While data hall underfloor cooling has mostly stayed the same in recent years, expert technicians can make measurable efficiency gains by modulating fan speeds to reduce energy consumption in the data center. For example, the fans inside a CRAH unit running at 100% will use approximately five times the power of the same CRAH unit with its fans running at 50%.
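The roughly five-fold difference described above follows from the fan affinity laws, under which fan power scales with speed raised to an exponent of about 3 in the ideal case, and often closer to 2 to 2.5 in installed units. The Python sketch below is illustrative only; the 2.3 exponent is an assumption chosen to match the approximate figure cited in the text.

```python
# Sketch of the fan speed vs. power relationship behind the figures above.
# Affinity laws give power ~ speed**3 in the ideal case; an exponent of ~2.3
# (an assumption, not a measured value) reproduces the roughly five-fold
# difference between 100% and 50% speed cited in the text.

def relative_fan_power(speed_fraction: float, exponent: float = 2.3) -> float:
    """Fan power relative to full speed, for a given affinity-law exponent."""
    return speed_fraction ** exponent

for pct in (100, 80, 60, 50):
    rel = relative_fan_power(pct / 100)
    print(f"{pct:>3}% speed -> {rel:0.2f}x of full-speed power "
          f"({1 / rel:0.1f}x less than full speed)")
```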
By adding more CRAHs with modulated fans, engineers can reduce their fan speeds and gain considerable power savings. When delivered at scale across the data center, these exacting calibrations improve PUE and measurably reduce the total power necessary to cool the data center.
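As a rough illustration of that trade-off, the sketch below compares CRAH fleets delivering the same total airflow, assuming (hypothetically) that airflow scales linearly with fan speed, that fan power follows the same ~2.3 affinity-law exponent as above, and that each unit draws 10 kW at full speed. More units running slower move the same air for a fraction of the fan power.

```python
# Sketch of why adding CRAH units and slowing their fans saves power.
# Assumptions (illustrative only): airflow is proportional to fan speed,
# fan power follows an affinity-law exponent of ~2.3, and one CRAH draws
# 10 kW at 100% speed.

EXPONENT = 2.3
FULL_SPEED_KW = 10.0   # hypothetical fan power of one CRAH at 100% speed

def total_fan_power(units: int, required_airflow_units: float) -> float:
    """Total fan power when `units` CRAHs share the required airflow equally."""
    speed_fraction = required_airflow_units / units   # airflow ~ fan speed
    return units * FULL_SPEED_KW * speed_fraction ** EXPONENT

# Airflow equivalent to 4 CRAHs running flat out.
required = 4.0
for n in (4, 6, 8):
    print(f"{n} CRAHs at {required / n:>4.0%} speed -> "
          f"{total_fan_power(n, required):5.1f} kW of fan power")
```

In practice, the fan power saved has to be weighed against the capital cost and floor space of the additional units, which is part of the calibration work described above.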
Cooling Via Water
Water temperatures can be altered to reduce energy consumption, as well. Closed-loop water systems deliver cold water to CRAHs, resulting in cold forced air. Calibrating systems to handle temperature differences and fluctuations is essential to cooling efficiency.
Data center design engineers and data center operators work together to evaluate the weather characteristics of the data center location, the air temperature to be supplied to the critical IT equipment, and the chiller and CRAH configuration. Optimizing the system as a whole leads to the ideal chilled water temperature, which will use the least energy for the lowest construction cost while providing all the cold air that the critical IT equipment needs.
Additional efficiency can be created by maximizing both air and water differential temperatures. Higher chilled water temperatures increase efficiency and reduce power usage at a larger scale. Managing all equipment thresholds and implementing efficiency measures avoids over-cooling – a best practice for data center operations.
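One way to see why wider differentials help is the basic heat-transfer relationship Q = ṁ · cp · ΔT: for a fixed IT heat load, a wider water-side ΔT requires proportionally less water flow, and pumping power falls faster than flow. The Python sketch below uses an assumed 1 MW load and illustrative ΔT values, not figures from this article.

```python
# Sketch of how a wider chilled water delta-T reduces the work the plant does.
# Assumptions (illustrative only): a 1 MW IT heat load carried entirely by the
# chilled water loop; water properties at typical loop temperatures.

HEAT_LOAD_KW = 1_000.0       # IT heat to be rejected, kW
CP_WATER = 4.186             # specific heat of water, kJ/(kg*K)
DENSITY_WATER = 998.0        # kg/m^3

def required_flow_l_per_s(delta_t_c: float) -> float:
    """Water flow needed to carry HEAT_LOAD_KW at a given delta-T (Q = m*cp*dT)."""
    mass_flow_kg_s = HEAT_LOAD_KW / (CP_WATER * delta_t_c)
    return mass_flow_kg_s / DENSITY_WATER * 1_000.0   # litres per second

for delta_t in (5.0, 8.0, 12.0):
    print(f"delta-T {delta_t:>4.1f} C -> {required_flow_l_per_s(delta_t):6.1f} L/s")
```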
High energy consumption continues to be an issue in the data center, but organizations are moving to more energy-efficient systems to manage and reduce their power usage. Calibrating equipment, upgrading the data center with CRAH units that provide variable-speed fans, and aligning chilled water temperatures with American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) guidance will help data centers become more efficient and reduce electricity usage.
Cameron Wynne is Chief Data Center Officer at Element Critical. He can be reached at cwynne@elementcritical.com.