

Optimizing Cooling Performance of a Data Center Using CFD Simulation and Measurements


©2018 This excerpt taken from the article of the same name which appeared in ASHRAE Journal, vol. 60, no. 7, July 2018.

By Amir Radmehr, Ph.D.; John Fitzpatrick; Kailash Karki, Ph.D., Member ASHRAE

About the Authors
Amir Radmehr, Ph.D., is a member of the technical staff at Innovative Research, Inc. John Fitzpatrick is the director of enterprise data centers at University of Rochester. Kailash Karki, Ph.D., is a member of the technical staff at Innovative Research, Inc.

In this article, we present a case study that combines computational fluid dynamics (CFD) modeling and measurements to evaluate the cooling performance of a raised-floor data center. To improve cooling efficiency, we propose enhancements such as equipping the blowers of the computer room air-handling (CRAH) units with variable frequency drive (VFD) electric motors, adjusting blower speed to maintain a set pressure below the raised floor, and raising the temperature setpoints of the CRAH units. These enhancements were evaluated and fine-tuned using CFD modeling. After their implementation, rack temperatures and the energy consumption of the data center were monitored for several months. These data showed that rack inlet temperatures stayed below the ASHRAE-recommended maximum and that the energy consumption of the data center was reduced by 58%. The cost of the enhancements will be recovered through operating-cost savings in about 1.5 years.
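As an aside (not part of the article), the pressure-based fan control described above can be sketched as a simple proportional loop: the VFD raises blower speed when underfloor static pressure falls below the setpoint and lowers it when pressure rises above it. The setpoint, gain, and speed limits below are illustrative values, not the site's actual settings.

```python
def vfd_speed_step(current_speed_pct, measured_pa, setpoint_pa=12.0, gain=2.0):
    """Return the next blower speed (% of full speed).

    Illustrative proportional control: if measured underfloor pressure
    is below the setpoint, speed is raised; if above, speed is lowered.
    Speed is clamped to a hypothetical 30-100% VFD operating range.
    """
    error = setpoint_pa - measured_pa              # Pa below (+) or above (-) target
    next_speed = current_speed_pct + gain * error  # proportional correction
    return max(30.0, min(100.0, next_speed))       # respect VFD speed limits
```

For example, with the assumed gain of 2.0, a blower at 80% speed reading 10 Pa against a 12 Pa setpoint would step up to 84%; a real installation would tune the gain (or use a full PID loop) to avoid hunting.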

A large number of data centers are routinely overcooled, resulting in an unnecessary increase in energy consumption and operating cost. The reasons for overcooling include concerns, mostly unfounded, about the reliability of computer equipment; the inability of the cooling infrastructure to respond to changes in the data center; and the lack of proper tools to guide the changes required to improve cooling efficiency and to predict their effects. Several developments in recent years have eliminated much of the rationale for overcooling. These developments include:

  • A better understanding of the effect of cooling-air temperature on the performance of servers
  • Availability of control systems on cooling devices
  • Adoption of CFD modeling for predicting airflow and temperature distribution in data centers

In this study, we took advantage of these developments to improve the cooling efficiency of a data center. We used CFD to identify the cooling issues in the data center and to evaluate various enhancements. CFD modeling has been used widely in other industries since the early 1970s. It became popular for data center applications in the early 2000s, and it is now standard practice both in designing new data centers and in resolving cooling problems and inefficiencies in existing facilities.
We used CFD simulations to propose changes in the data center and to study the effect of these changes on cooling. For this simulation-based strategy to succeed, the CFD model must be validated. For this validation, we used measurements of the current (as-is) conditions in the data center. In an operating data center, there are uncertainties in certain inputs needed by the model; the measurements were also used to verify and fine-tune these input parameters.

The Data Center

The data center is a raised-floor space with a floor area of approximately 750 m2 (8,000 ft2), located in Rochester, N.Y. At the time of the study, it housed 175 server racks positioned in a hot-aisle/cold-aisle arrangement. The total IT heat load was 320 kW (1,088 kBtu/h). The space was cooled by eight down-flow, chilled-water CRAH units running at 100% fan speed.
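As a quick sanity check on these figures (our arithmetic, not a calculation from the article), the reported load and floor area imply a modest average heat density:

```python
# Values taken from the article; the derived densities are our own arithmetic.
floor_area_m2 = 750.0   # approximate raised-floor area
it_load_kw = 320.0      # total IT heat load
rack_count = 175        # server racks at the time of the study

density_w_per_m2 = it_load_kw * 1000.0 / floor_area_m2  # ~427 W/m2
kw_per_rack = it_load_kw / rack_count                   # ~1.8 kW per rack (average)
```

An average density in this range is low by modern standards, which is consistent with the article's premise that the room could tolerate warmer supply air and reduced fan power.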

The data center does not have a drop ceiling, so hot air returns to the CRAH units through the room. Extension ducts are installed on the return side of the CRAH units to pull in hot air from regions near the ceiling, preventing this air from recirculating to the racks. Perforated tiles with 25% open area, equipped with dampers, were used to deliver airflow to the racks. For perforated tiles in front of racks with little or no heat load, the dampers were closed.