

LBNL’s High Performance Computing Center: Continuously Improving Energy And Water Management

By Jingjing Liu, P.E.; Norman Bourassa


©2020. This excerpt is taken from the article of the same name, which appeared in ASHRAE Journal, vol. 62, no. 12, December 2020.

About the Authors
Jingjing Liu, P.E., is a program manager and researcher at Lawrence Berkeley National Laboratory (LBNL) in Berkeley, Calif. Norman Bourassa works in LBNL’s National Energy Research Scientific Computing (NERSC) building infrastructure group.

High performance computing (HPC) centers are unique in certain aspects such as task scheduling and power consumption patterns. However, they also share commonalities with other data centers, for example, in the infrastructure systems and opportunities for saving energy and water. The success and lessons learned at LBNL’s National Energy Research Scientific Computing Center (NERSC) can be useful for other data centers with proper adoption considerations.

NERSC HPC Facility Today

Lawrence Berkeley National Laboratory’s (LBNL) HPC center, NERSC, has a mission to support U.S. Department of Energy (DOE) Office of Science-funded scientific research by providing HPC resources to science users with high availability and high utilization of the machines.

NERSC has been located in Shyh Wang Hall, a LEED Gold-certified building, on LBNL’s main campus since 2015. The current main production system is Cori, a 30 petaflops high performance computing system. The facility consumes an average of 4.8 gigawatt-hours per month. To track its energy efficiency, the NERSC team has implemented rigorous 15-minute-interval measurement of power usage effectiveness (PUE), drawing from an extensive instrumentation and data storage system referred to as Operations Monitoring and Notification Infrastructure (OMNI). So far, the team has achieved over 1.8 gigawatt-hours of energy savings and 0.56 million gallons (2.1 million L) of water savings annually. The current Level 2 PUE annual average is a very efficient 1.08, but the team is working on lowering it further.
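The PUE metric itself is simple: total facility power divided by IT equipment power, so a value of 1.08 means only 8% overhead beyond the computing load. A minimal sketch of how interval and annualized PUE might be computed is below; the function names and the example readings are illustrative assumptions, not NERSC's actual OMNI data or code.

```python
def interval_pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE for a single measurement interval:
    total facility power / IT equipment power."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

def annual_pue(facility_kwh_intervals: list[float],
               it_kwh_intervals: list[float]) -> float:
    """Annualized PUE from per-interval energy totals.
    Summing energy first weights intervals correctly, unlike
    averaging the per-interval PUE values."""
    return sum(facility_kwh_intervals) / sum(it_kwh_intervals)

# Hypothetical 15-minute readings: 6,480 kW facility load
# against a 6,000 kW IT load gives a PUE of 1.08.
print(round(interval_pue(6480.0, 6000.0), 2))
```

Note that an annual average should be computed from cumulative energy, not by averaging the interval PUE values, since lightly loaded intervals would otherwise be overweighted.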

Besides the data center’s facility infrastructure efficiency, the team also pioneered the use of an open-source scalable database solution to combine facility data with computing system data for important daily operational decisions, ongoing energy efficiency tuning and future facility designs.

There are several important reasons why LBNL gives NERSC energy efficiency significant attention and resources despite the relatively low electricity prices at LBNL:

  • NERSC consumes about one-third of LBNL’s total energy;
  • Federal law and the University of California, which operates LBNL under contract, impose energy efficiency requirements;
  • The lab has a strong culture of sustainability and environmental conservation; and
  • The compressor-free cooling systems at times require close attention to operating conditions and settings to maintain energy efficiency.

NERSC Facility Design

Before moving to its current home, NERSC was located at a facility in Oakland, Calif., that had an estimated PUE of 1.3. Designing a more efficient new facility on the main campus was a priority of LBNL management. One bold measure was to take full advantage of the mild local weather in Berkeley and eliminate compressor-based cooling, which is most commonly used for high-availability data centers.

The new facility is cooled by both outdoor air and cooling tower-generated cooling water. Because the installed peak compute power was 6.2 megawatts, only about half of the compute substations’ full capacity, the air-handling units (AHUs) and cooling towers were sized modularly, with help from the lab’s Center of Expertise for Energy Efficiency in Data Centers (CoE), leaving space for future expansion. This approach yielded substantial savings in cooling equipment cost.

