Shaping Tomorrow's Built Environment Today

ASHRAE Journal Podcast Episode 22


This episode is provided with support from Carrier.


From left: Bob McFarlane, Terry L. Rodgers and Marcus Hassen

Energy Efficiency in Data Centers

ASHRAE's Manager of Codes, Emily Toto, and Assistant Manager of Standards, Thomas Loxley, speak with Marcus Hassen, P.E., Chair of SSPC 90.4, and Bob McFarlane and Terry Rodgers, founding members of SSPC 90.4, about data center energy efficiency as well as the committee's latest publication, Standard 90.4-2022.

To purchase and download Standard 90.4-2022: Energy Standard for Data Centers, click here.

To keep up to date with SSPC 90.4, click here.

To join the SSPC 90.4 e-mail listserv, click here.

If you are interested in joining a project committee, such as SSPC 90.4 Energy Standard for Data Centers, click here.

If you are interested in joining a technical committee, such as TC 9.9 Mission Critical Facilities, Data Center, Technology Spaces and Electronic Equipment, click here.

Have any great ideas for the show? Contact the ASHRAE Journal Podcast team at podcast@ashrae.org

Interested in reaching the global HVACR engineering leaders with one program? Contact Greg Martin at 01 678-539-1174 | gmartin@ashrae.org.

Available on: Spotify | Apple Podcasts | Google Podcasts
Podcast Addict | And Other Podcast Players
RSS Feed | Download the episode.

Do you have questions or comments? Let us know!
  • Host Bios

    Emily Toto, Manager of Codes, ASHRAE

    Emily Toto is ASHRAE's Manager of Codes and staff liaison for Standards 90.1 and 90.2. Before coming to ASHRAE, she worked in the pharmaceutical industry and taught high school science. Emily has degrees from Georgia Tech (BS Civil Engineering) and The University of Texas at Austin (MS Mechanical Engineering). Proud mom to two fantastic kids and a growing number of animals (apologies to Mr. Toto), Emily is a firm believer in protecting our environment for future generations and is proud to support ASHRAE in that mission.


    Thomas Loxley, Assistant Manager of Standards–Codes, ASHRAE

    Thomas Loxley is a graduate of the University of Kentucky (BS in Biosystems and Agricultural Engineering) and Auburn University (MS in Biosystems Engineering). He comes to ASHRAE with manufacturing industry experience and currently serves as staff liaison to Standing Standard Project Committees 90.4 and 189.1. Thomas also works with teams of members to write guidelines for the Task Force for Building Decarbonization.

    As the Assistant Manager of Standards–Codes, he enjoys working with a wide range of professionals to write new standards for a brighter future.

    Thomas resides in Decatur, Ga., with his wife, daughter and dog.

  • Guest Bio

    Bob McFarlane has more than 50 years of experience in communications consulting, including data center power and cooling, cable design, audio, video, telephony, computer communications and acoustics.
    Bob's expertise in the design of data center infrastructure is unmatched, and he is widely regarded as an industry expert in this critical, complex and fast-changing field. He teaches the Data Center Facilities course in the Marist College IDCP program, writes extensively on all aspects of the industry and is a popular speaker at numerous seminars.
    Additionally, Bob was one of the fifteen experts selected to develop the ASHRAE 90.4 Energy Standard for Data Centers and continues this work as a voting member of the continuous maintenance committee for this important standard. He is also the editor and a principal writer of ASHRAE Handbook Chapter 20 on Data Center Design and a writer of several chapters in the 3rd edition of Book #3, Design Considerations for Datacom Equipment Centers, of the ASHRAE TC 9.9 Datacom series.

    Terry L. Rodgers has over 40 years of progressive experience in Critical Facilities operations and management including strategic planning, critical infrastructure design, operations and commissioning; business protection and recovery; preventive and predictive maintenance; and technical training development.
    Terry is an ASHRAE Distinguished Lecturer and member of ASHRAE TC 9.9, Mission Critical Facilities, Data Centers, Technology Spaces and Electronic Equipment; ASHRAE SSPC 90.4, Energy Standard for Data Centers; ASHRAE SPC-127, Method of Testing for Rating Computer Room Air Conditioners; and GPC-1.6P, Commissioning Process for Data Centers. He is on the Board of Directors of the 7x24 Exchange Carolinas Chapter and the Board of Directors of the Building Commissioning Association's Southeast Region. He works with the Lawrence Livermore National Lab's Energy Efficient High Performance Computing Working Group and has authored or co-authored books, whitepapers and presentations on critical facilities, facilities management and commissioning. He has developed and taught training programs in the design, construction, operation and maintenance of critical facilities including commercial nuclear power plants, aerospace facilities and large data centers.
    Terry has performed site reliability assessments for more than 70 sites in over 20 countries on five continents. Terry wrote a bi-monthly column in Mission Critical magazine called Sustainable Operations. Terry is currently the VP and National Leader for the Commissioning and Building Analytics business unit at Jones Lang LaSalle (JLL).

    Marcus Hassen is a seasoned technology leader with over 25 years of industry experience spanning management, engineering, construction, commissioning and operations, with a focus on data centers and advanced technology sectors. He is a recognized data center industry leader with a deep understanding of the critical environment: its reliance on the design, construction and commissioning of its technology and facility infrastructure systems, and on the proper training and readiness of its operations staff, to successfully deliver on its fundamental business mission.
    Marcus has authored trade articles and served as an event speaker / panelist on design, energy efficiency, sustainability and operational best practices in the mission critical environment. He is an Uptime Institute Network Member Principal, contributor to the Data Center Dynamics Management and Operations Channel and Board Member of the Carolinas Chapter of the 7x24 Exchange, serving as the organization's Thought Leadership Committee Chairman.
    Marcus is a Voting Member and Chair of the ASHRAE SSPC 90.4 Energy Standard for Data Centers Committee. He is a graduate of Virginia Polytechnic Institute and State University and joined Truist in 2021 as Vice-President of Mission Critical Services.

  • Transcription

    ASHRAE Journal:

    ASHRAE Journal presents.

    Thomas Loxley:

    Hi everyone and welcome back. This is the ASHRAE Journal Podcast, episode 22. I'm Thomas Loxley, ASHRAE's Assistant Manager of Standards and Codes.

    Emily Toto:

    And I'm Emily Toto, ASHRAE's manager of codes. We're excited to kick off season four of the podcast with an episode covering data centers, energy efficiency and standard 90.4.

    Thomas Loxley:

    Joining us today are Marcus Hassen, Bob McFarlane, and Terry Rodgers. Marcus Hassen is Chair of Standing Standard Project Committee 90.4 and a VP at Truist in Charlotte, North Carolina.

    Marcus Hassen:

    Greetings, thrilled to be here.

    Thomas Loxley:

    Bob McFarlane is a TC 9.9 board member and SSPC 90.4 founding member, and a principal of Shen Milsom & Wilke. Bob also serves as an adjunct professor at Marist College.

    Bob McFarlane:

    So glad to be telling people about this relatively new but very important ASHRAE standard.

    Thomas Loxley:

    Terry Rodgers is an ASHRAE distinguished lecturer, a member of ASHRAE TC 9.9, and a founding member of SSPC 90.4. He is the Vice President of Commissioning and Building Analytics at Jones Lang LaSalle.

    Terry Rodgers:

    And hello all and thanks for the opportunity to discuss data centers and 90.4 today.

    Thomas Loxley:

    Gentlemen, welcome to the podcast. Let's get started with a quick summary to get listeners acquainted with data centers if they aren't already. A data center is a hub for storing, processing and sending out information. They can contain networks of computers as well as computing infrastructure. They're essential to every sector of our economy, but especially important for mission-critical facilities.

    Emily Toto:

    That's right, Thomas. When we say mission-critical facilities, we're thinking about everything we rely on and do not want going out of service. That's 911 call centers, fire and police dispatch, hospitals, networks that support national security and a lot of the banking we do and the internet in general. And by the way, ASHRAE Technical Committee TC 9.9 covers all mission-critical facilities, and therefore TC 9.9 oversees Standard 90.4. That's what we're talking about today. And Standard 90.4 specifically focuses on data centers, so we're going to define that for our listeners real quick. A data center is a room or building that provides at least 10 kilowatts of energy to IT equipment.

    Thomas Loxley:

    But Emily, there’s so much more to know about data centers. Bob, I think most people know what their wifi router is and they know what a server is, but what makes data centers different?

    Bob McFarlane:

    Oh, there are so many things, Thomas, but data centers, also called datacom rooms, really are very special places. They require design considerations that most engineers unfortunately are not really familiar with, and that's partly because they change so rapidly. All of the heat comes from the equipment. It's entirely sensible heat. There's no latent component; that's kind of an anomaly to most mechanical engineers. The cooling units used are known as precision air conditioners. They have very high sensible heat ratios. The environment's very different from comfort cooling. The occupants are the ITE, the information technology equipment. So we are looking for inlet temperatures to that computing equipment, the ITE, of around 27 degrees Celsius, which translates to 80.6 degrees Fahrenheit, or higher. That makes the exhaust air from those pieces of equipment, which means the return air to the air conditioners, reach temperatures up to 110 degrees Fahrenheit, that's 43 degrees Celsius, or even higher than that.

    And now we have a very wide humidity range, from 8% relative humidity up to 70% relative humidity depending on conditions. We'll talk more about why that incredibly wide range later. But the real difference is the high power consumption. The power consumption of a data center is 10 to 100 times the power density of a standard office building. Single cabinets can range from 15kW to 60kW or even higher, whereas not all that many years ago a 5kW cabinet was unusual. Today, 25kW may well be the new norm. Enterprise data centers therefore may run from 100kW to 10 megawatts.

    Now looking at efficiency, just a 1% efficiency improvement in a 10 megawatt facility equals about 100 kilowatts, or 2,400 kilowatt-hours of energy per day, or at normal billing rates $7 million or more annually. If we go to the hyperscale data centers, the ones that are handling much of our cloud computing and Google and those kinds of facilities, those can reach a gigawatt of capacity.

    A 1% improvement in efficiency for one of these hyperscalers could be $700 million annually. Now, most data centers, as Emily mentioned, are classified as mission-critical, and mission-critical automatically means redundant equipment. But redundant design and mission-critical design are not the same. How the redundant equipment is configured determines whether or not it will actually prevent failures. Too often a lot of money is spent on redundancy simply by putting in duplicate equipment, but the actual design defeats its purpose. However, redundancy can create issues with energy efficiency. Therefore, a special standard is required for data center energy efficiency. Now, minimizing energy use has been a computing industry goal for many years. In 2007, an EPA report said that data centers used 1.5% to 2% of all US energy and were predicted to double that usage in only five years, which was unsustainable. It was said that that would require 10 new power plants, which couldn't possibly be built in that timeframe. So that's what makes data centers just totally out of the normal realm of mechanical and electrical engineering design.
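
    To put rough numbers on Bob's 1% example, here is a small back-of-the-envelope sketch in Python. The 10 MW and one-gigawatt facility sizes come from the discussion, the arithmetic is plain unit conversion, and the resulting dollar savings depend on the local electricity tariff, which is deliberately not assumed here.

        # Back-of-the-envelope sketch of the 1% savings arithmetic (unit conversion only;
        # the facility sizes are the ones Bob mentions, and no electricity rate is assumed).
        def energy_saved(facility_kw: float, improvement: float) -> tuple:
            kw_saved = facility_kw * improvement        # continuous power saved, kW
            kwh_per_day = kw_saved * 24                 # daily energy saved, kWh
            kwh_per_year = kwh_per_day * 365            # annual energy saved, kWh
            return kw_saved, kwh_per_day, kwh_per_year

        for name, kw in [("10 MW enterprise facility", 10_000),
                         ("1 GW hyperscale campus", 1_000_000)]:
            kw_saved, daily, yearly = energy_saved(kw, 0.01)   # a 1% improvement
            print(f"{name}: {kw_saved:,.0f} kW saved, "
                  f"{daily:,.0f} kWh/day, {yearly:,.0f} kWh/year")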

    Emily Toto:

    Bob, thank you for that thrilling explanation all about data centers. That just brings it into terms that I personally really never considered and it gets me thinking, what else can I learn about data centers or try and relate back to my everyday life? Because I hear megawatts, kilowatts and it sounds like a lot, but I'm trying to picture what kind of comparison could I make between data center energy use and a household item that I'm more familiar with and that's something that I use every day?

    Terry Rodgers:

    A typical oven that you would have in your household kitchen is probably between three and five kilowatts worth of heat when it's running wide open, when it's full on. So as Bob was mentioning, we have a rack of servers, a single rack that could have 25 to 30kW worth of load in it. So that right there is about six ovens in a single rack of equipment. A typical row, because we set all these racks in a row, could be as much as 10 racks in a row. So right there you're talking 60 ovens in a single row and that would come out to maybe 300kW of load. So if you have 10 rows, you're now at three megawatts worth of energy in a small data center.

    So the amount of energy that we use is extraordinary. As Bob mentioned, all of that energy is converted to heat, and so we have to reject that same amount of heat out of the building. So one challenge is to keep the power up and running 7 by 24 forever, but another is that you have to continually get that heat out as well. If either fails, not just the power but also the cooling, you're going to lose your data center in a hurry. So both the electrical and the cooling are considered close coupled to the load, and the transients from a loss of utility, whether it's power or cooling, are incredibly quick and often result in failures.
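
    Terry's oven comparison is easy to reproduce; here is a minimal sketch using the approximate round numbers he quotes (they are illustrative figures from the conversation, not values from any standard).

        # Reproducing Terry's "ovens per data center" comparison with his round numbers.
        oven_kw = 5           # a household oven running wide open, roughly 3-5 kW
        rack_kw = 30          # a dense rack of servers, roughly 25-30 kW
        racks_per_row = 10
        rows = 10

        ovens_per_rack = rack_kw / oven_kw        # about 6 ovens per rack
        row_kw = rack_kw * racks_per_row          # about 300 kW per row
        room_kw = row_kw * rows                   # about 3 MW for a small data center

        print(f"{ovens_per_rack:.0f} ovens per rack, {row_kw} kW per row, "
              f"{room_kw / 1000:.0f} MW for {rows} rows")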

    Emily Toto:

    Based on what you said, Terry, it just seems like we have to be so careful in our energy considerations when we're looking at data center design. I know this EPA report from 2007 provided some real evidence about how much energy these data centers were really using across our country.

    Marcus, when that report came out in 2007, how did that impact the data center industry and where do you think we're going as we've entered now 2023, so many years after this first report was produced?

    Marcus Hassen:

    Yes, Emily, that's an important point and I appreciate the question.

    A brief history lesson, I believe, is vital in understanding the how and why of where we are at. For many years the industry was concerned chiefly with uptime. The emphasis was building in resiliency, right? Making the requisite investments into the supporting infrastructure. Lowering operational costs was secondary. This actually began changing in the mid-'00s, initially by tackling what we call the low-hanging fruit in the critical environment: elements such as separation of supply and return air, sealing openings, and modest increases in supply air temperature, the inlet to the servers. Then the 2007 report dropped, with its declaration that a phenomenal share of global electricity consumption was going to data centers. That report was a catalyst, in my view, in that it prompted the industry and data center operators to be more diligent in pursuing energy efficiency and, perhaps most importantly, to recognize that availability and energy efficiency were not mutually exclusive concepts.

    The other byproduct of that report was that it served to place a bullseye on the industry for governmental entities such as the Department of Energy, assorted building code bodies, data center industry thought leadership groups, which were prompted to renew their research in this area, and non-governmental organizations focused on sustainability. Some of your listening audience no doubt may have been in attendance at the Uptime Institute's notorious 2010 symposium where Greenpeace was invited to present; let's just say that did not exactly go well.

    In aggregate, though, these were vastly positive developments, and with this confluence of events it then became only a matter of time before ASHRAE would be compelled to get involved. Regarding the 15-year fast-forward, as you said, 2007 to 2023, the growth in data centers has continued unabated, whether one views it through the lens of absolute growth, as some fraction of global energy consumption, or in the order-of-magnitude leaps we've seen in scale. Bob touched on some of this. It was not that long ago that a 50 megawatt build occurring over multiple phases of a master plan represented a large data center project. With the advent of the hyperscalers, a large build nowadays is in the neighborhood of a gigawatt or more. In terms of energy efficiency, though, this evolution has generally been a boon. That type of scale has the potential to dramatically reduce the energy required to perform a unit of compute or storage, which at the end of the day is what we're trying to accomplish here.

    Bob McFarlane:

    Yeah, the good news, Marcus, is that thankfully the dire predictions have not come true, in spite of the fact that computer usage and our demand for data have, as you say, gone unabated. Because the industry stepped up and made so many changes in ITE equipment design, as well as in the design of the power and cooling systems for the data centers, we didn't have to build those 10 new power plants. We've managed to stay under the EPA's predictions. Now, how much longer that'll last is a really good question, but the industry really has done an incredible job.

    Terry Rodgers:

    If I could add to that, ASHRAE has actually led a lot of the initiatives and the efforts that have resulted in us not continuing to follow that trajectory. It was ASHRAE's TC 9.9, as mentioned, that developed the original thermal guidelines. It's now on its fifth edition, and with each edition we have expanded the ranges and made free cooling opportunities more available. The IT industry started smart-sizing their power supplies and other things to make their equipment more efficient, and the raised inlet temperatures have allowed for higher delta-Ts, which has increased the heat transfer efficiency, et cetera. So the industry as a whole has met the challenge of improving efficiency, but it's not been done by any one group. It's been a consortium of ASHRAE, IEEE, the Green Grid, TIA, as well as the manufacturers of the gear that we use to cool the data centers, as well as the owners and the engineers who are designing more efficient sites. So there's been a lot of effort going into improving the overall efficiency of the industry.

    Marcus Hassen:

    If I could weigh in on that, I want to strongly echo that. A lot of this discussion thread, and some of what we're going to cover, to me is a case study for how ASHRAE and like-minded thought leadership organizations have successfully stepped up when faced with a development like this. I mean, we're talking about the emergence of a whole new building category. If we were to say that 90.1 is akin to the energy efficiency constitution, it held court for what, a good 30 years, but eventually all founding documents need amendments, right?

    I spoke of the low-hanging fruit in the earlier segment, and as Terry filled us in, it was ASHRAE, with the formation of Technical Committee 9.9 in 2002, and then TC 9.9 issued the industry's first thermal guidelines in 2004. Gradually others followed suit: the aforementioned Green Grid; the Uptime Institute, you know, the renowned owner-operator network for data centers, which was founded on the altar of reliability and availability and quickly became engaged in energy efficiency. The United States Green Building Council began formulating, believe it or not, a LEED for data centers. The equipment vendor and consultant community, eager to cater to a rapidly growing and relevant industry, began researching the issue, and we soon started seeing a torrent of white papers and products oriented to this aim, and I hope none of us lose sight of that.

    Bob McFarlane:

    And this illustrates, Marcus, how quickly the industry has evolved. If you look at some of the history, the concept of hot aisle-cold aisle, instead of cabinets just being arranged in rows facing the same direction, came from IBM in 1992.

    Then there was the development of aisle containment, and the introduction of the ASHRAE thermal guidelines that Terry mentioned. But the expansion of the humidity range came in 2005, and that was the result of ASHRAE research on static phenomena. We had been concerned for so many years about static discharge affecting and literally destroying information technology computing equipment. That research showed that we could go all the way down to 8% relative humidity in properly grounded rooms without any real concern about static discharge affecting the equipment. The difference that makes in energy use is enormous when you don't have to keep humidifying the room.

    Then we had gaseous contamination issues come up, and we capped the maximum at 60% relative humidity, later changed to 50% or 70% depending upon contamination levels. We encouraged monitoring of dew point or absolute humidity. Now we're adding liquid cooling to the thermal guidelines. Then there was the introduction of the PUE metric, the power usage effectiveness metric, by the Green Grid in 2006; that became a global ISO/IEC standard in 2016. There are so many different things that have happened, and they've happened so fast. It really is a little bit difficult to keep up with it all.

    ASHRAE Journal:

    Do you work in the data center industry? Are you a 90.4 user or designer? We welcome you to join us at our next committee meeting on March 31st and the annual meeting in Tampa, Florida in June. Please apply to join SSPC-90.4 by going to the ASHRAE website at ASHRAE.org/membership/join. We have new opportunities for volunteers to contribute to the standard by joining our mechanical, electrical or environmental and sustainability working groups. Stay up to date with SSPC-90.4 by visiting the 90.4 webpage and signing up for our listserv.

    Emily Toto:

    What's amazing about this group is you don't even need to be interviewed. What are we doing? You've answered the questions that we didn't even know we had and we've learned so much already. Thomas, I have a couple more questions. I don't know if you want to go back to anything that you had planned on asking, but I think this group of incredibly brilliant data center scholars already answered most of my data center questions. How about yourself?

    Thomas Loxley:

    I think I'm good. I would like to dig in a little deeper, I think on some of the environmental ranges on the data centers. Why is humidity an important factor and why is temperature such an important factor?

    Terry Rodgers:

    I can maybe help with that. In the original days of data centers, I'm going back to the '90s again, we had a lot of tape storage and a lot of devices that read metallic tape, and these things were very susceptible to electrostatic discharge; also, if it got too moist, they would wear out the heads rapidly, and they went for like $100,000 apiece. So these were major considerations. It wasn't just a matter of losing data, you could lose equipment because of that. Over time, though, we got away from the tapes, though some still exist today, and they have some special requirements. You don't want to go to 8% relative humidity if you're still running tape drives, for example. But most of the IT equipment, including storage, has gone to hard drives and then actually to flash drives. They don't even have rotating parts, they're just like computer chips, so they don't require those types of humidity conditions.

    But traditionally the data centers, and some legacy data centers to this day, still think that you have to manage the humidity between 40% and 55%, something like that. And ASHRAE found that that was an opportunity to expand that range so that people aren't unnecessarily dehumidifying or humidifying environments, which is a waste of energy. In addition to that, ASHRAE TC 9.9 has an IT subcommittee made up of the IT manufacturers, and they got together and started taking their equipment into labs and testing it to see just how robust their equipment was to withstand the humidity ranges that we were predicting. That ended up with the ASHRAE research that Marcus was mentioning, I believe, where ASHRAE TC 9.9, through an RTAR and research approval, got the University of Missouri, if I'm not mistaken, to do a study on how IT equipment would be affected by low humidity. And that's where we realized that with the proper grounding and without the tape drives, you could go down as low as 8% relative humidity. Our guidelines are a little higher than that, we put some margin of safety in, but the allowable ranges are quite wide, and so basically by putting science to it, we came up with some bigger and better bands or ranges of values that you can operate in, which resulted in a lot of energy efficiencies.

    Bob McFarlane:

    A lot of this was not really based on science many years ago. It was based on, well, it wasn't quite old wives’ tales, but it wasn't too far from it. But the other thing that you didn't mention, Terry, was printing. In the old days of mainframes, there were these big impact line printers and not only did paper require definite humidity control, but the amount of dust contaminants that came out of those printers was incredible.

    Emily Toto:

    Do I still need to ground myself at the gas station? That's what I really needed to know.

    Bob McFarlane:

    It's really good practice for technicians who open equipment cases to use a grounding strap. It's just still good practice.

    Emily Toto:

    That's a good note on safety. We've heard a lot about how data centers have changed pretty drastically over time, and I'm wondering if that changes the way that data centers have looked. Personally, I think about the very cold closet in the office hallway. I know you've likened that to a meat locker in the past, and we've heard that that's changed. Has the actual look and feel and just architecture of the data center changed over time as well?

    Terry Rodgers:

    Absolutely. Irrespective of the IT equipment itself, the data centers have changed quite a bit. Initially they just put data centers in rooms, and then eventually they said, well, this is all changing really quickly, we need to have a better way of managing it. So they came out with raised floors, and then they could put the power under the floor and they could move floor tiles around so they could put the air where they needed it. And so it became a very flexible and adaptable environment. Then, as we talked about, we just had all the racks in there facing the same direction, and IBM came up with the hot aisle-cold aisle concept. So we started to reverse the direction of every other row so that you could put cold air in the middle and it would be pulled into the inlets of the two adjacent rows, which would then exhaust into hot aisles, with the facing rows exhausting into the same place.

    And that helped get us some significant hot air-cold air separation, which gave us higher delta-Ts, which improved the efficiencies and the capacities of the equipment that we were using. Ironically, nowadays we've gotten away from the raised floors, and I would say probably more than half the data centers built today are built right on slab. They have ducted cold air into the cold aisles, and now we use containment so that air can't get out of the cold aisle other than to go through the IT equipment and then be exhausted into the hot aisle. So we have real, true hot air-cold air separation, with physical barriers between the two. That eliminated recirc at the rack level, where hot air was getting pulled back into the cold aisle, and it eliminated recirc at the air conditioners, where cold air just goes right back to the air conditioner. So it's made the whole cooling lineup much more efficient.

    So then we've actually gotten to where we're pushing the envelope on what we're able to cool with air, even cold air, quite honestly. Kind of surprisingly, I've seen a proposal for a newly rated piece of IT equipment that needs colder inlet air, which is like a step backwards, and that's actually going to push our movement further into liquid-cooled electronics. That's where we actually take liquid, whether it's a refrigerant, a dielectric or water, some kind of liquid, directly to the chips. Now of course, we're not putting water on the chips; they have heat sinks within the chassis and we put the water to the heat sink, but basically you eliminate all the fans and all the air-cooled equipment and you remove all of the heat with liquid cooling. And that's being driven by the increasing density and performance and the need to cram more power into smaller spaces.

    Marcus Hassen:

    When we talk about change, I think we're all very aware that that's the constant, but what we see is that those transformations only become more rapid, and as ASHRAE standard committee members, our job is to stay in front of that. Take the average data center, or even a definition of a prototype data center. Just having Terry walk us through some of the variety in cooling system technologies, it's challenging to quantify what a prototype data center would be. I would characterize it as a continuum, though. We're all probably familiar with one end of that continuum, the proverbial closet that doubles as a space for networking servers in the typical office building; I suspect, Emily, Thomas, you have one of these at 180 Technology Parkway, so we're on the same page here. But that continuum ranges as far and is as varied as the large multi-campus data center builds that Bob touched on when he talked about the hyperscalers, to what some are describing as the edge data center. I'm surprised we're halfway into our discussion today and the edge data center hasn't been mentioned.

    And that could be a single rack located at the bottom of a cell phone tower. I think what's important here, though, is that all things considered, 90.4 and 90.1 got it about right in attempting to assign some method to the madness: the 10kW threshold, with 20 watts per square foot as a density, being the dividing line between a computer room and a data center. I think that's the appropriate way to break it down.

    All that said, it's more useful to look at it by business model type, because we have seen clearer direction there over the last 15 years, where the industry has aligned across three fundamental models. I think we're all familiar with these: enterprise, colo and cloud. And while there can be dramatic differences in data center vitals across these categories, rack densities, favored cooling technology, system capacities, 90.4 in my view ably addresses each of these models. Staying abreast of the industry in terms of the business model evolutions is likely where 90.4 can best provide the most value and maintain its relevancy in the eyes of its constituents.
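
    Marcus's 10 kW and 20-watts-per-square-foot thresholds, mentioned a moment ago, can be expressed as a toy classifier. This is only a sketch of the dividing line as he describes it; the example rooms are invented, and the standards' actual definitions and boundary conditions govern.

        # Toy classifier using the thresholds Marcus quotes (10 kW of ITE power,
        # 20 W/ft^2 of density). The example rooms are invented; consult 90.1/90.4
        # for the actual definitions.
        def classify_space(ite_kw: float, area_ft2: float) -> str:
            density_w_per_ft2 = ite_kw * 1000 / area_ft2
            if ite_kw >= 10 and density_w_per_ft2 >= 20:
                return "data center"
            return "computer room"

        print(classify_space(ite_kw=8, area_ft2=400))      # small networking closet
        print(classify_space(ite_kw=250, area_ft2=2_000))  # dense data center whitespace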

    Terry Rodgers:

    Like Marcus said, I think we picked a very reasonable line, because we're not trying to manage every single rack in every little closet in every little building. 90.1 does that and does it very well. 90.4 is really geared towards when you get at least enough IT load and enough density that you have to start having different energy models; you need different HVAC equipment that's all sensible cooling and has the delta-Ts that you're looking for, et cetera. The end result is that 90.1 has actually incorporated 90.4 as an alternative compliance path and therefore married the two as related documents that interface with each other but still address the unique perspective of data centers and their need for different regulation.

    ASHRAE Journal:

    Introducing the new Carrier AquaSnap 30RC air cooled scroll chiller featuring Greenspeed intelligence. Designed for best-in-class energy efficiency and quieter operation within a tiered approach, the all-new 30RC offers a broader operating range and design flexibility. After all, confidence and superior performance across all types of environments is a big deal—even in small spaces.

    Bob McFarlane:

    Terry, you mentioned earlier the fact that we are now basing things more on science, and we keep talking about these higher inlet temperatures of 27C, or 80.6 degrees Fahrenheit, to the servers. That number was not just pulled out of the air, and it was not just because manufacturers said their servers could tolerate it; their servers can actually tolerate more. The reason that number was picked is because that's the point at which most server fans start ramping up considerably. And of course we understand the affinity laws, where if you double the speed of the fan, the energy used goes up with roughly the cube of that speed. So we're again trying to minimize the total energy usage, the energy used by the server fans as well as the energy needed for cooling.
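
    For reference, the fan affinity laws Bob invokes say that, for a given fan and system, airflow scales with fan speed, pressure with the square of speed, and power with the cube of speed. A minimal sketch of that scaling:

        # Fan affinity laws (ideal fan, fixed system): flow ~ n, pressure ~ n^2, power ~ n^3.
        def scale_fan(flow, pressure, power, speed_ratio):
            """Scale a fan operating point by a speed ratio using the affinity laws."""
            return {
                "flow": flow * speed_ratio,
                "pressure": pressure * speed_ratio ** 2,
                "power": power * speed_ratio ** 3,
            }

        # Doubling server-fan speed costs roughly 8x the fan power, which is why
        # inlet temperatures are held near the point where the fans start ramping up.
        print(scale_fan(flow=1.0, pressure=1.0, power=1.0, speed_ratio=2.0))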

    Emily Toto:

    Thomas, what do you say we get into the details of 90.4?

    Thomas Loxley:

    Yeah, let's switch gears. I'm going to pose this question for the group since we have both HVAC and electrical professions represented here today. Standard 90.4 is a performance standard and not a prescriptive set of requirements. Can you all tell us what that means in terms of the designer and what they look for when faced with building a data center?

    Bob McFarlane:

    Well, there are just so many options available that prescriptive standards limit both innovation and the designer's ability to use the latest and most appropriate solutions for each situation. Let's look at the cooling options that we have available for data centers today. Terry mentioned under-floor and then going to overhead. We also have in-row coolers, cooling units that intermingle with the cabinets in the row; self-cooled cabinets; liquid cooling options, direct to chip; and rear-door heat exchangers. Heat rejection can be a central chiller; compressorized air conditioners, either condenser water or refrigerant; dry coolers; cooling towers; and adiabatic or evaporative cooling. Any combination of these can be used in designing a data center.

    So if you try to prescribe to a designer how to design the thing and what should be used, you've grossly limited their options to do what's best for the client, best for the operation and best for energy efficiency. Power also has multiple delivery options like modular UPSs, in-row power distribution, overhead power busway, and a plethora of in-cabinet intelligent power distribution units that we call IPDUs. So with all these choices, plus the heat density and the reliability demands, prescriptive standards just don't make sense. Only a performance-based standard is really usable.

    Marcus Hassen:

    To add on to Bob's point, I'm going to attempt a sports analogy, as tortured as it may end up sounding. Baseball is a team sport, no? And to win the game, you need contributions from each position for sure, but baseball is unique as a team sport in that it also features a game within the game, each play beginning with the ultimate showdown between pitcher and batter. And I'm using the analogy because it's a similar dynamic, if you think about it, with data center design.

    In a data center, the design is dominated by two things: the cooling system technology and the efficiency of the electrical distribution system. And that's why fixing the standard around a mechanical load component and an electrical loss component is a logical construct. Going back to that pitcher/batter dynamic, yes, there are a lot of things involved in the MLC, all the mechanical loads, pumps, fans, motors, refrigeration equipment, but for the MLC it is the selection of the cooling technology that influences so much of the rest of the mechanical design. And that same dynamic plays out with the ELC; this time it's the electrical engineer's selection of electrical distribution equipment, such as the UPS, which overwhelmingly drives how the ELC is going to play out.

    So I think what Bob walked us through, why a performance standard was the only route to go, is only reinforced by the need to have flexibility in the designer's toolkit, because the industry changes so rapidly, and we've heard a lot of that story in what we've discussed so far.

    Thomas Loxley:

    So Terry, do data centers frequently employ economizers in their HVAC equipment and if so, is there a particular type that's the most common?

    Terry Rodgers:

    Absolutely. Data centers are definitely seeking any opportunity to do free cooling, and economizers are the primary way to do that. The two flavors would be airside, where we bring a hundred percent outside air into the data center and reject the air back out, and waterside economizers, similar to what you would see in most commercial office buildings, et cetera.

    The big thing about economizers for data centers is that if they're not designed and controlled properly, they can actually add significant risk, not just to the operations but to the IT equipment itself. You don't want to have condensation occurring in the equipment, you don't want the equipment to overheat, but you also have challenges with rate of change of conditions. If it heats up or cools down too quickly, that can thermally stress the IT equipment as well. There were actually some data centers that experienced significant disasters trying to implement economizers and having failure scenarios, and that was one of the big objections to ASHRAE 90.1 when it eliminated the data center exemption. The data center industry was like, whoa, wait a minute, this can be very harmful to us from a reliability standpoint, which of course trumps efficiency in most mission-critical facilities.

    Emily Toto:

    Can you talk to us more about the MLC and the ELC and how we go about calculating those values just in kind of a general sense and how they work together? Because you mentioned that you can use a trade-off strategy.

    Marcus Hassen:

    Yeah, Emily, thanks for steering us back to covering the core elements of this standard. I'll start with the MLC, as Bob defined the acronym, the mechanical load component. Just imagine all of the HVAC equipment in the facility: we're talking refrigeration equipment, fans, pumps, motors, drives, humidifiers, cooling tower fans, even down to the detail of rejecting the heat that is thrown off by the UPS modules. The MLC models all of that, and based on equipment that's available in the industry and best design practices, it assigns a maximum value for the MLC and sets a baseline, much like ASHRAE 90.1 does for the energy efficiency of a building.

    The one place where the MLC and the ELC differ is that cooling loads and cooling technology do vary by geography. The acceptable MLC values therefore vary based on climate zone, and that's all very well detailed in 90.4: which particular MLC maximum applies to which climate zone.

    Shifting to the ELC, which is the electrical loss component. At first blush you might say, well, wait a minute, when we talk mechanical we say load, but electrical is loss, why are we doing that? That is the fundamental nature of the electrical distribution system: what you're trying to do is manage a series of losses, the losses you see in an AC system, the various transformations, the I-squared-R losses from running current through a conductor. The task there is managing the electrical distribution system from a loss standpoint.

    In 90.4, the ELC is modeled on the most common segments of data center distribution, where the UPS segment is the most significant one as far as being able to manage the losses by way of the UPS module selections one makes. It also addresses, as Bob talked about earlier, the inherent redundancy in data center systems, where you have diverse power paths, and certainly that's the case here. But the ELC does not attempt to run every permutation or scenario. What it does is select the worst case: whatever the highest-loss, least efficient path is, that is what you're going to have to assume in determining your ELC.
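
    As a purely hypothetical illustration of the worst-case-path idea Marcus describes (the path names and segment efficiencies below are invented for the example; they are not values, segments or formulas from Standard 90.4), the calculation amounts to chaining segment efficiencies along each redundant power path and assuming the least efficient one:

        # Hypothetical illustration only: chained losses along redundant power paths.
        # The segment efficiencies are made-up examples, not figures from 90.4.
        from math import prod

        paths = {
            "path_A": [0.99, 0.96, 0.985],   # e.g., switchgear, UPS, PDU transformer
            "path_B": [0.99, 0.94, 0.985],   # a less efficient UPS on the redundant path
        }

        # A path's overall efficiency is the product of its segment efficiencies;
        # its loss is the fraction of power that never reaches the ITE.
        losses = {name: 1 - prod(effs) for name, effs in paths.items()}

        # The highest-loss (least efficient) path is the one assumed for the calculation.
        worst = max(losses, key=losses.get)
        print(f"worst-case path: {worst}, loss fraction: {losses[worst]:.3f}")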

    Bob McFarlane:

    Yeah, and I'd like to mention, Marcus, that when we were creating Standard 90.4, there were objections raised to our using new metrics instead of the PUE, which everybody by then knew about. And there were even some articles written by people who unfortunately didn't talk to us in advance, criticizing that decision. But the PUE is a measurement metric. You have to have measurements of actual power draws in the data center to calculate the PUE. In design, there's nothing to measure, as every engineer knows. So trying to predict a PUE would require thousands of calculations, and it would still be wrong. Even worse, owners would probably expect that PUE to be met in actual operation, and we all know it would not be. The 90.4 standard is meant to enable designers to meet efficiency requirements; AHJs have no power to enforce how the facility is ultimately operated. So the new metrics, the MLC and the ELC, make a lot more sense from a design standpoint.
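
    For comparison, the PUE metric Bob mentions is a measured ratio: total facility energy divided by the energy delivered to the IT equipment. A quick sketch with illustrative numbers (not figures from the episode):

        # PUE (power usage effectiveness) is a measurement metric:
        # total facility energy divided by IT equipment energy, both measured in operation.
        def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
            return total_facility_kwh / it_equipment_kwh

        # Illustrative annual figures only: 12,264 MWh for the whole site against
        # 8,760 MWh delivered to the ITE gives a PUE of 1.4.
        print(round(pue(12_264, 8_760), 2))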

    Terry Rodgers:

    Some of the air handlers that we're talking about, they're an entire floor of a building. Most people would think of an air handler as a piece of equipment that you walk up to and it has a casing and a bunch of stuff on the inside.

    In some of these data centers, these large ones, the data center is on the first floor. On the second floor is the mechanical cooling equipment. Basically you walk into the first room, and on one wall is all outside air louvers and the floor is the grating where the hot air's coming up from the data center below, and that's your mixing box. You walk around that into the next room, and the back wall is basically your filter bank. You walk around that into the room between the two, and you're now between the filter rack and a fan wall. Then you go to the next room, and you're between the fan wall and the cooling coils, and then you leave that and go into the next room, and you've got the supply air being ducted back down to the data center or being released out the back of the building.

    So basically the entire second floor of these buildings is the air handler. It's not something you can just go to a manufacturer and say, well, give me a cut sheet that shows me how efficient this is. So part of the challenge is being able to do calculations on expected efficiency for non-standard equipment, stuff that you can't even put in a lab and test. There is no standard SEER rating, for instance, on something like that. You basically have a built-up unit, and you have to calculate each of the components and losses, et cetera, and adjust it. So that's just an example of how data centers are so unique compared to a typical commercial office building.

    Emily Toto:

    And would you say the standard is more robust because it requires calculations at different percentages of the data center's overall capacity?

    Terry Rodgers:

    I'm not sure if robust is the right word, but it's definitely thorough. We took into account that the load profiles may never even be met. Many data centers are built with an ultimate rating, and you'll go in them 10, 15 years later and they never got to that rating, so they will never get beyond 50% or 75% of their total rated load. So actually the lower profiles, the 25% and 50%, are very significant, because data centers will probably run in that range for most of their life.

    Bob McFarlane:

    And I'll mention also, Terry, that on top of what you said, we do all of our calculations based on the ITE design load. Now the ITE design load for a UPS, for example, is usually about 80% of the UPS rated capacity. So if we have a 100kW UPS and an 80kW design load, we're not doing all of our calculations on 100kW, we're doing them on 80, and the percentages therefore take us down to lower and lower regions of the UPS curve, which means reduced efficiency. So we're really, as you said, trying to take a very practical and realistic approach.
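
    Bob's point about calculating at percentages of the ITE design load, rather than of the UPS rating, can be shown with his own 100 kW / 80 kW example; a minimal sketch:

        # Compliance calculations run at fractions of the ITE design load, which Bob
        # notes is typically about 80% of the UPS rated capacity.
        ups_rating_kw = 100
        design_load_kw = 0.8 * ups_rating_kw       # 80 kW ITE design load

        for fraction in (0.25, 0.50, 0.75, 1.00):
            load_kw = fraction * design_load_kw
            ups_loading = load_kw / ups_rating_kw  # the UPS sees an even lower fraction
            print(f"{fraction:.0%} of design load = {load_kw:.0f} kW "
                  f"({ups_loading:.0%} of UPS rating)")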

    Thomas Loxley:

    I have a question for the three of you, because Standard 90.4 allows trade-offs between the MLC and the ELC. Would you say it's more flexible this way, or is it more difficult because of all of the calculations that are involved?

    Terry Rodgers:

    In my opinion, it is very practical, and it was basically written by people who are in the business of using these types of documents. They are very familiar with 90.1 and what engineers have to do to get permitted drawing sets, to get permission to build buildings. Another reason that we chose to use the trade-off concept and an overall efficiency requirement is because, as we mentioned earlier, data centers are somewhat unique in that they're typically built for some ultimate load, let's say 10 megawatts or something. But on the day that they go live, when they've been completed, tested and commissioned and they start operations, they may only have a very small percentage of that load on day one. It could be less than 10% or so. Because we have all this redundant equipment, we can take advantage of the affinity laws and run the mechanical equipment in a very efficient manner at these very small loads.

    But unfortunately, on the electrical side, the systems are typically very inefficient at these low loads, and so the trade-off helps you in that regard. Then, as the loads increase over time, the electrical systems become more efficient and the mechanical systems become less so. So the trade-off works throughout the entire load profile, and it will help get very good designs approved and built, designs which will probably not see the ultimate load for many years. So that was another big factor, and that's a reason why we require the compliance calculations to be done at 25%, 50%, 75% and 100% of the design load.

    Bob McFarlane:

    That just emphasizes why 90.4 was developed as a design standard, because the design industry just does not have all this information on day one. The equipment that the user is running when the design starts may be different by the time the data center is actually built. Some of it may not even have been invented at the time the design starts, because the equipment changes so rapidly in the IT industry: a three-year turnover on average, in some places even 18 months.

    Terry Rodgers:

    So one of the big advantages 90.4 has is being a performance-based standard, which allows ultimate flexibility on the engineer's part to come up with innovative solutions, provided they meet the minimum energy requirements that we set for data centers in the 90.4 standard. What I'd also mention is that the vast majority of data centers being built today will meet 90.4 with no problem. These are owners, as we mentioned before, who are saving millions of dollars in energy for each 1% improvement in efficiency.

    What we're really targeting here are the legacy data centers and the smaller data centers, or regardless of size, the people who are perhaps not as up on the available solutions that we have today, who continue to try to design data centers like they were meat lockers 20 years ago and are basically energy hogs, highly inefficient. So that was really the target: we wanted to make sure they achieve at least a minimum amount of efficiency. But the vast majority of data centers today will have no problem meeting 90.4.

    Bob McFarlane:

    I think another important point of those separations and trade-offs, Terry, is that upgrades to data centers and renovations are not always done in total. You may upgrade your UPS or part of your electrical system, or you may have to add air conditioning. The standard allows you to do that, as long as you take into account the efficiency of the existing equipment as well.

    Terry Rodgers:

    Yeah, I don't think we mentioned that the scope of 90.4 is not just new construction and new buildings; it applies to the renovation, expansion and update of existing facilities as well.

    Emily Toto:

    Thanks for bringing that to light, Terry. I want to ask Marcus another question, just kind of a look back from 2016, when the standard was first published, until now. Where do you think 90.4 stands, not only within ASHRAE and our family of standards, but in the industry overall?

    Marcus Hassen:

    As the new kid on the block, my view is the standard has acquitted itself relatively well, and believe me, I'm very dialed in to 90.4 and its relevancy and standing, right? I think that's something Thomas can attest to, with the tantrums I've been known to throw when an industry white paper comes out on the subject and conveniently fails to mention or reference 90.4. But it's an encouraging story: in the space of six and a half years, we've seen three editions. The 2022 edition just dropped, what, last week? We've seen a standard and a standing project committee that has been responsive to a rapidly changing industry and the accompanying needs of its constituents. Just a couple things to illustrate that: we've seen continuing refinements in both the MLC and the ELC, not only in each three-year edition, but via addenda to the standard before it's updated for the three-year release.

    Twice we've tightened the UPS segment of the ELC as UPS systems became not only more efficient but increasingly efficient across a wider range of loading levels. That's particularly important in the data center industry, where, due to the redundancy, you usually don't approach the high end of the curve where the systems have traditionally been most efficient. Incentives for heat recovery and onsite renewable energy have been adopted as these sustainability strategies have gained favor in the industry. And in implementing this, there's been, in my view, phenomenal collaboration with both the end user communities and vendors in crafting these updates so they meet the needs of the industry. Then I'll throw out a trio of developments of considerable import as far as standing in the industry and relevance: the state of Washington adopting 90.4 for data centers in July of 2020; international energy code adoption in 2021; and, this was a big one, 90.1 adding 90.4 as an alternative compliance path for data center projects in its 2019 edition. We had pushed open the door as a standard, but I think we fully entered the room, if you will, with having that available to the design community.

    Terry Rodgers:

    If I could add to that, most states do not adopt the most current version of a code. Many states are on a three-year delay, a three-year lag. So the 2019 version is probably being adopted by many states in 2022 and 2023. As much influence as 90.4 has had, it's about to have much more influence as it gets adopted by more and more states.

    Marcus Hassen:

    And then, Emily, to close on the original premise of your question, I think going forward the 90.4 committee continues to be committed to including diverse voices from throughout the industry for input. I think that's illustrated by our key working groups, electrical and mechanical, but we also have marketing and newly formed ESG working groups. And this is very important to stay abreast of what we've characterized as increasingly rapid transformations, not only in the underlying technologies in the industry, but also in best practices. If managed effectively, working groups are the gateway to bring in more industry participants and the essential diverse perspectives that are needed to keep a standard relevant. And I was particularly encouraged that in just the last 18 to 24 months we've seen ASHRAE taking very bold steps at harmonization efforts across the ASHRAE ecosystem, particularly around building decarbonization, and 90.4 was one of the committees that task force reached out to to help and join that harmonization effort. So again, as the new kid on the block, we're doing quite well, and there's a lot of important work ahead of us.

    Emily Toto:

    I completely agree. There's so much to look forward to with 90.4. Thomas, it's sad to say that we're going to have to wrap this thing up.

    Thomas Loxley:

    I know Emily, we've covered so much in a very short amount of time. Obviously I think we could go on for another two or three hours discussing all of the things in and around data centers and what the future holds for them. But today we covered all the basics of data centers and then we really dug deep into standard 90.4 and what 90.4 can bring to data centers to help make them energy efficient.

    Emily Toto:

    We just thank you so much, Terry, Marcus, and Bob for lending your expertise today and also just in general for making a commitment to ASHRAE throughout all these years. And could the three of you send us out with some closing remarks?

    Terry Rodgers:

    Sure. First, I want to thank you for this opportunity and putting all this together. I'd like to thank our audience for allowing us this opportunity to talk about data centers and 90.4, and I would encourage everybody to look at the February ASHRAE Journal where we have an article published on data centers and Standard 90.4.

    Marcus Hassen:

    Yeah, Thomas, Emily, many thanks for the platform and the invitation, and I would be remiss if I didn't encourage your listening audience to check out the brand new Standard 90.4 website and join us for our next committee meeting, which is happening next month.

    Bob McFarlane:

    And by all means, we've talked about Technical Committee 9.9, TC 9.9, and you can contribute to the guidebooks, such as the thermal guidelines we've talked about, simply by being involved in that incredible technical committee in ASHRAE.

    Thomas Loxley:

    Gentlemen, thank you so much for your time. It's been a pleasure.

    ASHRAE Journal:

    The ASHRAE Journal Podcast team is editor, John Falcioni; managing editor, Kelly Barraza; producer and associate editor, Chadd Jones; assistant editor, Kaitlyn Baich; associate editor, Tani Palefski; and technical editor, Rebecca Matyasovski. Copyright ASHRAE.

    The views expressed in this podcast are those of individuals only, and not of ASHRAE, its sponsors or advertisers. Please refer to ashrae.org/podcast for the full disclaimer.
