
ASHRAE Journal Podcast Episode 49


Justin Seter, Associate Member ASHRAE, Dustin Demetriou, Member ASHRAE, David Quirk, Member ASHRAE, and Tom Davidson, Member ASHRAE

Liquid Cooling in Data Centers, Part 2

Join host Justin Seter, Associate Member ASHRAE, along with guests David Quirk, Member ASHRAE, Tom Davidson, Member ASHRAE, and Dustin Demetriou, Member ASHRAE, as they provide an update on the state of liquid cooling in data centers. Topics of discussion include research on liquid cooling resiliency, S classes and their importance, and standardization as it pertains to the rapidly evolving data center industry.

Have any great ideas for the show? Contact the ASHRAE Journal Podcast team at podcast@ashrae.org

Interested in reaching the global HVACR engineering leaders with one program? Contact Greg Martin at 01 678-539-1174 | gmartin@ashrae.org.

Available on:  Spotify  Apple Podcasts  
And other platforms.


  • Host Bio

    Justin Seter, Associate Member ASHRAE, is the Strategic Initiatives Director at DLB Associates. He has 20 years of experience in the data center industry and is a past chair of ASHRAE TC 7.9, Building Commissioning.

  • Guest Bios

    Dustin Demetriou, Member ASHRAE, is a globally recognized expert in the field of data center thermal management and energy efficiency. He is the Chair of the ASHRAE TC 9.9 IT Subcommittee, an Accredited Sustainability Advisor with the Uptime Institute and an ASHRAE Distinguished Lecturer.

    David Quirk, Member ASHRAE, is the President and CEO of DLB Associates and an entrepreneur dedicated to solving complex challenges in the built environment, with over 25 years of experience in the mission-critical industry. A licensed PE in 48 states, Certified Energy Manager and LEED Accredited Professional, David has chaired ASHRAE TC 9.9 and contributed to key industry committees.

    Tom Davidson, Member ASHRAE, is a Professional Engineer and works as a Senior Mechanical Engineer at DLB Associates in New Jersey. He is currently a Corresponding Member of TC 9.9 and a co-author of ASHRAE 1972-WS, a proposed research project related to liquid cooling resiliency and energy efficiency.

  • Transcription

    Justin Seter:

    All right, welcome everyone to the latest episode of the ASHRAE Journal Podcast: Liquid Cooling in Data Centers, Part 2.

    So if you haven't listened yet to Episode 44, which came out here in January of 2025, a lot of good background information there on the history of how the industry sort of got to where we currently are and some longer intros from our distinguished panel who we have back on the podcast today. So we're going to do super short intros today for the sake of getting right into the good stuff. So we'll just go around real quick.

    My name is Justin Seter with DLB. Been in the industry for over 20 years and have worked in data centers essentially the entire time.

    Dustin, you want to go next?

    Dustin Demetriou:

    Yeah, thanks Justin. I'm Dustin Demetriou, current chair of the ASHRAE TC 9.9 IT Subcommittee. I've also been in the industry a little over 15 years, really focused on both mechanical systems as well as IT equipment.

    Justin Seter:

    Great. David?

    David Quirk:

    Hi. Yeah, Dave Quirk, president and CEO of DLB Associates. Been in the industry over 25 years. I'm a past chair of ASHRAE TC 9.9 and a current voting member of that committee.

    Justin Seter:

    Great. Tom?

    Tom Davidson:

    My name's Tom Davidson, I'm a mechanical engineer at DLB Associates. I've worked in the data center industry for over 20 years. I've been both a voting and corresponding member of TC 9.9, and I'm set to join 90.4, the data center energy committee, shortly.

    Justin Seter:

    All right, so let's jump right into it here. So this is going to probably publish here in June of 2025. So why two podcasts on this topic in six months? And the short answer is, man, is it changing fast.

    So we have a lot of updates for you today on publications, research that's been updated and then sort of current state of the industry and where we're headed. So thought it was timely to go ahead and jump back in here for an industry update. Let's talk about where we're at, current state. Dave, I'm going to pass to you first. What do you see as the pressing current updates, and what's missing and that kind of thing?

    David Quirk:

    Yeah, thanks Justin. You said it. We need to acknowledge a lot of things have changed. Not only in the short time since we did this last podcast, but just in general in the industry.

    And I think the statement goes more to the point that we as an industry need to really change our approach. We've essentially brought clean rooms into the data center, figuratively speaking, and we need to figure out a different way of going about it in order to meet the level of rigor that's required within a liquid-to-the-chip TCS system.

    This takes me back a little ways to the initial introduction of commissioning into the industry. I remember when everyone would ask, why do you need to do that? Isn't that the contractor's responsibility? Fast-forward to today, and nobody asks that anymore. Nobody challenges it; they understand there's a real need for that service and for that level of additional QA/QC on project execution. Here we are again, history repeats itself, and it definitely is repeating in this case.

    So I think there's a big gap right now on the guidelines and standards that we need specific to data centers. It's not like any of this stuff doesn't already exist. As examples, there are various ASTM standards, like A380 and A967, I think, that cover cleaning, descaling and passivation of stainless steel parts and piping.

    But what is lacking from those other industry standards, which have been out there for a long time and applied to other process applications, is: what are the acceptable thresholds and the pass/fail criteria, and how do we adapt the rigor of some of those standards to the fast pace of the data center industry and, I'll call it, arguably, the higher cost-pressure model of the data center industry?

    And that's really the part, I think, that's the problem child right now. What we've yet to see emerge is somebody coalescing all of those industry standards and helping prescribe or define those acceptable thresholds and pass/fail criteria, and then how the industry adopts that for a given circumstance when they may have multiple boundary conditions and multiple stakeholders. Or they may not, if it's just a hyperscaler executing the project.

    So again, I'll recap: we've acknowledged that things have changed. We really have to go back to the early days of commissioning; we've got another version of that now in front of us.

    We need new guidelines and standards. And then last but not least, we need some more research. We've got a lot of gaps. The industry hasn't done this at scale for this type of application before, so we're in new territory, and there's research needed to answer some questions we can't otherwise answer. We either do real-world empirical data collection or modeling, or a combination of both, and research is the way to get from here to there. So yeah, that's what I think is currently the state of things and where we have the gaps.

    Justin Seter:

    Well, I'm glad you brought up research because I won't steal your thunder here, Tom, but do you want to go ahead and give us the latest update on the most pressing research related to this topic and the recent news, and then maybe a little bit of a deeper dive on what answers we're looking for out of that particular project, that kind of thing?

    Tom Davidson:

    Sure. Thanks Justin. Yeah, I'll provide a short introduction to a proposed ASHRAE research project. It's called Data Center Direct-to-Chip Liquid Cooling Resiliency Failure Modes and IT Throttling Impacts, and then there's a semicolon, and it's also Liquid Cooling Energy Use Metrics and Modeling.

    So first, why was the research project proposed? Well, the main driver is that data center designers are being asked to design resilient and energy-efficient liquid cooling data centers for IT equipment for which certain key technical data, such as the maximum allowable rate of rise and the time constant associated with IT processor throttling following a flow failure, are not yet available. Some research on this was published in 2018 by Binghamton University, using what I'll call medium-density IT processors, which had a thermal design power up to, I think, 160 watts. Processors today have power output up to 750 watts, so almost five times the power of the processors studied in the 2018 paper.

    So it's reasonable to expect some of the time constants have changed, and we obviously need to design for that. Now, one significant change that will need to be made to the work statement before it goes out to bid is based on a very recent change in what's called the ASHRAE S classes. Within the last month, the range, which was S-30 to S-50, has expanded to S-20 to S-50. Now, the 20 in S-20 refers to the temperature, in degrees Celsius, of the inlet to the IT equipment. So S-20 refers to 68 Fahrenheit, while S-30 corresponds to 86 Fahrenheit. That's a difference of 18 degrees Fahrenheit, or 10 degrees C.

    So you can see this could have a big impact, and we're planning to both model and test the impact of variation in S classes on both the rate of rise and the time to IT throttling upon failure of liquid flow to the server.

    In terms of research proposal progress within ASHRAE, I also have some good news to report. The status of the research project has progressed, and it is now conditionally approved. I don't actually have a list of the conditions, but typically reaching this conditional approval stage allows the project to obtain final approval by working with the assigned ASHRAE research liaison rather than requiring full approval of the research committee. So we're optimistic that we can get the project put out to bid, obtain a principal investigator and start getting answers to the questions that we're asking in this proposal in the next 12 to 24 months.
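    For listeners who want to sanity-check the S-class numbers Tom just walked through, here is a minimal, purely illustrative sketch (not from any ASHRAE publication); it simply treats the S-class number as the IT inlet coolant temperature in degrees Celsius, as described above, and converts it to Fahrenheit. The helper name is made up for illustration.

```python
# Illustrative only: per the discussion above, the S-class number is the
# IT-equipment inlet coolant temperature in degrees Celsius.
def s_class_inlet_temp_f(s_class_number: float) -> float:
    """Convert an S-class number (inlet temperature, deg C) to deg F."""
    return s_class_number * 9.0 / 5.0 + 32.0

# Classes mentioned in this episode: S-20, S-25, S-30, S-40, S-50.
for s in (20, 25, 30, 40, 50):
    print(f"S-{s}: {s} C = {s_class_inlet_temp_f(s):.0f} F")
# S-20 -> 68 F and S-30 -> 86 F, the 18 F (10 C) spread Tom mentions.
```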

    Justin Seter:

    Great. Man, that hits on a whole bunch of the topics that we were wanting to hit on this call. So thank you for that overview, Tom. So Dave or Dustin, I'll let one of you guys take it. Talk to us a little bit about S classes and why this is really important.

    Dustin Demetriou:

    Yeah, sure Justin. So as Tom mentioned, we only recently had TC 9.9 actually vote out the addition of the S-20 and S-25 classes. And so maybe before diving into that detail, it's worth spending a minute on why we have the S classes and why we are adding these additional classes.

    So I mean, if you've followed what the TC has done over the years, you probably remember back in, I think it was around 2011, working with the national labs, we introduced what are known as the W classes, or the facility water system classes, right? And these were meant to be analogous to the air cooling classes, to provide the industry with a set of target temperatures that you could design around.

    And so, fast-forward the 15 or so years since those came out: with the need we have right now to scale liquid cooling to support artificial intelligence and all these graphics processing units, having just facility water system classes has really not been enough.

     And a lot of this has been driven by what has been the design point around liquid cooling. And so, historically, when vendors provided liquid cooling, they would also provide the coolant distribution unit, or CDU, along with the IT equipment. And so it made sense to specify a facility water temperature. Well, that paradigm is sort of changing as the coolant distribution unit really is becoming more of an infrastructure piece of equipment and less so a piece of IT equipment.

    And so, what the industry really needed was what are the temperatures that the IT equipment can actually accept or the inlet temperature to the IT equipment. And that was the reason for publishing the S classes. And those were only published, I think around June or July of last year. And here we are less than a year later or about a year later, already making some updates to those to add these lower temperature classes.

    So why do we need the lower temperatures? Well, this is really stemming from the fact that, in order to support the industry's needs and the growth of artificial intelligence, we're really seeing more and more pressure to design these higher and higher power graphics processing units, or GPUs. And the thermal challenge there is manyfold, but clearly power density is a big challenge. And as Tom mentioned, we've gone from hundred-watt processors to 750-watt GPUs to even higher power GPU devices.

    So the power density is one challenge. But the other thing that's quite interesting about these graphics processing units is they introduce a completely different paradigm from a chip packaging perspective. And so we're probably all familiar with the CPU where you have a big piece of silicon that's packaged together and you have a thermal interface material, and then a heat sink on top of that to remove the heat.

    Well, these GPUs are many different pieces of silicon, some of it logic silicon, like an application-specific integrated circuit. But you also have this high bandwidth memory, or HBM, that you're packaging in that same GPU package. And this combination of different types of silicon is really the driver for this thermal challenge that we have. You get these multi-die HBM stacks and multiple different things stacked on top of each other, all increasing the thermal resistance.

    And so, it really becomes difficult to provide cooling, because all of these devices are in thermal communication and they all have different heights that you're trying to accommodate with thermal interface materials. And really, we've gotten to the point where even some of the highest performance cold plates and heat exchanger devices that we could build today are not the driver of the thermal resistance. It's really the materials and this really complicated chip stack and packaging that are driving it.

    So that's a long way of saying that, because of all these things we're doing to drive the performance of these GPUs, it's really pushing the temperatures down in terms of what can be supported, because of the delta T you have due to all those various thermal resistances along the stack. And so that was really the driver we've seen for needing to add those lower classes. It's really to be able to future-proof ourselves in the industry around the unknowns that we potentially will have here over the next couple of generations, which may not be able to support those higher temperatures that we had been designing to in the past.

    And so, that gives you a little bit of a view of what's going on inside of there, but it's clear that the days of cooling these high power density systems and chips with 40 or 45 degree Celsius water or coolant may be behind us for some of that really high power density stuff.

    Justin Seter:

    Yeah, that's great perspective from the IT side on all the complexity of what has to be cooled by that cold plate, and the fact that the one kilowatt chip has already been discussed or designed or maybe publicly announced somewhere. So the density is going to continue to go up, and it makes sense for the water temperature to come down. Dave, do you want to add in on that?

    David Quirk:

    Yeah, just to tie together a couple of things that both Dustin and Tom mentioned, there are really four items behind the cooler temperatures. The first is the performance of the chip, to avoid the thermal throttling. The second is energy efficiency, because there is leakage power from the servers when you go up to higher temperatures.

    This was pretty well published for air-cooled servers back in the day, I think in one or more of the ASHRAE publications we put out there, but the same is true for liquid-cooled applications as well. I've seen some general rules of thumb of five to 7% of leakage power per 10 degree Celsius rise, and Dustin can probably cite a couple of sources that are out there. The third is the reliability margin, or warranty, of the hardware. You have an increasing failure rate per 10 degree C increase on normal operation.

    And then the fourth is just the headroom for the what-if scenarios. And the what-ifs go back to the research project. While a lot of that stuff is designed to handle a whole bunch of thermal cycles and even fairly big swings in temperature, it doesn't help to add more insult to injury there than needed. And we do it quite often when we have interruptions of utility power.

    Most data centers have open-transition electrical systems, so we have a couple of interruptions in our cooling systems, most notably chillers. And whatever that delta T is across that chiller is going to end up going down to the server level if you don't have some form of thermal energy storage and controls in the mix to stop that. And some of those cycles may be acceptable.

    So it all depends on where you're operating in that S class versus the hardware manufacturer's specs, how hard you want to push the boundary, and how much risk you want to take on in terms of back-to-back utility outages and stuff like that. So that's really the culmination of why there are safety factors on the cooling to the chips and why we're doing this research project. There are a lot of different competing priorities there that we're trying to balance as an industry, and they really matter.
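    To make the leakage-power rule of thumb above concrete, here is a minimal, purely illustrative sketch; the 5% to 7% per 10 degree C figure is the rough rule of thumb Dave cites, not a published ASHRAE value, and the rack size and temperature rise below are assumptions chosen only for the example.

```python
# Rough illustration of the rule of thumb cited above: roughly 5-7% additional
# server (leakage) power per 10 deg C rise in operating temperature.
# The baseline rack power and the temperature rise are illustrative assumptions.

def leakage_power_increase_kw(baseline_kw: float,
                              delta_t_c: float,
                              fraction_per_10c: float = 0.06) -> float:
    """Estimate added leakage power (kW) for a given temperature rise."""
    return baseline_kw * fraction_per_10c * (delta_t_c / 10.0)

rack_kw = 100.0   # assumed liquid-cooled rack IT load
rise_c = 10.0     # e.g., moving the design point up one 10 C step
extra_kw = leakage_power_increase_kw(rack_kw, rise_c)
print(f"~{extra_kw:.0f} kW of extra IT power on a {rack_kw:.0f} kW rack "
      f"for a {rise_c:.0f} C rise (at the ~6% midpoint of the rule of thumb)")
```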

    Justin Seter:

    Yeah, that's a great point. When you think about the different combinations of projects that could exist and who the end user may be, you may or may not know some of that information at the start of the project. So you have to design and begin construction, and the resiliency and redundancy that's built into the design of the system really ties into the specific IT that is eventually going to be in that room.

    And frankly, to me it feels like, in our industry, the communication pathway from an IT manufacturer back to an engineer of record designing a co-location facility, I'm not sure that pathway exists currently. And so it sounds like we're going to have to build it over the coming years if we're going to match the IT application with the true design of the building.

    David Quirk:

    Totally agree on that, Justin. That is the challenge. I think when we were dealing with just a homogeneous environment with a hyperscaler, you had the connection from the personnel working on the servers and the software all the way out to the operations personnel, and the facility personnel in between. So we didn't have as many barriers to the communication. And those barriers include legal barriers, not least of which prohibit the transfer of some of that information that really is important, that somebody doing the design of the facility, for example, needs to know.

    And when we get into this colo environment, it's introducing obstacles into some of that communication path. And so we've got a lot of unknowns now that we're having to add safety factor into the mix for, in order to make sure all the bases are covered. And that's for everybody involved in that chain.

    Justin Seter:

    Yeah, it's really, really tricky. And this might be a good time to plug a couple of the more recent ASHRAE TC 9.9 technical bulletins that have been published. Since we met last here on Episode 44, ASHRAE TC 9.9 published a technical alert, in March of 2025, on the role of CDUs for cold plate deployments. And going back probably six months before that, there was a resiliency technical alert. Both of those are available on the ASHRAE TC 9.9 website, and I highly recommend that anybody who's engaged in this space be familiar with both.

    We want to spend a couple of minutes here talking about that CDU technical alert, Dustin or Dave?

    Dustin Demetriou:

    Yeah, sure. I can just give a brief overview of that technical alert. But keying on what Dave just said around moving from a homogeneous environment to the environment we have today, where you may have more than one manufacturer's equipment within a site, this technical alert on the CDU's critical role was really meant to touch on that topic. The fact is that, unfortunately, back to the standardization we talked about a little earlier, standardization doesn't really exist for things like metrology or materials, or even heat transfer fluids in some cases.

    And this is not just between different manufacturers. There are examples where even within a given manufacturer you may have different requirements from a materials perspective, a fluid perspective, a flow rate, a pressure drop, you name it, anything you need from a design perspective. And so, really, the purpose of this technical alert was to have that conversation.

    If you're deploying these systems, you have to keep in mind that liquid cooling is not something where you can just design to the least common denominator. It's not like, oh, well, I'll design it to this system and then that system, and maybe it's just going to run a little hotter. This is really about reliability. Little things like chemical and fluid compatibility with different materials aren't just a case of maybe running a little worse; they could be catastrophic in terms of failure mechanisms and things like that.

    And so, that was really the point of that bulletin, and it goes on to talk about the value and benefit of having really small TCS loops, with CDUs that are designed and maintained for those specific conditions. It talks about ownership, it talks about key things like condensation prevention and temperature-pressure control, but it also starts to hit on some of the more critical things, like understanding the blast radius.

    So hey, you have a CDU that's providing coolant to many pieces of equipment. You really want to understand what happens from an availability perspective if something were to go wrong with that CDU, right? And how do we minimize that?

    I think we talked about this in the other podcast, and Tom alluded to it, but when you look at the resiliency of these liquid cooling systems, if you lose flow to even some of these lower or medium power processors, you're talking seconds of time before you will have adverse thermal effects and equipment overheating. And so this is not like the data centers of air cooling, where you have minutes of thermal capacity built into the system, right? You're talking about seconds.

    So minimizing that blast radius, being able to scale to support future density, et cetera. Those are just some of the topics that are talked about in that bulletin, but as Justin said, it's available on the TC website for free, so I really encourage people to take a look at that.
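    As a back-of-the-envelope illustration of why that margin is seconds rather than minutes, here is a minimal lumped-capacitance sketch; every number in it (chip power, trapped coolant volume, cold plate mass, allowable rise) is an assumption chosen for illustration, not data from the TC 9.9 bulletin or the proposed research project.

```python
# If coolant flow stops, the chip's heat goes into the thermal mass local to
# the cold plate: dT/dt = P / (m * cp). All values are illustrative assumptions.

chip_power_w = 750.0   # high-power GPU package, per the discussion
coolant_kg = 0.10      # ~0.1 L of water-based coolant trapped at the cold plate
cp_coolant = 4186.0    # J/(kg*K), water
plate_kg = 0.50        # copper cold plate mass
cp_copper = 385.0      # J/(kg*K)

thermal_mass = coolant_kg * cp_coolant + plate_kg * cp_copper   # J/K
rate_of_rise = chip_power_w / thermal_mass                      # K per second
allowable_rise_k = 15.0                                         # assumed margin to throttling

print(f"Rate of rise after flow loss: ~{rate_of_rise:.1f} K/s")
print(f"Time to use a {allowable_rise_k:.0f} K margin: "
      f"~{allowable_rise_k / rate_of_rise:.0f} s")
```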

    David Quirk:

    So I'll just add to that, Dustin. There's a really important point that has fallen out of that tech bulletin that's worth noting for the listeners. In the air-cooled world, we used to mix and match stuff in a given row or a given HAC, hot aisle containment system, all day long, especially when we were operating in enterprise or retail colo types of environments. We mixed and matched server vendors, rack vendors, just about everything, all the way up and down the row.

    And really, what this CDU tech bulletin and what the industry are saying now is you can't do that anymore. And so, hello everybody, that's a really, really big deal. You have to think about the utilization of the data center white space and how big you make that blast radius, as Dustin said, for the CDU's technology cooling system loop.

    So I think we could see, in my view, a propagation of many smaller loops, or dividing the rows in half, and a bunch of other strategies that may emerge over time, because we still have to figure out a way of mixing and matching stuff within these data halls in certain applications. So that's just one of those really big, tricky things that's come out since we did the last podcast and is hitting everybody broadside.

    Justin Seter:

    I think that's really important as you have a mixed IT environment and understand what's in there. You may have half of a row, for example, that's sort of an AI inference application, where the load is fairly steady, responding to requests and that kind of thing. You may have something in the next row, or in the same row, that's doing AI training.

    And so you have big load steps that are associated with that. It may sit at 10% idle power and then all of a sudden go up to a hundred percent. We think of temperature spikes related to things like utility outages, and we still have to consider all that, but we also have to consider the massive load steps that are involved with the AI application. It's just different.

    And so when you look at a mechanical system responding to that type of load step, it's a significant amount of thermal energy that you have to manage immediately.
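    To put rough numbers on the kind of step Justin describes, here is a minimal illustrative sketch of the extra heat a TCS loop suddenly has to carry and the coolant flow that implies; the rack size and design delta T are assumptions for the example, not figures from the podcast.

```python
# Illustrative sizing arithmetic for an AI training rack stepping from 10% idle
# to 100% load, and the coolant flow needed to carry the added heat at a fixed
# supply-to-return delta T: m_dot = Q / (cp * dT). Values are assumptions.

rack_full_kw = 120.0                      # assumed rack load at 100%
step_kw = rack_full_kw * (1.0 - 0.10)     # heat that appears almost instantly
cp_water_kj = 4.186                       # kJ/(kg*K), water-based coolant
delta_t_c = 10.0                          # assumed TCS design temperature rise

flow_kg_s = step_kw / (cp_water_kj * delta_t_c)
flow_l_min = flow_kg_s * 60.0             # ~1 kg per liter for water-based coolant

print(f"Load step: ~{step_kw:.0f} kW of additional heat, essentially instantly")
print(f"Added coolant flow at dT = {delta_t_c:.0f} C: "
      f"~{flow_kg_s:.1f} kg/s (~{flow_l_min:.0f} L/min)")
```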

    David Quirk:

    Yeah. And what are the acceptable load steps on the software side? Bueller? Bueller? Anybody know? There's no published industry guidance on this, and so there's a lot of talk about what everybody is proposing to do. But has anybody asked the lonely mechanical and electrical engineers out there designing the infrastructure? Mechanical systems, last time I checked, don't like responding at the speed of light like electrical systems. And so there are a great many challenges emerging that we're seeing in the real world nowadays because of that.

    And subsequently we're having to go back and tell everybody on the software side, hey, you can't do a 90% swing on the load like that. You're going to have to do something different with your software because we're up against physics over here.

    Justin Seter:

    It was certainly one thing when that was measured in single-digit kilowatts per rack versus deep into the triple digits of kilowatts per rack. All right, great. Well, great updates there.

    Dustin, do you want to plug the encyclopedia here real quick? I believe those S-20 and S-25 classes are already published there. Do you want to just tell everybody where to go for that?

    Dustin Demetriou:

    Yeah, sure thing. You mentioned this on the last podcast, but for those that didn't listen or haven't followed, TC 9.9, over its history, has been great at publishing these datacom books. And over the last year or so, the committee has transformed all of that into an online datacom encyclopedia at datacom.ashrae.org. Again, this is everything the TC has ever published, under one low annual cost subscription model.

    And within that encyclopedia we put out brand-new guidance around liquid cooling and, as we just mentioned, even within the last month or so updated that liquid cooling publication with those S classes. So if you go there, you can go to, I believe it's chapter six, TCS Cooling Systems, within that liquid cooling publication, and you get the latest info there on the S classes, as well as some examples and use cases around when you might use a W class versus an S class.

    So I think key to this discussion is that both of those classes still exist and they are both very useful: the W classes when you're looking at the FWS system, the S classes when you're looking at the TCS system. They both exist and have their purposes for different use cases.

    One of the things we talk about in that publication is that these are not meant to be forced approach temperatures for devices. I think this just opens up the aperture for our design community to really do the proper design and what makes sense for the customer from a cost perspective and from an operations perspective. But all of that is published, along with those examples, in the datacom encyclopedia.

    Justin Seter:

    Great, and it's like $2 a month, so it's way cheaper than Netflix. So definitely go subscribe and then you'll always have the latest up-to-date online versions.

    So, speaking a little bit more about TCS, and kind of where Dave started us in the intro here, maybe we'll talk a little bit about some of the TCS challenges that we're seeing in the industry right now. We talked about material science some, and a lot of TCS systems are being designed with stainless steel; there are other examples with copper or some type of plastic piping. And we talked about process industry practices and what the gaps are for data center industry practices. Maybe we can hit on that with a little more detail here. So Dave, do you want to take the first pass at that one?

    David Quirk:

    Yeah, sure. My short version of the recent lessons learned is that the industry just is not ready: not ready, not willing, not able. And that's because we're trying to do these process piping applications like we've done all the other piping in data centers. We're treating them like the carbon steel chilled water plants out in the yard, and they're really not that.

    You made the great analogy the other day that I absolutely love and that is, would you drink milk out of a TCS pipe when it's all said and done? And if the answer's no, then it's not ready. So you've really got to think of it with that mindset. And right now our industry's just not walking into it with that mindset. So I absolutely love the reference there for the TCS piping and the readiness.

    So I think it's a matter of the industry not only adopting the proven standards and processes that are already out there for these other industrial applications, but adapting them. There's no way that we're going to slow down data center construction to the rate of a semiconductor plant or a pharmaceutical plant, or even some of the food and beverage plants that get designed and constructed out there.

    So how do we get that level of rigor and quality without compromising on schedule and cost? That's really the $64 million question when it comes to the TCS systems.

    So I think right now what we're seeing across the industry is everybody trying to brute-force it the way we did everything before, and that's not working. And we know that going to the complete other end of the spectrum, or the pendulum, and adopting everything that the industrial world does is not practical in terms of cost and schedule. And so, we have to find the right balancing act in between.

    I will note that it's not far off from what we're already seeing, Justin, in the commissioning world. We're already seeing a divergence, with a whole host of clients redefining their own process on the commissioning front. And they're doing it because, as we've seen and talked about before, when you look at the scale of the load banks that you'd have to put on a liquid-cooled application, it's unimaginable. It is just an unimaginable amount of time and money, temporary piping, et cetera, to make all that happen, and you just blow the schedules and budgets on these projects.

    So I think ultimately what we're going to end up with at the end of the day is that there's not going to be a single industry standard or guideline on this. I think we're going to see a whole slew of company-specific guidelines come out. But ultimately, what groups like ASHRAE and OCP need to do is put out some general guidance to help the industry get there and answer their own application-specific scenarios, be it a colo environment, a hyperscale environment, et cetera. I think everybody's going to develop their own path.

    Justin Seter:

    Yeah, it's definitely a wide range of solutions for complicated problems. And you mentioned OCP. I know that Open Compute has been working on several work streams related to this, one being specifically related to the piping pre-commissioning preparation of TCS loop systems. There's also a document in pre-production under the ASHRAE committee, something similar, that may get voted on early this summer.

    So be on the lookout industry for more publications to help guide on that. But again, every application is going to be very, very different just because of the variability in the projects and schedules and budgets and all of that that we see right now.

    David Quirk:

    So I think we made a plug on the last one, but there is an OCP white paper out there already. It's called Modular Technology Cooling Systems for Cloud Scale Design and Delivery of Liquid to the Rack Distribution System. There's a lot of great guidance in that document; I think it's a 40-page document that's out there, and there's more in the works, as you mentioned.

    So I think all of these publications are going to inform the industry that, hey, you've got to do something different. And I think they'll serve as a good wake-up call: take these and apply them as you see fit within your environment. Again, I think we're going to see many different flavors of that emerge.

    Justin Seter:

    All right, well, so we've covered research, recent publications by 9.9, the encyclopedia, and what we're seeing currently in TCS. So I guess as we move into the next three to six months, before we talk to everyone again, what do you guys see as the top two items you want people to be on the lookout for this summer, and what comes next?

    David Quirk:

    All right, I'll go first again here. So one would be, look for more publications, important publications, coming out of both OCP and ASHRAE TC 9.9. I think a lot of these industry lessons learned will find their way, in one form or another, into these various documents. You have a lot of great industry representation through both organizations, so we become better as an industry as we get these publications out there. So I think that's one thing I see coming.

    The other is a lot of repeated wake-up calls. Unfortunately, we still have an industry that has not acknowledged that this has been a really radical change and that we have to respond in kind with the way we execute these projects. And that's not happening yet, from our view. We operate on both the design and the commissioning side of things, and so we get a wide cross-section of what's going on out there. And it's just moving really, really fast.

    Dustin can probably speak to it here, but the changes at the chip level and what's coming out on that end are going so quickly, it's got everybody's head spinning. So cue the one-megawatt rack, please. All right, Dustin.

    Dustin Demetriou:

    Yeah, you hit it right on the head, Dave. But I guess a couple of things come to mind. One thing, and I think this will be important, right? We've talked a lot about, as Dave said, the one-megawatt rack and what that's driving in terms of S classes, the S-20 and S-25.

    But I think the other thing to consider is, there may be applications still that are not those one-megawatt racks that will drive towards liquid cooling, and how do we not over-cool those things too? And so I think we'll be seeing some guidance on how you look at maybe multi-loop systems. So we shouldn't be thinking you're just going to have one FWS system that's going to support everything. I think that's maybe not the right approach from an energy efficiency perspective.

    So what are those use cases and different systems where maybe you have an S-20 system and an S-40 system to drive some energy efficiency?

    So I think the industry needs some guidance there. I guess the other thing, and we'll see what comes out of some of this, but I know it's going to be a big topic of conversation: we're headed into the June summer meeting here, and I know the IT subcommittee will be meeting, and one of the major topics is around materials and fluids, and how we get some guidance around things like erosion velocities, material chemistry requirements, and best practices. And while some of that exists today within the encyclopedia, I think there's quite a bit of work to do to make sure people really understand what all these things mean. We're not all chemists; we don't all necessarily understand all these material and fluid properties.

    So how do we get this to a point where people could really start to understand this stuff and really understand what it means? I think, as Dave said, we're treating a lot of these things like they're our typical FWS systems, carbon steel systems, and the TCS is just a completely different environment and we have to get out of that mindset that we can just take all the things we've done and learned over years in FWS systems and apply them to TCS and think things are going to be okay. And so anyway, I think lots coming there from the IT subcommittee on some of those aspects.

    Justin Seter:

    Yeah, great. Tom?

    Tom Davidson:

    Sure. The only thing I'll mention, Justin, is that as we talk about all the technical issues with liquid cooling, let's not forget about the energy. And the reason I mention that is, whereas I think five months ago I said that the term liquid cooling did not exist in 90.4, which is the energy standard for data centers, it does exist now, and calculations are coming through. There is a lot of pressure to reduce energy consumption. There's an Addendum D to 90.4-2022 that, as of the time we're recording, is out for public review, and it will probably have an additional round. So keep your eyes on that.

    Unfortunately, their meetings conflict with the IT subcommittee meetings, so you may have to pick and choose which one you go with, but just something to keep in touch with, because there is pressure to significantly increase the energy efficiency of data centers. And that could impact your entire system of how you would design either an air-cooled or a liquid-cooled system.

    Dustin Demetriou:

    And hey Justin, Tom just reminded me, I'll give one more plug. You only asked for two, but I'll give you a third, right?

    So you know SSPC 127, Method of Test for Data Center Air Conditioning Equipment, has been hard at work. They spun up a liquid cooling subcommittee about a year ago, and that committee has been hard at work putting together a method of test, initially for coolant distribution units. That was out for a first public review, those comments have come back, and the committee's working towards trying to get that out for a second public review here shortly and hopefully getting an addendum published on that.

    I think to the standardization point, we don't even have methods of tests and rating points for some of this liquid cooling equipment. So I think that's a really important initiative within ASHRAE also.

    Justin Seter:

    Great. Thanks for bringing that up, Dustin. And I'll just make the formal plug here for the 2025 Annual Conference: June 21st in sunny Phoenix, Arizona, is when all of this goodness kicks off. So it might be short notice when this podcast gets published, but join us if you can, and we could talk about this for many more hours than we have here today. So we'll close up this session of the ASHRAE Journal Podcast and look forward to the next.

    David Quirk:

    Hey, Dustin.

    Justin Seter:

    Go ahead Dave. Sorry, go ahead.

    David Quirk:

    So just on our milk theme, I think it's really important to add to that here for a moment, because I think it drives it home for everybody. When we first started ASHRAE Standard 90.4, the main reason was we were trying to bifurcate this process application of a data center from normal commercial buildings.

    Fast-forward to today, and we can finally make the claim that this really is a process application now. And so, the tie-in with the milk theme is that it's analogous to baking a cake. If the recipe calls for baking the cake at 350 degrees, we're not going to try to bake the cake at 250 just because of an energy efficiency goal. The process doesn't work, the cake doesn't rise and nobody's happy. So you guys see my tie-in there with the milk, right, and the cake?

    Okay. So the point here now with data centers is that we've got to stay really focused on making the process work. And that's the hard part about all this right now. The industry is moving so fast, we're turning up so many more megawatts of all this, and we want to do it energy efficiently, but we have to remember that if we don't get the cake to rise, none of that's going to matter. And so we're at this really critical inflection point in the industry of making the process work.

    And right now, we as an industry, we don't have that down and we kind of got to get focused on that first and then figure out how to optimize baking the cake.

    Justin Seter:

    All right, I think everybody's going to need to go get some cake now. Thanks for that, Dave.

    David Quirk:

    Thank you.

    Justin Seter:

    Well gentlemen, pleasure as always. Until next time, thank you for your contributions today and we'll go ahead and sign off. Thank you.

    ASHRAE Journal:

    The ASHRAE Journal Podcast team is editor, Drew Champlin; managing editor, Kelly Barraza; producer and assistant editor, Allison Hambrick; assistant editor, Mary Sims; associate editor, Tani Palefski; and technical editor, Rebecca Matyasovski.

    Copyright ASHRAE. The views expressed in this podcast are those of individuals only, and not of ASHRAE, its sponsors or advertisers. Please refer to ASHRAE.org/podcast for the full disclaimer.
