Maintaining Datacom Rack Inlet Air Temperatures With Water Cooled Heat Exchanger

Author(s):  
Roger Schmidt ◽  
Richard C. Chu ◽  
Mike Ellsworth ◽  
Madhu Iyengar ◽  
Don Porter ◽  
...  

The heat dissipated by electronic equipment continues to increase at an alarming rate. This has occurred for products covering a wide range of applications. Manufacturers of this equipment require that it be maintained within an environmental envelope in order to guarantee proper operation. Achieving these environmental conditions is becoming increasingly difficult given the increases in rack heat loads and the desire of customers to cluster racks in a small region for increased performance. With the increased heat load of the racks and the correspondingly increased air flow rate, the chilled air supplied either through data center raised-floor perforated tiles or through diffusers for non-raised floors is not sufficient to match the air flow required by the datacom racks. In this case some of the hot air exhausting from the rear of a rack can return to the front of the rack and be ingested into the air intake, thereby reducing the reliability of the electronic equipment. This paper describes a method to reduce the effect of this hot air recirculation with a water-cooled heat exchanger attached to the rear door of the rack. The heat exchanger removes a large portion of the heat from the rack and significantly lowers the temperature of the air exhausting from the rear of the rack. The paper describes the hardware and presents test results demonstrating both effects. Finally, the effectiveness of the solution is shown by modeling this water-cooled solution in a data center application.

Author(s):  
Husam A. Alissa ◽  
Kourosh Nemati ◽  
Bahgat Sammakia ◽  
Alfonso Ortega ◽  
David King ◽  
...  

The perpetual increase of data processing has led to an ever-increasing need for power and, in turn, to greater cooling challenges. High density (HD) IT loads have necessitated more aggressive and direct approaches to cooling than the legacy approach, such as row-based cooling. In-row cooler systems are placed between the racks, aligned with the row orientation; they deliver cool air to the IT equipment more directly and effectively. Following a horizontal airflow pattern and typically occupying 50% of a rack’s width, in-row cooling can be the main source of cooling in the data center or can work jointly with perimeter cooling. Another important development is the use of containment systems, since they reduce mixing of hot and cold air in the facility. In-row technology and containment can be combined to form a very effective cooling solution for HD data centers. The current study numerically investigates the behavior of in-row coolers in cold aisle containment (CAC) versus a perimeter cooling scheme. We also address the steady-state performance of both systems, including manufacturer’s specifications such as heat exchanger performance and cooling coil capacity. A brief failure scenario is then run, and the ride-through time in the case of a row-based cooling system failure is compared to that of raised-floor perimeter cooling with containment. Non-raised-floor cooling schemes reduce the air volumetric storage of the whole facility (in this small data center cell, by about 20%), and the varying thermal inertia of typical in-row and perimeter cooling units is of decisive importance. The CFD model is validated using a new data center laboratory at Binghamton University with perimeter cooling. This data center consists of one main Liebert cooling unit, 46 perforated tiles with 22% open area, and 40 racks distributed over three main cold aisles. A computational slice is taken of the data center to generalize the results. Cold aisle C consists of 16 racks and 18 perforated tiles, with containment installed. In-row coolers are then added to the CFD model. A fixed IT load is maintained throughout the simulation, and steady-state comparisons are made between the legacy and row-based cooling schemes. An empirically obtained flow curve method is used to capture the flow-pressure correlation for flow devices. Performance scenarios were parametrically analyzed for the following cases: (a) perimeter cooling in CAC, (b) in-row cooling in CAC. Results showed that in-row coolers increased the efficiency of supply air flow utilization, since floor leakage was eliminated and higher pressure buildup in the CAC was observed. This reduced rack recirculation when compared to the perimeter-cooled case. However, the heat exchanger size limited the ability of the in-row units to maintain the controlled set point at increased air flow conditions. For the pump failure scenario, experimental data provided by Emerson labs were used to capture the thermal inertia effect of the cooling coils for the in-row and perimeter units; the perimeter-cooled system proved to have the longer ride-through time.
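The "empirically obtained flow curve method" mentioned in the abstract maps a pressure differential to a flow rate for each flow device. One simple way to capture such a correlation, sketched below under the assumption of a pure quadratic model ΔP = k·Q² with hypothetical sample data, is a least-squares fit of the single coefficient k:

```python
# Sketch: fit a quadratic flow curve dP = k * Q**2 to empirical
# (flow, pressure-drop) points, one simple way to capture the
# flow-pressure correlation described in the abstract. The sample
# data and the pure-quadratic model form are assumptions.

def fit_flow_coefficient(flows, dps):
    """Least-squares fit of k in dP = k * Q**2 (no intercept term)."""
    num = sum(dp * q * q for q, dp in zip(flows, dps))
    den = sum((q * q) ** 2 for q in flows)
    return num / den

flows = [0.5, 1.0, 1.5, 2.0]   # m^3/s, hypothetical measurements
dps = [2.5, 10.0, 22.5, 40.0]  # Pa, generated with k = 10
k = fit_flow_coefficient(flows, dps)
print(round(k, 3))  # 10.0
```

Once k is known, the curve can be evaluated in either direction, e.g. to supply a pressure-dependent flow boundary condition to a CFD model.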


Author(s):  
Tianyi Gao ◽  
James Geer ◽  
Russell Tipton ◽  
Bruce Murray ◽  
Bahgat G. Sammakia ◽  
...  

The heat dissipated by high performance IT equipment such as servers and switches in data centers is increasing rapidly, which makes thermal management even more challenging. IT equipment is typically designed to operate at a rack inlet air temperature between 10 °C and 35 °C. The newest published environmental standards for operating IT equipment, proposed by ASHRAE, specify a long-term recommended dry-bulb IT air inlet temperature range of 18 °C to 27 °C. For the short-term specification, the largest allowable inlet temperature range is 5 °C to 45 °C. Failure to maintain these specifications can significantly degrade the performance and reliability of these electronic devices. Thus, understanding the cooling system is of paramount importance for the design and operation of data centers. In this paper, a hybrid cooling system is numerically modeled and investigated. The numerical modeling is conducted using a commercial computational fluid dynamics (CFD) code. The hybrid cooling strategy mounts in-row cooling units between the server racks to assist the raised-floor air cooling. The effects of several input variables, including rack heat load and heat density, rack air flow rate, in-row cooling unit operating fluid flow rate and temperature, in-row coil effectiveness, centralized cooling unit supply air flow rate, non-uniformity in rack heat load, and raised-floor height, are studied parametrically. Their detailed effects on the rack inlet air temperatures and the in-row cooler performance are presented. The modeling results and corresponding analyses are used to develop general installation and operation guidance for the in-row cooler strategy in a data center.
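The ASHRAE envelope cited above can be expressed as a small classification check. The sketch below is illustrative only (the function name and constant names are mine); it encodes the recommended 18 °C to 27 °C and widest allowable 5 °C to 45 °C dry-bulb ranges quoted in the abstract:

```python
# Sketch: classify a rack inlet dry-bulb temperature against the
# ASHRAE ranges quoted in the abstract. Function and constant names
# are illustrative assumptions, not part of the standard.

RECOMMENDED = (18.0, 27.0)  # long-term recommended dry-bulb range, deg C
ALLOWABLE = (5.0, 45.0)     # widest short-term allowable range, deg C

def classify_inlet_temp(t_c: float) -> str:
    """Return which envelope a rack inlet temperature falls into."""
    if RECOMMENDED[0] <= t_c <= RECOMMENDED[1]:
        return "recommended"
    if ALLOWABLE[0] <= t_c <= ALLOWABLE[1]:
        return "allowable"
    return "out of envelope"

print(classify_inlet_temp(22.0))  # recommended
print(classify_inlet_temp(35.0))  # allowable
print(classify_inlet_temp(50.0))  # out of envelope
```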


Author(s):  
Septian Sony Hermawan ◽  
RD Rohmat Saedudin

CV Media Smart is a company involved in the procurement of IT tools for schools and offices. With its wide coverage of schools and companies, CV Media Smart wants to add more business processes; a data center is therefore needed to support both the existing and the newly added business processes. The focus of this research is on the cooling system and air flow. The NDLC (Network Development Life Cycle) is used as the research method because it is built around the development process, such as the design of business processes and infrastructure. The standard used in this research is TIA-942. The result of this research is a data center design that meets the TIA-942 Tier 1 standard.


Energies ◽  
2020 ◽  
Vol 13 (2) ◽  
pp. 393 ◽  
Author(s):  
Heran Jing ◽  
Zhenhua Quan ◽  
Yaohua Zhao ◽  
Lincheng Wang ◽  
Ruyang Ren ◽  
...  

Given the temperature regulations for, and high energy consumption of, air conditioning (AC) systems in data centers (DCs), natural cold energy has become the focus of energy saving in data centers in winter and the transition seasons. A new type of air-water heat exchanger (AWHE) for the indoor side of DCs was designed to use natural cold energy in order to reduce the power consumption of AC. The AWHE applies micro-heat pipe arrays (MHPAs) with serrated fins on its surface to enhance heat transfer. The performance of the MHPA-AWHE was investigated for different inlet water temperatures and different water and air flow rates. The results showed that the maximum effectiveness of the heat exchanger, determined using the effectiveness-number of transfer units (ε-NTU) method, was 81.4%. When the maximum air flow rate was 3000 m³/h and the water inlet temperature was 5 °C, the maximum heat transfer rate was 9.29 kW. The maximum pressure drops of the air side and water side were 339.8 Pa and 8.86 kPa, respectively. The comprehensive evaluation index j/f^(1/2) of the MHPA-AWHE increased by 10.8% compared to a plate-fin heat exchanger with louvered fins. The energy-saving characteristics of an example DC in Beijing were analyzed: when the air flow rate was 2500 m³/h and the number of MHPA-AWHE modules was five, the payback period of the MHPA-AWHE system reached its minimum of 2.3 years, the shortest and most economical configuration recorded. The maximum comprehensive energy efficiency ratio (EER) of the system after the retrofit was 21.8, the electric power was reduced by 28.3% compared to the system before the retrofit, and a control strategy was developed. The comprehensive performance provides a reference for MHPA-AWHE application in data centers.
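The ε-NTU method named in the abstract relates a heat exchanger's effectiveness to its number of transfer units (NTU) and capacity-rate ratio. As a minimal sketch, assuming a counter-flow configuration (the abstract does not state the flow arrangement, so this relation is an illustrative stand-in):

```python
# Sketch: eps-NTU effectiveness relation and the resulting heat duty.
# The counter-flow formula below is an assumption for illustration;
# the paper's actual exchanger geometry may use a different relation.
import math

def effectiveness_counterflow(ntu: float, cr: float) -> float:
    """eps-NTU relation for a counter-flow exchanger, 0 <= cr <= 1."""
    if cr == 1.0:
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

def heat_duty(eps: float, c_min: float, t_hot_in: float, t_cold_in: float) -> float:
    """q = eps * C_min * (T_hot,in - T_cold,in), in watts."""
    return eps * c_min * (t_hot_in - t_cold_in)

eps = effectiveness_counterflow(ntu=2.0, cr=0.0)  # cr=0: eps = 1 - exp(-NTU)
print(round(eps, 3))  # 0.865
```

With measured inlet temperatures and flow rates, the same relations run in reverse give the effectiveness from a measured heat duty, which is how figures like the 81.4% above are typically reported.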


2015 ◽  
Vol 137 (4) ◽  
Author(s):  
Vaibhav K. Arghode ◽  
Yogendra Joshi

Presently, air cooling is the most common method of thermal management in data centers. In a data center, multiple servers are housed in a rack, and the racks are arranged in rows to allow cold air entry from the front (cold aisle) and hot air exit from the back (hot aisle), in what is referred to as the hot-aisle-cold-aisle (HACA) arrangement. If the racks are kept in an open room space, the differential pressure between the front and back of the rack is zero. However, this may not be true in some scenarios, such as cold aisle containment, where the cold aisle is physically separated from the hot data center room space to minimize cold and hot air mixing. For an under-provisioned case (total supplied tile air flow rate < total rack air flow rate), the pressure in the cold aisle (front of the rack) will be lower than in the data center room space (back of the rack), and the rack air flow rate will be lower than without the containment. In this paper, we present a methodology to measure the sensitivity of the rack air flow rate to the differential pressure across the rack. Here, we use perforated covers at the back of the racks, which results in higher back pressure (and lower rack air flow rate), and the corresponding sensitivity of rack air flow rate to the differential pressure is obtained. The influence of variation and nonuniformity in the server fan speed is investigated, and it is observed that, by applying the fan laws, one can obtain results for different average fan speeds with reasonable accuracy. The measured sensitivity can be used to determine the rack air flow rate under variations in cold aisle pressure, which can then be used as a boundary condition in computational fluid dynamics (CFD)/rapid models for data center air flow modeling. The measured sensitivity can also be used to determine the change in rack air flow rate with the use of different types of front/back perforated doors on the rack. Here, the rack air flow rate is measured using an array of thermal anemometers, pressure is measured using a micromanometer, and the fan speed is measured using an optical tachometer.
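The fan-law rescaling the abstract relies on follows from the first fan affinity law, under which volumetric flow scales linearly with fan speed. A minimal sketch (function name and the numbers are illustrative, not from the paper):

```python
# Sketch: first fan affinity law, Q2/Q1 = N2/N1, used to rescale a
# measured rack air flow rate to a different average server fan speed.
# Names and values are illustrative assumptions.

def scale_flow_with_fan_speed(q1_cfm: float, n1_rpm: float, n2_rpm: float) -> float:
    """Volumetric flow at speed n2, given flow q1 measured at speed n1."""
    return q1_cfm * (n2_rpm / n1_rpm)

# e.g. a rack measured at 1200 CFM with fans averaging 6000 RPM,
# projected to 9000 RPM:
print(scale_flow_with_fan_speed(1200.0, 6000.0, 9000.0))  # 1800.0
```

The second and third affinity laws (pressure with speed squared, power with speed cubed) extend the same idea when the pressure-flow operating point also shifts.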


2017 ◽  
Vol 139 (1) ◽  
Author(s):  
Vaibhav K. Arghode ◽  
Taegyu Kang ◽  
Yogendra Joshi ◽  
Wally Phelps ◽  
Murray Michaels

In a raised-floor data center, cold air from a pressurized subfloor plenum reaches the data center room space through perforated floor tiles. Presently, the commercial "Flow Hood" tool is used to measure the tile air flow rate. Here, we discuss the operating principle and the shortcomings of the commercial tool and introduce two other tile air flow rate measurement tools. The first tool uses an array of thermal anemometers (named the "Anemometric Tool"), and the second uses the principle of temperature rise across a known heat load to measure the tile air flow rate (named the "Calorimetric Tool"). The performance of the tools is discussed for different types of tiles over a wide range of tile air flow rates. It is found that the proposed tools yield lower uncertainty and work better for high-porosity tiles than the commercial tool.
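The calorimetric principle behind the second tool is the energy balance q = ṁ·cp·ΔT: a known heat load and a measured air temperature rise imply a mass flow, and hence a volumetric flow. A minimal sketch, where the air property constants and all names are assumptions for illustration:

```python
# Sketch: infer tile air flow rate from the temperature rise across a
# known heat load, q = m_dot * cp * dT (the calorimetric principle
# named in the abstract). Property constants are nominal assumptions.

CP_AIR = 1005.0  # J/(kg*K), dry air near room temperature
RHO_AIR = 1.2    # kg/m^3, approximate air density

def tile_flow_m3s(heat_load_w: float, delta_t_k: float) -> float:
    """Volumetric flow rate implied by a temperature rise over a heater."""
    m_dot = heat_load_w / (CP_AIR * delta_t_k)  # mass flow, kg/s
    return m_dot / RHO_AIR                      # volumetric flow, m^3/s

# e.g. a 2412 W load producing a 2 K rise implies about 1 m^3/s:
print(round(tile_flow_m3s(heat_load_w=2412.0, delta_t_k=2.0), 3))  # 1.0
```

The measurement uncertainty then comes mainly from the uncertainty in ΔT and in how completely the heat load is carried by the tile air stream.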


Author(s):  
Roger Schmidt ◽  
Madhusudan Iyengar

The heat dissipated by large servers and switching equipment is reaching levels that make it very difficult to cool these systems in data centers or telecommunications rooms. Some of the highest-powered systems dissipate upwards of 4000 W/ft² (43,000 W/m²) based on the equipment footprint. When systems dissipate this amount of heat and are then clustered together within a data center, significant cooling challenges can result. This paper describes the thermal profile of three data center layouts (two are of the same data center at different points in time with different layouts). Detailed measurements of all three were taken: electronic equipment power usage; perforated floor tile airflow; cable cutout airflow; computer room air conditioning (CRAC) airflow, temperatures, and power usage; and electronic equipment inlet air temperatures. Although detailed measurements were recorded, this paper focuses on the macro-level results for the data center to see whether patterns emerge that might be helpful for future guidelines on data center layout for optimized cooling. Specifically, areas of the data center where racks have similar inlet air temperatures are examined relative to the rack and CRAC unit layout.


Author(s):  
Bharathkrishnan Muralidharan ◽  
Saurabh K. Shrivastava ◽  
Mahmoud Ibrahim ◽  
Sami A. Alkharabsheh ◽  
Bahgat G. Sammakia

The use of air containment systems has been a growing trend in the data center industry and is an important energy-saving strategy for data center optimization. Cold Aisle Containment (CAC) is one of the most effective passive cooling solutions for high-density heat load applications. CAC provides a physical separation between the cold air and the hot exhaust air by enclosing the cold aisle, preventing hot air recirculation and cold air bypass. This separation provides uniform inlet air temperatures to the servers, which can further contribute to overall data center efficiency. This paper includes the thermal test data for a data center lab with and without a CAC setup. The paper quantifies the thermal impact of implementing a CAC system over an open Hot Aisle/Cold Aisle (HA/CA) arrangement for three different cabinet heat load conditions at two different CRAC (Computer Room Air Conditioner) return air set point conditions, and studies the advantages of CAC over a standard HA/CA arrangement. A case study is presented showing a cooling energy savings of 22% with the use of a CAC system over a standard HA/CA arrangement.


Author(s):  
Roger Schmidt ◽  
Madhusudan Iyengar ◽  
Joe Caricari

With the ever-increasing heat dissipated by IT equipment housed in data centers, it is becoming more important to project the changes that can occur in the data center as newer, higher-powered hardware is installed. The computational fluid dynamics (CFD) software that is available has improved over the years, and some CFD software specific to data center thermal analysis has been developed. This has improved the timeliness of quick analyses of the effects of introducing new hardware into the data center. It is critically important, however, that this software give the user a good account of the effects of adding this new hardware, and it is the purpose of this paper to examine a large cluster installation and compare the CFD analysis with environmental measurements obtained from the same site. This paper shows measurements and CFD analysis of racks as high powered as 27 kW, clustered such that heat fluxes in some regions of the data center exceeded 700 W/ft² (7535 W/m²). The paper describes the thermal profile of a high performance computing cluster located in an IBM data center and a comparison of that cluster modeled with CFD software. The high performance Advanced Simulation and Computing (ASC) cluster, developed and manufactured by IBM, is code named ASC Purple. It is the world's 3rd fastest supercomputer [1], operating at a peak performance of 77.8 TFlop/s. ASC Purple, which employs IBM pSeries p575, Model 9118, contains more than 12,000 processors, 50 terabytes of memory, and 2 petabytes of globally accessible disk space. The cluster was first tested in the IBM development lab in Poughkeepsie, NY and then shipped to Lawrence Livermore National Labs in Livermore, California, where it was installed to support our national security mission.
Detailed measurements were taken in both data centers of electronic equipment power usage, perforated floor tile airflow, cable cutout airflow, computer room air conditioning (CRAC) airflow, and electronic equipment inlet air temperatures, and were reported in Schmidt [2]; only the IBM Poughkeepsie results are reported here, along with a comparison to CFD modeling results. In some areas of the Poughkeepsie data center there were regions that exceeded the equipment inlet air temperature specifications by a significant amount. These areas are highlighted, and reasons are given for why they failed to meet the criteria. The modeling results by region showed trends that compared somewhat favorably with measurements, but some rack thermal profiles deviated quite significantly.

