Data Center
Educational Article

The Role of CFDs in Containment

An interview with Gordon Johnson, a certified data center design professional, Data Center Energy Practitioner (DCEP), CFD and electrical engineer, regarding the use of CFDs and containment.

Data center airflow management engineers have used Computational Fluid Dynamics (CFD) programs for years to determine the complicated movement of airflow in data centers. CFD models pinpoint areas where airflow can be improved in order to provide a consistent cooling solution and energy savings.

Gordon, what is the principal way CFDs are used with regard to containment?

We use CFD’s to determine two basic data sets. The first is the baseline, or the current airflow pattern. This initial CFD model shows supply intake temperatures to each cabinet. This model also determines the effectiveness of each AC unit as it relates to airflow volume, return air temperature, delta T, and supply air temperature.

The second data set comes from the proposed design, in which the CFD engineer uses the information from the base model to apply airflow management best practices that separate supply from return airflow. Typically, several models are created in order to adjust airflow volume, set point temperatures, and individual aisle supply volume.

Gordon, are there situations in which the CFD engineer does not recommend containment?

Not really, because the entire basis of airflow management is the full separation of supply and return airflow. Anytime these two airflows mix there is a loss of energy and consistent supply temperature to the IT thermal load.

We have seen CFD’s used by manufactures to prove product effectiveness. What are some ways CFD’s are made to exaggerate product effectiveness?

Exaggerations usually stem from the principle known as GIGO, short for Garbage In, Garbage Out. This refers to the fact that computers operate by logical processes, and thus will unquestioningly process unintended, even nonsensical input data (garbage in) and produce undesired, often nonsensical output (garbage out).

Let me give you an example. Recently I recreated a CFD model that was used to explain the effectiveness of airflow deflectors. The purpose of the CFD was to show the energy savings difference between airflow deflectors and full containment. We found that certain key data points were inserted into the models that do not reflect industry standards. Key settings were adjusted to fully optimize energy savings without regard to potential changes to the environment. Any potentially adverse effects to the cooling system’s ability to maintain acceptable thermal parameters, due to environmental changes, are not revealed in the CFD model. Thus, the model was operating on a fine line that could not be adjusted without a significant impact on its ability to cool the IT load.

Can you give us any specifics?

The airflow volume was manually changed from 154 CFM per kW to 120 CFM per kW; the industry standard is approximately 154 CFM per kW. The formula most commonly used is shown below:
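The interview does not reproduce the formula itself; the block below is the standard sensible-heat relation that the 154 CFM per kW figure appears to come from, assuming a temperature rise of roughly 20 °F across the IT equipment (an assumption, not a value stated in the interview):

```latex
% Standard sensible-heat airflow relation (assumed; not quoted in the interview).
% Q = IT heat load, \Delta T = air temperature rise across the equipment (deg F)
\[
  \mathrm{CFM} \;=\; \frac{Q\,[\mathrm{BTU/hr}]}{1.08 \times \Delta T}
  \;=\; \frac{3412 \times Q\,[\mathrm{kW}]}{1.08 \times \Delta T}
\]
% With Q = 1 kW and \Delta T \approx 20.5 F, this works out to about 154 CFM.
```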

120 CFM per kW does not give the cooling system any margin for potential changes to the environment.

Another key area of unrealistic design is the placement of cabinet thermal load and high volume grates. The base model places high kW loads in specific, isolated areas surrounded by high volume grates. What happens, then, if additional load is placed in areas of low volume airflow? Any changes to the rack kW in areas without high volume grates could not be accounted for. At the end of the day, any change to the IT load would require an additional airflow management audit to determine how it would affect the cooling solution. Thus, the proposed model is unrealistic, because no data center would propose a cooling solution that requires regular modifications.

Are you recommending a CFD study every time you make changes to the data center thermal load?

No. A full separation of supply and return airflow eliminates the guesswork with regard to the effect of air mixing. It also eliminates the need for specific high volume perforated tiles or grates to be placed in front of high kW loads. Instead, a CFD model would incorporate expected increases to the aisle thermal load. This falls in line with a “plus 1” approach to cooling. Creating a positive pressure of supply air has many additional benefits, such as lowering IT equipment fan speed and ensuring a consistent supply temperature across the face of the IT intake.

Data centers should not be operated with little margin for changes or adjustments to the thermal load. That is why I always recommend a full containment solution with as close to 0% leakage as possible. This is always the most efficient way to run a data center, and it always yields the best return on investment. A full containment solution, with no openings at the aisle-end doors or above the cabinets, easily allows the contained cold aisles to operate with a slightly greater supply of air than is demanded. This in turn ensures that the cabinets in the fully contained aisle have a minimal temperature change from the bottom to the top of the rack, which allows the data center operator to choose predictable and reliable supply temperature set points for the cooling units. The result? Large energy savings, longer mean time between failures, and a more reliable data center.

What do you recommend as to the use of CFD studies and containment?

It’s important to create both an accurate baseline and a sustainable cooling solution design. The baseline model gives data center operators an accurate representation of how the facility is currently being cooled. The proposed cooling solution can then be used in numerous ways:

  • Accurate energy savings estimates
  • Safe set point standards
  • Future cabinet population predictions
  • The ability to cool future kW increases
  • Identification and elimination of potential hot spots

Subzero Engineering endorses accurate and realistic CFD modeling that considers real world situations in order to create real world solutions.

Data Center
Educational Article

The Truth About Fans in the Data Center

And how this influences data center airflow management.

Sorry sports fans… this is not about your favorite team. Instead we are going to explore the fascinating world of mechanical fans.

How many times have you seen vendor illustrations of fans pushing air in long blue lines from perforated raised floor tiles into the intake of a rack? The truth is that air does not move that way. Calculating the airflow induced by a particular fan at any given distance from the fan, at any point across the fan’s face, is a very involved set of calculations.

Traditional thermal designs for fans were originally measured like the jet velocity of water jets. This provided a close estimate, but inaccurate data. A 2012 study helped create very accurate formulas for fan outlet velocity and distributions (“Fan Outlet Velocity Distributions and Calculations,” Eli Gurevich and Michael Likov, Intel Corporation, Israel Design Center, Haifa, Israel; David Greenblatt, Yevgeni Furman, and Iliya Romm, Technion Institute of Technology, Haifa, Israel).

Generally, volumetric flow rate and distance traveled decrease when contained air enters ambient room air, which is why mechanical air contractors use ductwork or a contained plenum to direct supply air to the thermal load. Increasing the velocity of air in order to reach the thermal load, instead of using a duct system, is considered inefficient.

It’s important to understand the relationship between mechanical air movement from fans and what actually happens to the airflow. The issue with fans is that the manufacturer’s stated CFM capacity, and the distance the fan is capable of moving air, reflect what the fan is able to produce in a given test environment. Manufacturer-stated air displacement (CFM) is based on what are called normal temperature and pressure (NTP) conditions.

The actual volume of air that a fan can displace varies due to two factors:

1) Air density (hot, low density or cold, high density)
2) Air pressure (positive or negative)

Thus it is important to determine the manufacturer’s test conditions for the fan, and then compare the data to the actual planned environment in which the fan will operate.
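As a rough illustration of the density factor, the sketch below compares the mass of air carried by a fan’s rated volumetric flow at NTP with the same CFM at other supply temperatures. The rated CFM and the temperatures are hypothetical, constant pressure and ideal-gas behavior are assumed, and the pressure effects discussed next are ignored:

```python
# Rough sketch: how air temperature changes the mass flow behind a fan's
# rated CFM. Numbers are hypothetical; constant pressure and ideal-gas
# behavior are assumed, and fan-curve/back-pressure effects are ignored.

NTP_TEMP_F = 68.0   # "normal temperature" commonly used for fan ratings
RATED_CFM = 500.0   # hypothetical rated volumetric flow


def density_ratio(temp_f: float, ref_temp_f: float = NTP_TEMP_F) -> float:
    """Air density relative to the reference temperature (ideal gas, constant pressure)."""
    def to_rankine(t_f: float) -> float:
        return t_f + 459.67
    return to_rankine(ref_temp_f) / to_rankine(temp_f)


for supply_temp_f in (55.0, 68.0, 80.0):
    ratio = density_ratio(supply_temp_f)
    print(f"{supply_temp_f:5.1f} F supply air: {RATED_CFM:.0f} CFM carries "
          f"{ratio:.3f}x the air mass it would at NTP")
```

Cooler air therefore moves slightly more heat per CFM at the same volumetric flow, which is why denser subfloor air is generally a good thing.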

For example, when considering the installation of a fan in the subfloor to move subfloor air into the cold aisle, the first question that should be addressed is: “What air temperature and head pressure will the fan operate in?”

Why? The temperature of the air will determine its density when confined to a constant volume. In most cases the subfloor air is denser, which is good. Thus the more important question is about the subfloor pressure. It is not unusual to have negative pressure areas in the subfloor due to high velocity air streams. The Bernoulli principle explains our concern: an increase in air speed results in a decrease in air pressure. Additionally, when two streams of high velocity air intersect from opposing directions, the result is often a subfloor vortex, resulting in a reversal of airflow.
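For reference, the Bernoulli relationship behind that concern can be written compactly for (approximately) incompressible flow along a streamline; this is the textbook form, not a formula taken from the article:

```latex
% Bernoulli's principle along a streamline (incompressible, ideal flow)
% p = static pressure, \rho = air density, v = air velocity
\[
  p + \tfrac{1}{2}\rho v^{2} = \text{constant}
  \qquad\Longrightarrow\qquad
  \text{as } v \text{ increases, } p \text{ decreases}
\]
```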

So what’s the point? Imagine putting a raised floor fan system over an area with negative pressure. This would negatively affect the fan’s ideal operating conditions.

Consider this: what is the typical reason for using additional fans to move air into the cold aisle? Most likely the unassisted perforated tile or grate is not able to deliver sufficient airflow to the thermal load of the racks. What if this is caused by inadequate subfloor pressure? If that is the case, adding a fan-assisted raised floor panel will require taking the fan’s NTP conditions into consideration. It can also drastically and unpredictably impact other areas of the data center as you “rob Peter to pay Paul,” so to speak.

Consider the following subfloor airflow management strategies:

1) Eliminate high velocity air: This will ensure a more balanced delivery of air due to a normalized subfloor pressure.
2) Cold Aisle Containment: Instead of designing rack cooling by placing an airflow-producing raised floor tile at the feet of each rack, why not create a cold aisle that is not dependent on perforated tile placement?

Cold aisle containment creates a small room of supply air that can be accessed by all IT equipment fans. Instead of managing each supply raised floor tile, the only requirement is ensuring positive air pressure in the aisle. Cold aisle containment systems provide several benefits: most contained cold aisles will only have a one-degree differential from the bottom to the top of the rack, and the cold aisle containment does not require high air velocity, which can create other airflow management problems, such as bypassing IT equipment intake.

Understanding the NTP conditions of IT equipment cooling fans is an important aspect of data center airflow management. For example, in order to properly adjust CRAC unit set points, it is important to know the temperature at which the supply air’s density will drop below each fan’s NTP conditions. It is possible to raise the supply temperature to a level at which an increase in fan speed would be required to make up for the less dense airflow, potentially offsetting any energy savings from the higher cooling set point.

Simply adding fans to cool IT equipment is not a quick fix; it is imperative to first understand why sufficient airflow is not available. It is important to understand the fan’s NTP in the proposed environment, and to see if you can supply IT equipment with consistent airflow by simply separating supply and return air through data center containment. Containment can prevent the unnecessary use of additional electricity that is required to operate fans, saving money and electricity in the long term.

Data Center
Educational Article

The Truth Behind Data Center Airflow Management: It’s Complicated

What do principles of air movement have to do with data center airflow management?

Does hot air rise? The answer of course is “yes”.

Does hot air fall? The answer is yes again.

What about sideways? Yes!

Heat can move up, down, or sideways, depending on the situation. The idea that hot air has an inherent desire to flow up is a misconception that we in the data center airflow management business would like to see dissipate.

Temperature difference is the major factor with regard to the direction and rate of heat transfer. Because mixed air tends toward thermal equilibrium, it is important to maintain physical separation of hot and cold air in data centers; the need for hot and cold air separation is the reason the data center containment industry came into existence. The laws of thermodynamics state that heat moves from areas of higher temperature towards areas of lower temperature. Air is a fluid, so both density and buoyancy come into play. When air is heated its molecules move faster, which causes it to expand, and as it expands its density decreases. The warmer, lower density air will rise above the denser, cooler air.
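A quick ideal-gas calculation makes the density point concrete. The temperatures below are arbitrary examples chosen only to show that warmer air at the same pressure is measurably less dense:

```python
# Ideal-gas density of dry air at two temperatures and the same pressure.
# Illustrative only; the temperatures are arbitrary examples.

R_DRY_AIR = 287.05       # J/(kg*K), specific gas constant for dry air
PRESSURE_PA = 101_325.0  # standard atmospheric pressure


def air_density(temp_c: float) -> float:
    """Dry-air density in kg/m^3 from the ideal gas law."""
    return PRESSURE_PA / (R_DRY_AIR * (temp_c + 273.15))


cool_c, warm_c = 18.0, 35.0  # e.g. supply air vs. exhaust air
print(f"{cool_c:.0f} C air: {air_density(cool_c):.3f} kg/m^3")
print(f"{warm_c:.0f} C air: {air_density(warm_c):.3f} kg/m^3")
# The warmer air is a few percent less dense, so it rises above the cooler
# air through buoyancy -- not because heat has an inherent desire to go up.
```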

Pressure is another determining factor when looking at air movement. Air flows from areas of high pressure to areas of low pressure because the pressure difference exerts a net force on the air. Equilibrium is also what drives movement between areas of differing pressure, so uninhibited air will continuously move from high to low pressure until equilibrium is reached. This movement towards equilibrium is also known as expansion.

Principles of air movement:

1) Heat Transfer:
a. Conduction: Heat flows from a higher temperature region to a lower temperature region between media in physical contact.
b. Convection: Heat transfer due to the movement of a fluid; it can be free/natural or forced.
2) Air flows from higher pressure to lower pressure.


What does this have to do with data center airflow management?

The data center containment industry has been inundated with graphs depicting airflow, most of which show large, sweeping lines indicating the flow of air. In most cases, the airflow depicted is a result of a mechanical device, usually a fan. The data presented by these graphs tends to lead one to believe that mechanically induced airflow will sufficiently separate hot exhaust air from cold intake air. In real-world scenarios, air curtains are inefficient and ineffective.

Modern mechanical air conditioning systems rely on four sided duct systems to deliver supply air to the source of the heat load, and the return is moved by the same means. This is the only way to ensure the separation of supply and return airflow. Systems administrators and building managers should be dubious of airflow management systems that require an increase in energy to accomplish air separation. Instead, it is best to apply the simplest principles of airflow when designing a system aimed at full separation of supply and return airflow.

Data Center
Educational Article

Extending the Capacity of the Data Center Using Hot or Cold Aisle Containment

What does consistent supply air across the face of the rack have to do with increased data center capacity?

Hot or cold aisle containment can significantly increase the capacity of a data center because consistent intake temperatures across the rack face allow all Us to be fully populated.

Additionally, when power that would otherwise be spent on cooling can be redirected to IT equipment, this too can extend the life of a data center that is running out of power.

Problem Statement – Air Stratification
Most data centers without containment have air stratification. Air stratification occurs when supply and return air mix, creating several temperature layers along the intake face of the rack. It is not uncommon for the temperature at the bottom of the rack to be 8 to 10 degrees colder than at the top. As a result, many data centers have implemented policies that do not allow the top 6 to 8 Us to be populated. This can decrease the data center’s IT equipment capacity by roughly 16%. Capacity from a space perspective is one thing, but when the unpopulated Us could have held high density systems, the lost capacity is amplified.
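For context, the roughly 16% figure is consistent with a standard 42U rack, an assumption the article does not state explicitly; leaving the top 6 to 8 U empty removes:

```latex
% Fraction of rack capacity lost, assuming a 42U rack (assumption, not stated above)
\[
  \tfrac{6}{42} \approx 14\%, \qquad
  \tfrac{7}{42} \approx 17\%, \qquad
  \tfrac{8}{42} \approx 19\%
\]
```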

Data Center
Success Story

Datacenter Revamps Cut Energy Costs At CenturyLink

by Timothy Prickett Morgan — EnterpriseTech Datacenter Edition

Subzero Engineering’s containment solutions contributed to CenturyLink’s hefty return on investment

It is probably telling that these days datacenter managers think of the infrastructure under their care more in terms of the juice it burns and not by counting the server, storage, and switch boxes that consume that electricity and exhale heat. Ultimately, that power draw is the limiting factor in the scalability of the datacenter and using that power efficiently can boost processing and storage capacity and also drop profits straight to the bottom line.

Three years ago, just as it was buying public cloud computing provider Savvis for $2.5 billion, CenturyLink took a hard look at its annual electric bill, which was running at $80 million a year across its 48 datacenters. At the time, CenturyLink had just finished acquiring Qwest Communications, giving it a strong position in the voice and data services for enterprises and making it the third largest telecommunications company in the United States. CenturyLink, which is based in Monroe, Louisiana, also provides Internet service to consumers and operates the PrimeTV and DirectTV services; it has 47,000 employees and generated $18.1 billion in revenues in 2013.

One of the reasons why CenturyLink has been able to now expand to 57 datacenters – it just opened up its Toronto TR3 facility on September 8 – comprising close to 2.6 million square feet of datacenter floor space is that it started tackling the power and cooling issues three years ago.

The facilities come in various shapes and sizes, explains Joel Stone, vice president of global data center operations for the CenturyLink Technology Solutions division. Some are as small as 10,000 square feet, others are more than ten times that size. Two of its largest facilities are located in Dallas, Texas, weighing in at 110,000 and 153,700 square feet and both rated at 12 megawatts. The typical facility consumes on the order of 5 megawatts. CenturyLink uses some of that datacenter capacity to service its own telecommunications and computing needs, but a big chunk of that power goes into its hosting and cloud businesses, which in turn provide homes for the infrastructure of companies from every industry and region. CenturyLink’s biggest customers come from the financial services, healthcare, online games, and cloud businesses, Stone tells EnterpriseTech. Some of these customers have only one or two racks of capacity, while others contract for anywhere from 5 megawatts to 7 megawatts of capacity. Stone’s guess is that all told, the datacenters have hundreds of thousands of servers, but again, that is not how CenturyLink, or indeed any datacenter facility provider, is thinking about it. What goes in the rack is the customers’ business, not CenturyLink’s.

“We are loading up these facilities and trying to drive our capacity utilization upwards,” says Stone. And the industry as a whole does not do a very good job of this. Stone cites statistics from the Uptime Institute, which surveyed colocation facilities, wholesale datacenter suppliers, and enterprises and found that they actually use only around 50 percent of the power that comes into their facilities. “We are trying to figure out how we can get datacenters packed more densely. Space is usually the cheapest part of the datacenter, but the power infrastructure and the cooling mechanicals are where the costs reside unless you are situated in Manhattan where space is such a premium. We are trying to drive our watts per square foot higher.”

While server infrastructure is getting more powerful in terms of core counts and throughput, and storage is getting denser and, in the case of flash-based or hybrid flash-disk arrays, faster, the workloads are growing faster still, and therefore the overall power consumption of the infrastructure as a whole in the datacenter continues to grow.

“People walk into datacenters and they have this idea that they should be cold – but they really shouldn’t be,” says Stone. “Servers operate optimally in the range of 77 to 79 degrees Fahrenheit. If you get much hotter than that, then the server fans have to kick on or you might have to move more water or chilled air. The idea is to get things optimized. You want to push as little air and flow as little water as possible. But there is no magic bullet that will solve this problem.”

Companies have to do a few things at the same time to try to get into that optimal temperature zone, and CenturyLink was shooting for around 75 degrees at the server inlet, compared to 68 degrees in the initial test in the server racks at a 65,000 square foot datacenter in Los Angeles. Here’s a rule of thumb: for every degree Fahrenheit that the server inlet temperature was raised in the datacenter, it cut the power bill by 2 percent. You can’t push it too far, of course, or you will start impacting the reliability of the server equipment. (The supplied air temperature in this facility was 55 degrees and the server inlet temperature was 67 degrees before the energy efficiency efforts got under way.)
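Taken at face value, the rule of thumb works out roughly as follows for the 68 °F to 75 °F change described above; this is only arithmetic on the quoted figures, not data reported by CenturyLink:

```python
# Back-of-the-envelope: ~2% cooling-energy savings per 1 F of server inlet
# temperature raise, applied to the 68 F -> 75 F change described above.
# Purely illustrative arithmetic on the quoted rule of thumb.

start_f, target_f = 68.0, 75.0
savings_per_degree = 0.02

degrees_raised = target_f - start_f
simple_estimate = degrees_raised * savings_per_degree            # additive
compounded = 1.0 - (1.0 - savings_per_degree) ** degrees_raised  # multiplicative

print(f"Simple estimate:     {simple_estimate:.0%}")   # ~14%
print(f"Compounded estimate: {compounded:.1%}")        # ~13.2%
```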

The first thing is to control the airflow in the datacenter better, and the second is to measure the temperature of the air more accurately at the server so cooling can be maintained in a more balanced way across the facility.

CenturyLink started work on hot aisle and cold aisle containment in its facilities three and a half years ago, and the idea is simple enough: keep the hot air coming from the back of the racks from mixing with the cold air coming into the datacenter from chillers. The containment project is a multi-year, multi-million dollar effort, and CenturyLink is working with a company called SubZero Engineering to add containment to its aisles. About 95 percent of its facilities now have some form of air containment, and most of them are doing hot aisle containment.

“If we can isolate the hot aisles, that gives us a little more ride through from the cold aisles if we were to have some sort of event,” Stone explains. But CenturyLink does have some facilities that, just by the nature of their design, do cold aisle containment instead. (That has the funny effect of making the datacenter feel hotter because people walk around the hot aisles instead of the cold ones and sometimes gives the impression that these are more efficient. But both approaches improve efficiency.) The important thing about the SubZero containment add-ons to rows of racks, says Stone, is that they are reusable and reconfigurable, so as customers come and go in the CenturyLink datacenters they can adjust the containment.

Once the air is contained, then you can dispense cold air and suck out hot air on a per-row basis and fine-tune the distribution of air around the datacenter. But to do that, you need to get sensors closer to the racks. Several years ago, it was standard to have temperature sensors mounted on the ceiling, walls, or columns of datacenters, but more recently, after starting its aisle containment efforts, CenturyLink tapped RF Code to add its wireless sensor tags to the air inlets on IT racks to measure their temperature precisely rather than using an average of the ambient air temperature from the wall and ceiling sensors. This temperature data is now fed back into its building management system, which comes from Automated Logic Control, a division of the United Technologies conglomerate. (Stone said that Eaton and Schneider Electric also have very good building management systems, by the way.)

The energy efficiency effort doesn’t stop here. CenturyLink is now looking at retrofitting its CRAC and CRAH units – those are short for Computer Room Air Conditioner and Computer Room Air Handler – with variable speed drives. Up until recently, CRAC and CRAH units were basically on or off, but now they can provide different levels of cooling. Stone says that running a larger number of CRAH units at lower speeds provides better static air pressure in the datacenter and uses less energy than having a small number of larger units running faster. (In the latter case, extra cooling capacity is provided through extra units, and in the former it is provided by ramping up the speed of the CRAH units rather than increasing their number.) CenturyLink is also looking at variable speed pumps and replacing cooling tower fans in some facilities.
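The energy argument for running more CRAH units at lower speed is usually explained by the fan affinity laws, under which fan power scales roughly with the cube of speed while airflow scales roughly linearly. The article does not spell this out, so the sketch below is a generic illustration with made-up unit counts:

```python
# Fan affinity law sketch: power ~ speed^3, airflow ~ speed.
# Unit counts are made up for illustration; real CRAH behavior also depends
# on static pressure and coil performance, which this ignores.

def relative_power(units: int, speed_fraction: float) -> float:
    """Total fan power relative to a single unit at full speed."""
    return units * speed_fraction ** 3


def relative_airflow(units: int, speed_fraction: float) -> float:
    """Total airflow relative to a single unit at full speed."""
    return units * speed_fraction


# Same total airflow delivered two ways: few units fast vs. many units slow.
scenarios = {
    "4 units @ 100% speed": dict(units=4, speed_fraction=1.0),
    "8 units @  50% speed": dict(units=8, speed_fraction=0.5),
}

for label, cfg in scenarios.items():
    print(f"{label}: airflow {relative_airflow(**cfg):.1f}, "
          f"relative power {relative_power(**cfg):.2f}")
```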

“We are taking a pragmatic, planned approach across our entire footprint, and we have gone into the areas where we are paying the most for power or have the highest datacenter loads and tackling those facilities first,” says Stone. The energy efficiency efforts in the CenturyLink datacenters have to have a 24 month ROI for them to proceed.

In its Chicago CH2 datacenter (one of three around that Midwestern metropolis and one of the largest run by CenturyLink in its fleet of facilities), it did aisle containment, RF Code sensors, variable speed CRAC units, variable speed drives on the pumps, and replaced the cooling tower fans with more aerodynamic units that run slower and yet pull more air through the cooling towers. This facility, which is located out near O’Hare International Airport, has 163,747 square feet of datacenter space, has a total capacity of 17.6 megawatts, and can deliver 150 watts per square foot.

CenturyLink reduced the load in that CH2 facility by 7.4 million kilowatt-hours per year, and Stone just last month collected a $534,000 rebate check from Commonwealth Edison, the power company in the Chicago area. All of these upgrades in the CH2 facility cost roughly $2.4 million, and with the power savings the return on investment was on the order of 21 months – and that is before the rebate was factored in.

Data Center
Product Insight

Don’t cage your computer!

Subzero Engineering is partnering with colo providers to create cageless solutions for their customers.

Here’s what we are doing. We have combined Subzero’s aisle-end doors, which feature auto-close and locking mechanisms, with airflow management cabinets that each lock securely, to create a safe colo environment that does not require cages.

• Locking Aisle End Doors
• Locking Cabinets
• Auto Close
• Complete Airflow Separation

Advances in containment and cabinets have created a fully secure colo environment without traditional wired cages. Instead, secure aisle-end doors, retractable roofs, and biometric locks create an easy-to-deploy, secure space for IT equipment.

A typical deployment includes:

• Locks ranging from simple keyed to biometric
• Auto-closing doors that prevent accidental access
• Locking aisle end doors
• Locking cabinets
• Retractable roof system

Data Center
Educational Article

California Title 24 – It’s a Hole in One!

About Title 24

On July 1, 2014, California’s new energy efficiency standards went into effect. Among other things, Title 24 1) prohibits reheat in computer rooms and 2) requires containment in large, high-density data centers with air-cooled computers. In order to prevent hot air from mixing with air that has been mechanically chilled, data centers will need to modify their existing facilities to provide hot and cold aisle containment.

Why is Title 24 a good thing for data centers everywhere?

Data centers worldwide can benefit from the research done by the State of California. For instance, California determined that a 20,000 sq. ft. data center with a load of 100 Watts per sq. ft. could save up to a whopping $10,500,000 per year on energy expenses by implementing four energy efficient strategies. Imagine the potential savings if a nationwide effort were made.

State Requirements Vs. Corporate Initiative

No doubt state requirements are a great way to get companies to comply with new efficiency standards. That said, most states don’t have the requirements that California has. Should this cause corporations to scale back their green initiatives? Of course not! Containment is an easy way to save money and contribute to lowering a company’s carbon footprint. Hundreds of companies have installed containment systems, saved money, and increased the reliability of their cooling solutions. Why not your company?

What’s next?

While many data centers have an ‘area’ of containment, the real energy savings come only when all of the cooling air is separated, from supply to return, site wide. This requires a data center wide containment solution. Check out the ways Subzero Engineering has addressed the many aspects of data center wide containment.

Join California and the dozens of companies that have made a commitment to a site-wide containment solution.

Data Center
Success Story

NYI Rolls Out New Cold Aisle Containment System Within Data Centers

New York, NY

Deployment of Cold Aisle Containment Technology Reduces Energy Usage and Optimizes Equipment Performance

NYI, a New York company specializing in customized technology infrastructure solutions, announces today the deployment of Cold Aisle Containment (CAC) technology at its US data center facilities. As part of an initiative to implement the latest energy efficiency technologies, NYI is working closely with the New York State Energy Research and Development Authority (NYSERDA), sharing the state agency’s mission of exploring innovative energy solutions in ways that improve New York’s economy and environment. NYI’s CAC deployment is made possible through its partner, Subzero Engineering, a designer, manufacturer, and installer of custom, intelligent containment systems and products for data centers worldwide.

Data Center Cold Aisle Containment fully separates the cold supply airflow from the hot equipment exhaust air. This simple separation creates a uniform and predictable supply temperature to the intake of IT equipment and a warmer, drier return air to the AC coil. Hot aisle and cold aisle containment are primary ways today’s leading businesses, like NYI, help reduce the use of energy and optimize equipment performance within their data centers.

“By adopting Cold Aisle Containment, NYI is increasing air efficiencies within its facilities, thereby translating to increased uptimes, longer hardware life and valuable cost and energy savings for NYI customers,” comments Lloyd Mainers, Engineer for Subzero Engineering. “Through efficiency, CAC also allows for the availability of additional capacity and increased load density, paving the way for higher density customer deployments.”

“When it comes to data center capacity, NYI is constantly monitoring our power density levels to ensure that we are spreading the capacity throughout our data centers most efficiently and decreasing our effect on the environment,” adds Mark Ward, Director of Business Development of NYI. “Cold Aisle Containment helps us to attain that level of efficiency, and not to mention, there are several government and cash incentives for incorporating it into our facilities. Above all, our customers benefit in that their equipment is cooled more effectively, reducing strain on the equipment’s own cooling mechanism and extending the lifespan of their servers.”

NYI Cold Aisle Containment Benefits include:

• Predictable, reliable, and consistent temperature to IT equipment at any point inside the aisle
• One (1) degree temperature difference from top to bottom
• Double or triple kW per rack
• Reduced white space requirements through optimized server racks
• Average of 30% energy cost savings
• Consistent, acceptable supply to IT intake
• Leaves more power available for IT equipment; increased equipment uptime
• Longer hardware life
• Increased fault tolerance (i.e. HVAC units that were required to achieve certain temperature goals are now redundant.)
• US Department of Energy recommended

Company
Press Release

Lights, Camera, Action!

Get the popcorn out, it’s time to watch some videos!

Subzero Engineering presents 12 new videos. Learn more about NFPA compliant containment, new cageless containment bundles, and our new auto closer with soft close features; hear from attendees at Data Center World in Las Vegas; and much more.

Videos are a great way to see products, learn about product features and benefits from the people who created them, see product form and function, and hear industry experts share their thoughts on product value.

Look for Bernard the bear photo bombs!

Data Center
Product Insight

Check out our new fully NFPA compliant retractable roof system!

This game-changing roof system ensures that the containment roof obstruction is fully removed, electronically, as soon as a smoke detector goes into alarm.

Additional benefits include the ability to wirelessly open the roof for maintenance above the containment, a modular design that allows the size to be increased or decreased, ease of deployment, and a sleek look that is easy on the eyes. This is the ultimate containment roof system.

Polar Cap Retractable Roof Available Fall of 2014!

The patented Subzero Engineering Polar Cap is the first fully NFPA compliant containment roof system that attaches to the top of the racks and forms a ceiling that prevents hot and cold air from mixing.

Most data center containment systems rely on the heat generated by a fire-related incident to release the containment system, since the containment can pose an obstacle to the fire suppression agent. The NFPA has determined that it is important to have a faster response time and, more importantly, a testable system.

The updated Subzero Polar Cap retractable roof system is now a fully electric roof system that retracts into a metal housing when the fire suppression system goes into alarm. Having a pre-action system that reacts to a smoke detector ensures that the containment roof is fully retracted long before the fire suppression system discharges. Additionally, the roof material carries the highest fire-resistance rating, ASTM E-84 Class A.

The Polar Cap can also be opened and closed when maintenance is required above the containment space.

The new roof system is fully customizable in both length (up to 30’) and width (up to 5’). The aluminum profile is less than 6” high, so it presents no problem with obstructions above the cabinets.