
Data Center
Product Insight

Check out our new fully NFPA compliant retractable roof system!

This game-changing roof system electronically retracts the containment roof, fully removing the overhead obstruction the moment a smoke detector alarms.

Additional benefits include wireless opening for maintenance above the containment, a modular design that allows the system to grow or shrink in size, easy deployment, and a sleek look that is easy on the eyes. This is the ultimate containment roof system.

Polar Cap Retractable Roof Available Fall of 2014!

The patented Subzero Engineering Polar Cap is the first fully NFPA compliant containment roof system that attaches to the top of the racks and forms a ceiling that prevents hot and cold air from mixing.

Most data center containment systems rely on the heat generated by a fire-related incident to release the containment system, since the containment can pose an obstacle to the fire suppression agent. The NFPA has determined that it is important to have a faster response time and, more importantly, a testable system.

The updated Subzero Polar Cap retractable roof system is now a fully electric roof system that retracts into a metal housing when the fire suppression system is alarmed. Having a pre-action system that reacts to a smoke detector ensures that the containment roof is fully retracted long before the fire suppression system is discharged. Additionally, the roof material carries the highest fire-resistance rating, ASTM E-84 Class A.

The Polar Cap can also be opened and closed when maintenance is required above the containment space.

The new roof system is fully customizable in both length (up to 30’) and width (up to 5’). The aluminum profile is less than 6” high, so it poses no obstruction problems above the cabinets.

Data Center
Product Insight

Mind the Gap – The importance of gap-free data center containment door systems

By Subzero Engineering CEO, Larry Mainers

We need a “Mind the gap” philosophy in airflow management, especially containment doors.

No visit to London, England is complete without seeing the London Underground, or, as it is affectionately called, “The Tube”. The Tube connects the greater part of London with 270 stations. It’s an easy and inexpensive way to get around London town.

In 1969 an audible and/or visual warning system was added to prevent passengers’ feet from getting stuck between the platform and the moving train. The phrase “Mind the gap” was coined and has become associated with the London Underground ever since.

“Mind the gap” has been imitated all over the world. You will find versions of it in France, Hong Kong, Singapore, New Delhi, Greece, Sweden, Seattle, Brazil, Portugal, New York, Sydney, Berlin, Madrid, Amsterdam, Buenos Aires, and Japan.

It is my hope that this phrase can be embraced by the data center industry. Gaps in airflow management, especially when containment is deployed, are an easy way to lose both energy and overall cooling effectiveness. We need a “Mind the gap” philosophy in airflow management, especially containment doors. Why doors? Because of the door’s moving parts, it is less expensive for manufacturers to leave a gap than to build a door that fully seals. Data center managers who “Mind the gap” should insist on door systems that don’t leak.

Just how important is it to “Mind the gap” in your data center containment system? One way to see the importance of eliminating gaps is to take a look around your own house. How many acceptable gaps do you have around your windows? Do you conclude that since the window is keeping most of the conditioned air inside, a few gaps around the windows will not matter much? I doubt it. The fact is, utility companies have been known to provide rebates for the use of weather stripping to ensure all gaps are filled. Gaps equal waste. Over time any waste, no matter how small, ends up being substantial.

Gaps become an even more important area to fill when you consider that most contained aisles are under positive pressure. Positive pressure can increase the leakage fourfold. A cold aisle that is oversupplied should move air through IT equipment and not aisle end doors.
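As an illustrative sketch (not from the article), the standard HVAC orifice equation shows how leakage through a door gap grows with positive pressure. The gap dimensions, discharge coefficient, and air density below are assumptions for the example:

```python
import math

def gap_leakage_cfm(gap_area_sqin, delta_p_in_wc, discharge_coeff=0.65):
    """Estimate airflow (CFM) through a gap using the orifice equation.

    Common HVAC form: Q = 1096 * Cd * A * sqrt(dP / rho), with A in
    square feet, dP in inches of water column, rho in lb/ft^3.
    """
    rho = 0.075  # lb/ft^3, standard air density (assumption)
    area_sqft = gap_area_sqin / 144.0
    return 1096.0 * discharge_coeff * area_sqft * math.sqrt(delta_p_in_wc / rho)

# Hypothetical example: a 1/2-inch gap around a 7 ft x 4 ft door
# (22 ft perimeter = 264 linear inches -> 132 sq in of open area).
leak = gap_leakage_cfm(132, 0.01)
```

Because flow scales with the square root of pressure, quadrupling the aisle pressure doubles the leakage through the same gap; the only real fix is closing the gap itself.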

It’s important that we all “Mind the gap” in our data center containment doors. In this way we both individually and collectively save an enormous amount of energy, just as the world’s mass transit systems, like ‘The Tube’, do for us each and every day.

Company
Events

Data Center World Spring 2014 Conference was a blast!

We had a great time at the Data Center World Spring 2014 Conference.

We met with some amazing people, showed off our new products, worked with a fantastic film crew, learned how to apply for and receive utility rebates with environmental monitoring from Larry Mainers, and enjoyed some of the fun that is Las Vegas.

It was such a great show that we have decided to go to the Orlando Conference – October 19-22. We hope to see you there!

Click here to see some of the pictures from the conference on our Facebook page.

Data Center
Press Release

What’s new at Subzero Engineering for 2014

At Subzero Engineering we are always looking for new ways to improve our products: making them more energy efficient, keeping them NFPA compliant, and adding more standard features.

This year is no exception!

We have been working hard taking our world-class products and making them even better. Here are a few of the changes we are making for 2014.

Product Announcements

• New Polar Cap Retractable Roof – The first fully NFPA compliant containment roof system
• New Arctic Enclosure Sizes Available – Two new 48U cabinets available
• Power Management – We now offer a full line of Raritan power products
• New Elite Series Doors – All of our doors have a sleek new design & come with extra features, standard
• New Panel Options – 3MM Acrylic, 4MM Polycarbonate, 3MM FM4910

Data Center
Educational Article

Airflow Management’s Role in Data Center Cooling Capacity

White Paper – Airflow Management’s Role in Data Center Cooling Capacity, by Larry Mainers

Airflow management (AFM) is changing the way data centers cool the IT thermal load.

In the simplest terms AFM is the science of separating the cooling supply from the hot return airflow.

AFM’s impact on cooling capacity is huge. This is because the traditional cooling scenario without the full separation of supply and return airflow requires as much as four times the cooling capacity to satisfy the same thermal load. This creates the unnecessary need for cooling units due to airflow inefficiency.

Data center managers can easily determine the percentage of inefficiency by counting the tons of available cooling capacity and measuring it against the IT thermal load measured in kW.

Summary


For example, 40 tons of cooling capacity will cool 140.67 kW. Thus the average data center without AFM might have as much as 160 tons of cooling to mediate 140 kW.
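The arithmetic behind that example can be sketched in a few lines of Python; the conversion factor is the standard 3.517 kW per ton of refrigeration:

```python
TON_TO_KW = 3.51685  # 1 ton of refrigeration ~ 3.517 kW (standard conversion)

def cooling_overprovision(tons_installed, it_load_kw):
    """Installed cooling capacity in kW, and its ratio to the IT thermal load."""
    capacity_kw = tons_installed * TON_TO_KW
    return capacity_kw, capacity_kw / it_load_kw

# The article's figures: 40 tons ~ 140.67 kW; 160 tons against a 140 kW load
capacity, ratio = cooling_overprovision(160, 140.0)  # ratio ~ 4x overprovisioned
```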

It’s easy to conclude that if AFM can reduce the excessive cooling capacity, that operating cost could likewise be reduced.

There are three typical ways to reduce the energy use of cooling units:

1) Turning off cooling units
2) Reducing fan speeds
3) Increasing temperature and decreasing relative humidity set points.

It is important to note that there is a fundamental difference between what is required to change the volume of airflow and what is required to change the supply temperature/RH. To capture this difference, the engineers at Subzero Engineering coined the term UNIFLOW in 2005. Uniflow describes air that moves in only one direction.

Data center airflow should flow from the cooling unit directly to the IT thermal load and back to the cooling unit intake. This unidirectional airflow should correspond to the volume of air required to carry the thermal load of the IT equipment. Anything in excess of this airflow requires additional cooling energy to create the volume, or CFM. Thus, any time you plug leaks in the Uniflow, fan speed can be reduced.
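The payoff from plugging leaks and slowing fans can be sketched with the fan affinity laws (a standard HVAC relationship, not stated in the article): fan power scales with roughly the cube of airflow, so a modest CFM reduction yields an outsized fan-energy saving.

```python
def fan_power_fraction(cfm_reduction_pct):
    """Fan affinity laws: power scales with the cube of flow (fan speed).

    Returns the fraction of original fan power remaining after reducing
    airflow volume by the given percentage.
    """
    flow_fraction = 1.0 - cfm_reduction_pct / 100.0
    return flow_fraction ** 3

# A 20% CFM reduction leaves 0.8^3 = 51.2% of fan power -- nearly half saved.
remaining = fan_power_fraction(20)
```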

As you can imagine, the energy saved by reducing excess volume depends on the amount of airflow leakage. It is here that some AFM companies have over-estimated potential energy savings. It is common to hear that plugging your cable cutouts will save 30% of energy. That is only true if you have leakage that amounts to 30% of excess volume, and only if that volume can actually be adjusted.

Note: In some cases additional energy can be wasted when cold supply air bypasses the thermal load and returns to the cooling unit. This is when the cooling unit is required to lower the RH of the return air.

The other part of the cooling efficiency equation is adjusting the cooling units’ set points. This can only be accomplished when the intake temperature and RH across the face of the IT equipment are within the acceptable manufacturer’s range.

This can be likened to the ‘weakest link’ in a chain. Cooling set points are fixed to the warmest IT intake. The warmest IT intake is the weakest link in the chain.

Understanding these fundamentals helps IT managers when they are presented with AFM solutions and the estimated energy savings.

How then can data center managers determine the true energy savings with each AFM best practice solution? Which best practice solution delivers the most return on investment? What would the proper AFM road map look like?

Background

AFM is not new to data centers. Cable Cutout Covers (CCC), first introduced in the 1990s, were the industry’s first experience with eliminating UNIFLOW leaks.

Another area addressed was the misplacement of perforated tiles. Previously, it was common to see perforated tiles placed between computer racks and the cooling units. This caused what is called a ‘short cycle’ where cooling air doesn’t pass through the thermal load before returning to the cooling unit’s return.

Today the industry fully embraces the need to properly place supply perforated tiles in the cold aisle and eliminate leaks with cable cutout covers in the hot aisle.

Another common and longstanding AFM tool is the Blanking Panel (BP). Principally, the main purpose of the BP is to prevent the migration of hot return air from moving within the cabinet into the cold supply aisle. Additionally, air moving from the cold aisle into the hot aisle without passing through a thermal load is another form of leakage where volume must be made up with increased cubic feet per minute (CFM).

Still another AFM tool is the Velocity Adjustor (VA), invented by Subzero Engineering in 2006. The VA is used to balance subfloor air pressure, ensuring a consistent ‘throw rate’ (CFM) at each perforated tile. It also prevents two rivers of airflow from creating a subfloor eddy that can generate a negative pressure that sucks ambient airflow into the subfloor void. This tool can be used to lower CFM, or airflow volume, because it allows a consistent volume of air into the cold aisle.

Another AFM tool can be found in some IT cabinets. These panels are placed in the six common areas around the cabinet that would allow hot exhaust airflow to migrate back into the cool supply aisle. Like blanking panels, AFM within the cabinet plugs air leakage and lowers the volume of air required.

The most recent innovation in AFM is ‘Containment’.

The term ‘Containment’ is used in two ways. First, it can describe the doors, walls, and ceilings that ‘contain’ airflow; second, it can describe all the tools of AFM combined that create UNIFLOW.

Containment, as it relates to doors, walls, and ceiling systems is the final piece of the puzzle for AFM. This is because consistent IT intake temperatures cannot be achieved by just plugging leaks in the Uniflow. Containment fundamentally changes AFM by managing the largest part of the UNIFLOW.

AFM Tools

  1. Cable Cutout Covers
  2. Perforated Tile Placement
  3. Blanking Panels
  4. Velocity Adjustors
  5. Rack or Cabinet AFM
  6. Containment

AFM Methods and Returns

While all AFM methods contribute to the separation of supply and return airflow, what actual reduction in energy occurs with each method?

Take CCCs, for instance. What cooling unit energy adjustments are made when all of the cable cutouts are covered? The most obvious benefit is the reduction in the volume of air required. Airflow volume can be reduced by turning off cooling unit fans or by slowing fan speed with variable frequency drives (VFDs).

CCCs plug leaks in the Uniflow, but they cannot prevent the mixture of supply and return airflow. Instead, CCCs should be seen as a building block toward full supply and return separation.

What about BP and cabinet airflow management?

These too are necessary components to plug leaks in the Uniflow. The amount of open space has a direct correlation to the amount of air leakage that does not pass through the thermal load. Energy reduction is thus limited to the amount of leakage eliminated.

What about containment in the form of doors, walls, and roof systems?

Containment components represent the largest open spaces in the UNIFLOW. For instance, the open end of a four-foot-wide aisle can account for as much as 30 square feet of space where airflow mixing occurs. Next, add the space above a row of 12 cabinets along that four-foot aisle and you have an additional 96 square feet. Combining the overhead space and the two ends of the row amounts to 156 square feet of open space in one 24-foot aisle alone.
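The square footage above can be reproduced directly; the 7.5-foot rack height and 2-foot cabinet width below are assumptions inferred from the stated 30 and 96 square-foot figures:

```python
def aisle_open_area(aisle_width_ft=4, rack_height_ft=7.5,
                    cabinets=12, cabinet_width_ft=2):
    """Open area (sq ft) around one contained aisle: two ends plus the top."""
    end_area = aisle_width_ft * rack_height_ft   # 4 ft x 7.5 ft = 30 sq ft per end
    row_length_ft = cabinets * cabinet_width_ft  # 12 cabinets -> 24 ft row
    top_area = aisle_width_ft * row_length_ft    # 4 ft x 24 ft = 96 sq ft overhead
    return 2 * end_area + top_area               # 156 sq ft total

total = aisle_open_area()  # matches the 156 sq ft cited above
```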

If the rack row has no opposing cabinets, or has areas with missing cabinets, this square footage can easily double or triple. Clearly these spaces represent the bulk of cold and hot air mixing.

Which AFM tool contributes the most to energy efficiency?

The key to answering this question is found when we examine the individual energy use of the cooling units.

The two main energy components of the cooling units are the fans that produce airflow volume and the mechanisms (chiller compressors and pumps) that produce air temperature and relative humidity.

According to the US Department of Energy (Energy Star), cooling unit fans account for 5% to 10% of total data center energy consumption. In addition, a study by Madhusudan Iyengar and Roger Schmidt of IBM entitled “Energy Consumption of Information Technology Data Centers” concludes that cooling compressors, pumps, etc. account for 25% to 30% of total data center energy consumption.
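A facility-level savings estimate follows directly from those shares: savings = (subsystem’s share of total energy) × (fractional reduction achieved within that subsystem). The 30% reductions below are hypothetical inputs for illustration, not figures from the cited studies:

```python
def facility_savings(subsystem_share, subsystem_reduction):
    """Facility-level savings fraction from cutting one cooling subsystem."""
    return subsystem_share * subsystem_reduction

# Upper ends of the cited ranges, with a hypothetical 30% cut in each:
fan_side = facility_savings(0.10, 0.30)      # volume adjustment: 3% of total energy
chiller_side = facility_savings(0.30, 0.30)  # set-point adjustment: 9% of total
```

The same percentage cut is worth roughly three times more when applied to the compressor/pump side, which is the point the next sentence makes.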

From this we learn that more potential energy savings come from set point adjustment than from air volume adjustment.

And it is here where most of the confusion about AFM energy savings comes from.

Many data center managers were given expectations of huge energy savings when they deployed CCCs and BPs. Instead, the actual energy saved was a small percentage of the total cooling energy. This is because the energy savings experienced were a combination of the amount of leakage that was mitigated and the percentage of energy used in creating that volume.

In contrast, much larger energy reductions have been measured when DC managers contained either the hot or cold aisle. This is due to two reasons:
1) The open space or leakage is greater in the containment space.
2) The energy sources (cooling unit compressors, etc.) use more energy as a percentage than fans that create air volume.

A proof of this can be seen in the way utility companies provide rebates to data centers that reduce energy consumption. Utility companies that offer energy rebates require a true before-and-after energy consumption measurement in order to determine just how much energy is being saved. It is very rare for data centers that used CCCs and installed BPs to get such a rebate, as the energy reduction was not significant. This changed when data centers contained the hot or cold aisle. Data centers with a full AFM solution, incorporating containment, are commonly receiving thousands of dollars in rebates.

Does that mean that containment is the holy grail of AFM? Yes and no. While containment offers the biggest bang for the buck, the need to plug the holes in the Uniflow is still a major part of the overall airflow chain. The good news is that the energy saved when incorporating all aspects of AFM can more than pay for the cost in a matter of 12 to 18 months.

That said, those with a limited budget will get more airflow separation with containment doors, walls, and ceilings than with BP and CCC.

Think of it this way… imagine a bathtub with several pencil-sized holes in it. You can imagine the water pouring from these holes. Now imagine the same tub with one of the sides missing. Which of the two ‘leaks’ would you patch up first?

When faced with airflow management solutions, remember that the largest energy cost is in controlling the temperature of the supply air. Next, know that the largest mixing space is at the aisle ends and top of the rack row. This then supports the road map of supplying a consistent and predictable cooling airflow to the cold aisle that can be adjusted according to the manufacturer’s intake specifications in order to save the most energy.

The good news is that most data centers have some level of cable cutout management and blanking panels. This then stresses the value of getting the bulk of the energy savings by completing AFM with full hot or cold aisle containment.

References

1) https://www.energystar.gov/index.cfm?c=power_mgt.datacenter_efficiency_vsds
2) Schmidt, R. and Iyengar, M., “Thermodynamics of Information Technology Data Centers”

Data Center
Events

Data Center World Conference suggests our talk on Energy Rebates as one of the top 13 sessions to attend

Larry Mainers, CEO of Subzero Engineering, will be giving a session explaining “How to Apply and Receive Utility Rebates with Environmental Monitoring Systems”

This case study will show how a California data center received a large rebate from PG&E after DCEP trained experts implemented a wireless monitoring system that provided key before and after data needed to verify the energy savings to the utility company.

Please join us and see how Subzero’s DCEP trained engineers can assist you through the entire energy rebate process.


Event Details

Theme:
“How to Apply and Receive Utility Rebates with Environmental Monitoring Systems”

Presented by:
Larry Mainers, CEO, Subzero Engineering

Date and Time:
Wednesday, April 30, 2014
Time: 10:30 – 11:30 am

Location:
FAC 3.1

Data Center
Success Story

How Containment and Environmental Monitoring Garner a Large Utility Rebate

Virtustream is a cloud innovator offering enterprise-class cloud solutions for enterprises, governments and service providers.

Subzero Containment leads to energy savings and a $49,777.72 utility rebate

Overview

Virtustream, Inc. is a cloud innovator offering enterprise-class cloud solutions for enterprises, governments, and service providers. Virtustream simplifies moving complex IT to the cloud – whether private, public or hybrid – delivering the full economic and business benefits of the cloud and virtualization.

Virtustream relies heavily on the consistent and reliable operations of their data centers.

In the spring of 2013 Virtustream researched ways to decrease their operational cooling costs. Additionally, Virtustream was determined to benefit from the generous utility rebates provided by Pacific Gas and Electric Company – PG&E.

From a cooling perspective, they chose containment technology that separates the supply and return airflow since it was a tried and proven system that delivers huge cooling energy reduction.

Virtustream employed Subzero Engineering to manage the project due to their experience in four key areas:

1) Containment technology
2) Utility rebate programs
3) DCEP certified engineers
4) PolarXpress wireless environmental monitoring system.

Challenges

Increase cooling reliability while increasing IT performance, and get the local utility company to share in the costs.

Step One – Working with the Utility Company

The first step was to meet with the utility company to outline the rebate requirements. Representatives from PG&E, Virtustream, and Subzero Engineering met at the data center to walk through the program. Subzero supplied two Department of Energy (DOE) Data Center Efficiency Practitioner (DCEP) certified engineers familiar with airflow audits and wireless environmental monitoring.

Step Two – Gather Existing Environmental Data

The second step was to install the PolarXpress wireless environmental monitoring system so that we could fully understand the current airflow patterns, temperature, and relative humidity of the intake at the IT equipment. Measurements were taken all across the face of the racks, as well as the return airflow back to the CRAC units. Thousands of data points were collected. The data revealed that the supply temperature was at its lowest setting, but by the time the cooling air reached the middle of the rack, the temperature had increased by over 6 degrees. Worse yet, temperatures at the highest U were off by over 10 degrees! Clearly, the mixture of supply and return airflow required the cooling units to work overtime just to keep up with the thermal demand.

Step Three – Install the Containment System

Step three was to eliminate the mixing of supply and return airflow. Subzero Engineering installed aisle end doors and a roof system onto the cold aisle. Gaps were filled inside racks to prevent hot air from migrating into the cold aisle.

Step Four – Post Environmental Monitoring

The next step was to measure the results of the containment’s separation of hot and cold airflow. The data revealed that the supply temperature had dropped 11 degrees! Additionally, the temperature differential between the bottom of the rack and the top was only 1 degree.

Step Five – Harvesting the Energy Savings

Step five was to slowly increase the CRAC temperature set points to match the IT equipment supply air temperature. With the supply temperature no longer lost to mixing, the CRAC set points could be brought to within a degree or two of the actual required intake temperature. This greatly reduced the amount of refrigeration required to maintain thermal cooling of the IT equipment.
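As a rough sketch of why set point increases matter (the per-degree figure below is a common rule of thumb, not a number from this case study; real savings vary by plant and must be verified with before-and-after measurement, as this project did):

```python
def chiller_savings_pct(delta_f, pct_per_degree_f=1.7):
    """Rough chiller-energy savings from raising supply set points.

    pct_per_degree_f is an assumed rule-of-thumb value (~1.7% per deg F);
    actual results depend on the plant and must be measured.
    """
    return delta_f * pct_per_degree_f

estimate = chiller_savings_pct(8)  # ballpark for an 8 deg F set point increase
```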

Final Step – Receiving the PG&E Rebate

The before and after data was then presented to PG&E for their internal auditing. As a result of the energy reductions that were proven to be sustainable by the PolarXpress environmental monitors, PG&E was able to award Virtustream a substantial energy rebate.

Solutions

  • Containment of supply and return airflow
  • DCEP trained engineers work with PG&E to gather required data
  • PolarXpress wireless environmental monitoring system to gather pre and post readings

Results

A 10-degree drop in supply air temperature, which was converted into energy savings through an 8-degree increase in temperature set points. Virtustream was awarded a $49,777.72 utility rebate.

Conclusion

Containment of the supply and return airflow now provides the Virtustream data center with a consistent, reliable temperature at the IT intake. At the same time, it reduces the cost of cooling by reducing the temperature set points. Finally, the environmental monitoring system (PolarXpress) provided real-time data to support a large energy rebate from the local utility company. Virtustream is meeting the challenge of increasing the IT performance, while at the same time reducing its operational costs.

Data Center
Success Story

Host.net Cold Aisle Containment significantly impacts Boca Raton Colocation data center efficiency

Host.net is a multinational provider of managed infrastructure services focusing on Colocation, Cloud Computing, Connectivity and Continuity

Host.net announces the installation of a Cold Aisle Containment (CAC) system in its Boca Raton colocation facility.

“The CAC project is the final stage of a year-long initiative to increase efficiency in our data centers and meets the growing demand for power density, furthering our commitment to green data center technology,” stated Lenny Chesal, Host.net’s CMO.

The initiative included adding blanking panels in client cabinets, additional CRAC units, and a substantial upgrade in commercial, UPS, and generator power. “We are ready for the future and have the ability to offer more power per cabinet than our competitors without requiring our clients to add space when they only need power” added Mr. Chesal. This installation is another milestone for Host.net as it continues to provide its clients with the latest cutting-edge technologies. “Other colocation providers have reported results of IT load per rack up to 3 times standard load without changing the environmental conditions and a significant drop in measured PUE” according to Host.net Director of Facilities, Daniel Calderon.

Host.net’s Cold Aisle Containment milestones are:

• Increase the facility set-point from 72 to 75 degrees Fahrenheit
• Maintain cold aisle temperatures
• Lower energy consumption
• Increase cooling capacity
• Lower IT equipment inlet temperatures

To achieve these milestones, Host.net evaluated multiple vendors and ultimately chose Subzero Engineering. Its cutting-edge manufacturing facility enables it to create containment systems and products that have a superior fit and finish as well as incredible durability, and its people design, manufacture, and deliver intelligent containment environments. Additionally, Subzero Engineering is the proven leader, with over 1,500 Cold Aisle Containment Systems deployed globally, a suite of enterprise monitoring tools, and a US-based customer support team that stands behind every solution it deploys.


Host.net colocation solutions offer a “World-Class”, enterprise-level, safe and secure data center environment with redundant and robust network connections for companies to place mission-critical infrastructure including servers, storage devices, and VoIP switches. Host.net has multiple data centers available to protect data and applications as well as a hybrid solution of colocation and “cloud” services leveraging its proprietary 4cNxGn Smart Cloud Architecture™.

About Host.net

Host.net is a multinational provider of managed infrastructure services focusing on cloud computing and storage, colocation, connectivity and business continuity for enterprise organizations. The company operates multiple enterprise class data centers and geographically diverse cloud platforms connected to an extensive fiber optic backbone designed to connect their clients to their suite of managed services. It serves customers in most major metropolitan regions of North America as well as portions of Europe. Founded in 1996, Host.net is headquartered in Boca Raton, Florida.

Data Center
Success Story

Utility Rebates & Energy Efficiency

San Francisco

Virtustream receives energy rebate from Pacific Gas and Electric Company

Virtustream’s efforts to reduce cooling energy consumption paid off handsomely this past week as Pacific Gas and Electric Company awarded the enterprise-class cloud solution provider a large utility rebate. Virtustream tasked Subzero Engineering to provide airflow management in the form of aisle-based containment, rack enclosures, and a wireless environmental monitoring system called PolarXpress. This resulted in a lower supply temperature at the rack, which in turn led to safely increasing CRAC temperature set points. During this time, Virtustream even added several new servers. The overall energy savings was 544,521.5 kWh!

“Ready for the Future”

Host.net, a multinational provider of managed infrastructure services, has just finished the final stage of a year-long initiative to increase efficiency in its data centers. The Boca Raton based company is now able to offer more power per cabinet than competitors due to the recent installation of a cold aisle containment system from Subzero Engineering. Lenny Chesal, Host.net’s CMO, stated, “The CAC project is the final stage of a year-long initiative to increase efficiency in our data centers and meets the growing demand for power density, furthering our commitment to green data center technology.” Subzero Engineering was chosen as the CAC provider due to its cutting-edge manufacturing, superior fit and finish, and incredible durability. Combined with the experience of installing over 1,500 systems worldwide, another key value-add is its US-based customer support team that stands behind every solution it deploys.

Powered by Raritan

If losing track of power feeds is making you see red, it’s time to add more colors. Keep the power flowing through your data center with Raritan’s full-colored PDU solutions. Our intelligent rack PDUs come in eleven different colors that along with our SecureLock™ cords will help you to ensure there will never be a misplaced cable again.

Data Center
Product Insight

More Than Just Data… Intelligence

Product: PolarXpress

Subzero has added RCI indicators that give an ‘at-a-glance’ view of the efficiency of your data center cooling program.

The RCI, or Rack Cooling Index, was created by Magnus Herrlin, President of ANCIS Inc. of San Francisco, California. This methodology is designed to measure how effectively IT equipment racks are cooled and maintained within ASHRAE thermal guidelines and standards. Subzero uses this tool to help our customers understand ways to lower the cost of cooling, all while maintaining a predictable and reliable cooling solution. In addition to being a low-cost and easy-to-deploy environmental sensor system, PolarXpress can now be called intelligent!
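For reference, the high-side RCI can be sketched as follows. The default limits used here (80.6 °F maximum recommended, 89.6 °F maximum allowable intake) are assumptions drawn from common ASHRAE guidance and should be replaced with the thermal guideline class that applies to your equipment:

```python
def rci_hi(intake_temps_f, t_max_rec=80.6, t_max_all=89.6):
    """Rack Cooling Index, high side.

    100% means no intake exceeds the max recommended temperature;
    lower scores signal growing over-temperature at the rack faces.
    """
    n = len(intake_temps_f)
    # Sum the over-temperature of every intake above the recommended max,
    # normalized by the recommended-to-allowable band across all intakes.
    over_temp = sum(t - t_max_rec for t in intake_temps_f if t > t_max_rec)
    return (1.0 - over_temp / ((t_max_all - t_max_rec) * n)) * 100.0

score = rci_hi([72.0, 75.5, 78.0, 81.0])  # one intake slightly over recommended
```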

Complete the Containment – Rack IT Up

IT racks represent 60% of any containment solution. That’s why racks with gaps compromise the full separation of supply and return airflow. Subzero rack systems cover the gaps and ensure total air separation.