Having been in the IT world for over a decade now, I have learned many things. Technology changes, needs change, requirements change, but one constant has been that IT is a resource hog on just about every level. You need lots of people, lots of time, lots of money, and lots of energy to keep even a modest IT infrastructure running.
With the current rush toward energy efficiency, a lot of focus has been placed on data centers and the energy they consume, and a major push is underway to make them consume less. I’ve been hearing the term “Green Data Center” a lot lately, so I was very interested when we got an invitation to tour the new facility a local hosting provider was building.
We first got a tour of their existing facility and the efforts they have made there, and then a tour of the new building and all the advancements they had built in to increase efficiency.
The existing facility was built many years ago as a small server room initially intended to house only in-house gear. As time went on, customers asked to host their servers there, and the data center was born. It did not have a raised floor, so all the HVAC output was exposed and the hot and cold air co-mingled. Over the last year they took on a “Hot Aisle Isolation” project, which they say has increased the efficiency of the HVAC. The project consists of adding Plexiglas walls between the racks that separate the aisles into zones. This way you can flood the front side of all your racks with only cold air and keep only hot air at the rear of the racks. The HVAC return then handles nothing but hot air, which makes the system far more effective and increases efficiency.
The new data center takes the HVAC, and many other systems, to a new level. The hot and cold air are not only separated by aisle, they are isolated by rack. They have installed a type of rack that is perforated on the front and solid on the back. Cool air is pulled through the servers, reducing strain on CPU fans, and hot air is pulled upward and out the top. It then exits the rack via a ‘chimney’ that connects to a hot air return feeding the HVAC system. This venting system lets them confine the hot air to the ducted return and flood the room with cool air.
The design of the HVAC system differs in a couple of other ways as well. Instead of the traditional refrigeration system most of us think of, they have employed an evaporative cooling system, or as many know it, a ‘swamp cooler’. This system cools the air by evaporating water into it. It does not reach the low temperatures of the refrigerant-based HVAC that is most common, but it does a great job with much lower energy consumption. There have been studies indicating that servers may not need to be kept as cool as once thought, and it may make sense to reap the energy savings instead.
This type of cooling does require a good deal of water to run, so in this case they built a rainwater harvesting system with a 25,000-gallon storage tank. Being in the Pacific Northwest, we have ample rain, and they expect the bulk of their water requirements to be supplied this way. For a stretch of hot, dry weather they are hooked up to the city water supply, but they expect to need it only a couple of times a year.
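As a back-of-the-envelope check, the yield of a rainwater system is easy to estimate: one inch of rain on one square foot of roof is 144 cubic inches, or about 0.62 gallons. The roof area, annual rainfall, and capture efficiency below are hypothetical numbers of my own, not figures from the tour, just to show the shape of the math:

```python
def harvested_gallons(roof_area_sqft: float, rainfall_inches: float,
                      capture_efficiency: float = 0.8) -> float:
    """Estimate annual rainwater harvest. One inch of rain on one square
    foot is 144 cubic inches, and a gallon is 231 cubic inches."""
    gallons_per_sqft_inch = 144.0 / 231.0  # ~0.623 gal
    return roof_area_sqft * rainfall_inches * gallons_per_sqft_inch * capture_efficiency

# Hypothetical 20,000 sq ft roof in a 37-inch-per-year climate:
# roughly 370,000 gallons captured annually at 80% efficiency.
print(harvested_gallons(20000.0, 37.0))
```

At numbers anywhere in that neighborhood, refilling a 25,000-gallon tank many times over the year is easy to believe.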
Another way this system differs from a traditional HVAC is that everything but the ductwork sits on the roof of the building, which further increases space utilization on the data center floor.
One thing I noticed when entering the new center is that there is still no raised floor. This had always bothered me about their original facility, so I asked why they had planned the new facility that way. They explained that a raised floor is not always the best route. First, it requires a good deal of build-out, which can be costly: you must build the flooring itself as well as ramps at each entry, and the ramps take up additional space that you save by skipping the raised floor. An unexpected reason they also gave was that with a raised floor, turbulence can develop underneath as cabling and wires are introduced, which reduces the efficiency of the blowers.
Another benefit of not having a raised floor is that as server density goes up you need more airflow moving through the room, and so you need a taller and taller raised floor to compensate. Without the raised floor, and with the chimney-style racks for hot air return, they can cool the open space with less effort. This also reduces vertical temperature variation within the room, meaning servers at the top of a rack should get just as much cooling as servers at the bottom. That can be hard to achieve with a raised floor.
I’m used to traditional battery UPS systems, but this new building uses a flywheel UPS. These flywheel systems spin up and store kinetic energy during normal operation. When utility power goes out, the flywheel generates electricity while the generators spin up, and it can carry about 25 seconds at normal load. When I questioned why they went that route, they explained that if your generators do not come up when needed, what difference does it make whether you get 30 seconds or 15 minutes? “You are going down regardless.” Personally, I’d like the extra 10 minutes to shut down my most critical servers before the power is cut, but I can appreciate the idea.
Other benefits of flywheel systems are that they require about one-third the space of a traditional UPS, and a flywheel lasts about 20-25 years with relatively little maintenance. A traditional UPS requires new batteries every 3-5 years, greatly increasing its cost over time.
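For the curious, a flywheel’s ride-through time comes straight from its stored kinetic energy, E = ½Iω². The inertia, speed, and load figures below are hypothetical, not specs from this vendor, but they land in the same ballpark as the roughly 25-second figure quoted on the tour:

```python
import math

def flywheel_energy_joules(inertia_kg_m2: float, rpm: float) -> float:
    """Kinetic energy of a spinning flywheel: E = 1/2 * I * omega^2."""
    omega = rpm * 2.0 * math.pi / 60.0  # convert rpm to radians per second
    return 0.5 * inertia_kg_m2 * omega ** 2

def ride_through_seconds(energy_joules: float, load_watts: float) -> float:
    """Rough ride-through at constant load, ignoring conversion losses and
    the minimum speed below which output voltage can no longer be held."""
    return energy_joules / load_watts

# Hypothetical unit: 100 kg*m^2 spinning at 3000 rpm feeding a 200 kW load
# gives roughly 25 seconds of ride-through.
energy = flywheel_energy_joules(100.0, 3000.0)
print(ride_through_seconds(energy, 200_000.0))
```

Because energy scales with the square of speed, real units spin much faster in a vacuum enclosure to pack more ride-through into the same footprint.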
They have also partnered with the local power company to have 100% of their power delivered from renewable sources such as wind. This is an attractive point for many companies in the region.
In the world of data centers, Power Usage Effectiveness, or PUE, is an important measurement. It is the ratio of the total energy a facility consumes, including overhead such as HVAC, UPS losses, and lighting, to the energy consumed by the computing equipment alone. The ideal data center has a rating of 1.0, while inefficient facilities can run to 3.0 or beyond. For this new facility they estimate their PUE at approximately 1.1, making it a highly efficient facility.
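The arithmetic behind that number is simple. The load figures below are made up for illustration, not the facility’s actual metering:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by the
    power consumed by the IT equipment alone (ideal = 1.0)."""
    return total_facility_kw / it_equipment_kw

# Hypothetical: 1,000 kW of IT load plus 100 kW of cooling, UPS, and
# lighting overhead gives a PUE of 1.1.
print(pue(1100.0, 1000.0))
```

Put another way, at a PUE of 1.1 only about 10 cents of every power dollar goes to overhead rather than to the servers themselves.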
I was very impressed with all the efforts they have made to build a responsible, efficient, and reliable data center. The process has clearly been well thought out, with every detail considered. My company does not currently have any space with this provider, as we are with another Tier 3 data center across town, but if we were looking now it would be a very hard decision, to say the least. Our vendor is also building a new facility and has promised to give me a tour when they are further along with construction. They have indicated they are incorporating many new technologies, so it will be interesting to see their approach to building a Green Data Center.