Monday, 25 April 2011

Facebook's Green Data Center

In my roles as CIO at Harvard Medical School and Beth Israel Deaconess Medical Center, I oversee four data centers (a primary and a disaster recovery site for each institution). Over the past several years, my challenge has not been data center real estate; it has been power and cooling demands.

My teams have invested substantial time and effort in improving our power usage effectiveness (PUE) - the ratio of total facility power consumption, including cooling and transformer losses, to the power actually consumed by computing equipment.
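To make the arithmetic concrete, here's a minimal sketch of the PUE calculation in Python. The 910 kW and 500 kW figures are illustrative assumptions chosen to reproduce our ratio, not actual meter readings.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power.

    A PUE of 1.0 would mean every watt goes to computing; anything above
    1.0 is overhead (cooling, transformer/UPS losses, lighting).
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Illustrative numbers only: a 500 kW IT load in a facility
# drawing 910 kW total yields our measured ratio.
print(pue(910.0, 500.0))  # 1.82
```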

In the graphic above, BIDMC has achieved a PUE of 1.82, which is low compared to many corporate data centers (lower is better). We've implemented cold aisle containment, targeted floor tile ventilation, and hot air recapture to substantially reduce our Computer Room Air Conditioning (CRAC) load. That puts us roughly at the average achieved by most green computing initiatives.

Despite all our efforts, we are limited by the constraints of the standard commercial hardware we run and the building we use.

Facebook has designed its own buildings and created its own servers via its Open Compute Project. Its initial power usage effectiveness ratio is 1.07, compared with an average of 1.5 for its existing facilities.

Here's an overview of how they did it.

They've removed uninterruptible power supplies and centralized chilling units, something we cannot do because of the architectural/engineering limitations of our building design. We're likely to achieve a PUE of 1.5, but we could only reach 1.07 by opening a new, purpose-built data center.
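A quick back-of-the-envelope comparison shows why that gap matters. In this sketch the 1,000 kW IT load is a hypothetical assumption; only the PUE values (1.82, 1.5, 1.07) come from the figures above.

```python
IT_LOAD_KW = 1000.0  # hypothetical 1 MW of computing load

for label, ratio in [("BIDMC today", 1.82),
                     ("Our likely target", 1.5),
                     ("Facebook Open Compute", 1.07)]:
    total = IT_LOAD_KW * ratio         # total facility power at this PUE
    overhead = total - IT_LOAD_KW      # power lost to cooling, conversion, etc.
    print(f"{label}: {total:.0f} kW total, {overhead:.0f} kW overhead")
```

At the same computing load, the Open Compute design would spend 70 kW on overhead where we spend 820 kW today - less than a tenth of our current cooling and conversion losses.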

Here's a look at the kind of energy efficiency that cloud providers are achieving by building dedicated, mega-scale data centers.

On April 28, I'm keynoting the Markley Group's annual meeting and you can be sure that I'll include power and cooling in my list of the things that keep me up at night.

Congratulations, Facebook!
