Google has an impressive record in renewable energy. They have invested over $850m in renewable energy projects spanning geothermal, solar and wind. They also entered into 20-year power purchase agreements with wind farm producers, guaranteeing to buy their energy at an agreed price for twenty years. This gives the wind farms an income stream with which to approach investors about further investment, and gives Google certainty about the price of its energy for the next twenty years – a definite win-win.
Google also set up RE&lt;C – an ambitious research project looking at ways to make renewable energy cheaper than coal (unfortunately, this project was recently shelved).
And Google set up a company called Google Energy to trade energy on the wholesale market. Google Energy buys renewable energy from renewable producers and when it has an excess over Google’s requirements, it sells this energy and gets Renewable Energy Certificates for it.
All hugely innovative stuff and all instituted under the stewardship of Google’s Green Energy Czar, Bill Weihl (on the right in the photo above).
Back in 2006 I was the co-founder of a Data Centre in Cork called Cork Internet eXchange. We decided, when building it out, that we would design and build it as a hyper energy-efficient data centre. At the time, I was also heavily involved in social media, so I had the crazy idea, well, if we are building out this data centre to be extremely energy-efficient, why not open source it? So we did.
Facebook have now gone a step further. Not only are they announcing that their new data centre is coming online, they are open sourcing its design and specifications, and even telling people who their suppliers were, so anyone (with enough capital) can approach the same suppliers and replicate the data centre.
Facebook are calling this the OpenCompute project and they have released a fact sheet [PDF] with details on their new data center and server design.
I received a pre-briefing from Facebook yesterday where they explained the innovations which went into making their data centre so efficient and boy, have they gone to town on it.
Data centre infrastructure
On the data centre infrastructure side of things, building the facility in Prineville, Oregon (a high-desert area 3,200 ft above sea level, with mild temperatures) means they can take advantage of a lot of free cooling. Where free cooling isn’t possible, they will use evaporative cooling to cool the air circulating in the data centre. This means they won’t have any chillers on-site – a significant saving in capital costs, maintenance and energy consumption. And in winter, they plan to take the warm return air from the servers and use it to heat their offices!
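To make the cooling strategy concrete, here is a toy sketch of the mode decision described above. The thresholds are illustrative assumptions on my part, not Facebook’s actual setpoints:

```python
# Toy sketch of a chiller-less cooling-mode decision (illustrative thresholds).

SUPPLY_AIR_MAX_C = 27.0  # the ASHRAE upper limit Facebook run Prineville at

def cooling_mode(outside_air_c: float, outside_rh_pct: float) -> str:
    """Pick a cooling strategy for the current outside-air conditions."""
    if outside_air_c <= SUPPLY_AIR_MAX_C:
        return "free cooling"          # outside air is cool enough on its own
    if outside_rh_pct < 70:
        return "evaporative cooling"   # dry air: evaporation drops the temperature
    return "out of range"              # neither mode applies (no chillers on-site)

print(cooling_mode(12.0, 40))   # a mild high-desert day
print(cooling_mode(33.0, 20))   # a hot, dry afternoon
```

The point of the sketch is simply that in a dry, mild climate like Prineville’s, one of the first two branches covers almost the whole year, which is what lets them drop the chillers entirely.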
By moving from centralised UPS plants to localised 48V UPSs, each serving six racks (around 180 Facebook servers), Facebook were able to re-design the electricity supply system, doing away with some of the conversion stages and creating a unique 480V distribution system which delivers 277V directly to each server, resulting in more efficient power usage. This reduces power losses in the utility-to-server chain from an industry average of 21–27% down to 7.5% at Prineville.
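A quick back-of-the-envelope calculation shows what those loss figures mean in practice – how much of each kilowatt drawn from the grid actually reaches the servers:

```python
# Delivered power per kW drawn from the grid, using the loss figures quoted above.
GRID_KW = 1.0

for label, loss in [("industry average (21% loss)", 0.21),
                    ("industry average (27% loss)", 0.27),
                    ("Prineville (7.5% loss)", 0.075)]:
    delivered_w = GRID_KW * (1 - loss) * 1000
    print(f"{label}: {delivered_w:.0f} W delivered per kW")
```

So Prineville delivers 925W of every 1,000W it buys, versus 730–790W for a typical facility – roughly 17–27% more useful power for the same electricity bill.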
Finally, Facebook have significantly increased the operating temperature of the data centre to 80.6F (27C) – the upper limit of the ASHRAE standards. They also confided that they expect to run their next data centre, currently under construction in North Carolina, at 85F – this will save enormously on the costs of cooling. And they claim that the reduction in the number of parts in the data centre takes them from 99.999% uptime to 99.9999% uptime.
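That extra “nine” sounds small, but the arithmetic makes the difference clear – here is the allowed downtime per year at each availability level:

```python
# Allowed downtime per year for the two uptime figures claimed above.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for label, availability in [("five nines (99.999%)", 0.99999),
                            ("six nines (99.9999%)", 0.999999)]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label}: {downtime_min:.2f} minutes of downtime per year")
```

That is roughly 5.3 minutes a year at five nines, falling to about 32 seconds a year at six nines – a ten-fold improvement.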
New Server design…
Tom Raftery – Global VP, Futurist, and Innovation Evangelist for SAP, inspirational keynote speaker, and global influencer's take on how digitization and innovation are creatively disrupting our world