Tag: datacenter

Facebook and ebay’s data centers are now vastly more transparent

ebay's digital service efficiency

At the end of last week, Facebook announced a new way to report PUE and WUE for its data centers.

This comes hot on the heels of ebay’s announcement of its Digital Service Efficiency dashboard – a single-screen report of the cost, performance and environmental impact of customer buy and sell transactions on ebay.

These dashboards are a big step forward in making data centers more transparent about the resources they consume – and about how efficiently, or otherwise, they use them.

Even better, both organisations are working to make their dashboards a standard, so their data centers can be compared directly with those of other organisations using the same dashboard.

Facebook Prineville Data Center dashboard

There are a number of important differences between the two dashboards, however.

To start with, Facebook’s data is near-realtime (updated every minute, albeit with a 2.5 hour lag), whereas ebay’s data is updated only once a quarter – nowhere near realtime.

Facebook also includes environmental data (external temperature and humidity), as well as options to review the PUE, WUE, humidity and temperature data for the last 7 days, the last 30 days, the last 90 days and the last year.

On the other hand, ebay’s dashboard is, perhaps unsurprisingly, more business-focussed, giving metrics such as revenue per user ($54), transactions per kWh (45,914) and the number of active users (112.3 million). Facebook makes no mention anywhere of its revenue data, user data or transactions per kWh.

ebay pulls ahead on the environmental front because it reports its Carbon Usage Effectiveness (CUE) in its dashboard, whereas Facebook completely ignores this vital metric. As we’ve said here before, CUE is a far better metric for measuring how green your data center is.
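All three metrics are simple ratios over the same facility data, which is what makes cross-comparison feasible once everyone reports them. A minimal sketch, using made-up numbers rather than either company’s actual figures:

```python
def pue(total_facility_kwh, it_kwh):
    # Power Usage Effectiveness: total facility energy / IT equipment energy
    return total_facility_kwh / it_kwh

def wue(water_litres, it_kwh):
    # Water Usage Effectiveness: litres of water used per kWh of IT energy
    return water_litres / it_kwh

def cue(co2_kg, it_kwh):
    # Carbon Usage Effectiveness: kg of CO2 emitted per kWh of IT energy
    return co2_kg / it_kwh

# Illustrative numbers only – not Facebook's or ebay's actual figures
it_energy = 1_000_000          # kWh consumed by the IT equipment
facility_energy = 1_090_000    # kWh consumed by the whole facility

print(round(pue(facility_energy, it_energy), 2))  # → 1.09
print(round(wue(200_000, it_energy), 2))          # litres of water per IT kWh
print(round(cue(450_000, it_energy), 2))          # kg of CO2 per IT kWh
```

The point of CUE is visible even in this toy example: two facilities with identical PUE can have wildly different CUE figures if one buys coal-heavy electricity and the other buys renewables.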

Facebook does get some points for reporting its carbon footprint elsewhere, but not for these data centers. This was obviously decided at some point in the design of its dashboards, and one has to wonder why.

The last big difference between the two is in how they are trying to get their dashboards more widely used. Facebook say they will submit the code for theirs to the Open Compute repository on GitHub. ebay, on the other hand, launched theirs at the Green Grid Forum 2013 in Santa Clara. They also published a PDF solution paper, which is a handy backgrounder, but nothing like the equivalent of dropping your code onto GitHub.

The two companies could learn a lot from each other on how to improve their current dashboard implementations, but more importantly, so could the rest of the industry.

What are IBM, SAP, Amazon and the other cloud providers doing to provide these kinds of dashboards for their users? GreenQloud has offered this to its users for ages, and now Facebook and ebay have zoomed past the big providers too. When Facebook contributes its codebase to GitHub, the cloud companies will have one less excuse.

Image credit nicadlr

(Cross-posted @ GreenMonk: the blog)

The Switch SuperNAP data centre – one of the most impressive I’ve been in

Switch SuperNAP data centre
If you were going to build one of the world’s largest data centres, you wouldn’t intuitively site it in the middle of the Nevada desert, but that’s where Switch sited its SuperNAP campus. I went on a tour of the data centre recently when in Las Vegas for IBM’s Pulse 2012 event.

The data centre is impressive. And I’ve been in a lot of data centres (I’ve even co-founded and been part of the design team of one in Ireland).

The first thing that strikes you when visiting the SuperNAP is just how seriously they take security. They outline their various security layers in some detail on their website, but nothing prepares you for the reality of it. As a simple example, throughout our entire guided tour of the data centre floor we were followed by one of Switch’s armed security officers!

The data centre itself is just over 400,000 sq ft in size, with plenty of room within the campus to build out two or three more similarly sized data centres should the need arise. And although the data centre is one of the world’s largest, at 1,500 Watts per square foot it is also quite dense. This density supports racks of 25kW, and during the tour we were shown cages containing 40 x 25kW racks being handled with apparent ease by Switch’s custom cooling infrastructure.
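Those two density figures are consistent with each other – a quick back-of-the-envelope check (the per-rack footprint below is derived, not a number Switch publishes):

```python
watts_per_sqft = 1_500  # Switch's quoted power density
rack_kw = 25            # power draw of one high-density rack

# Floor area (rack plus its share of aisle space) implied per 25kW rack
sqft_per_rack = rack_kw * 1_000 / watts_per_sqft
print(round(sqft_per_rack, 1))  # → 16.7 sq ft per rack

# A 40-rack cage like the ones shown on the tour draws about a megawatt
cage_kw = 40 * rack_kw
print(cage_kw)  # → 1000 kW
```

Cooling a megawatt per cage without raised floors is exactly why the custom infrastructure described next was necessary.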

Switch custom cooling infrastructure

Because Switch wanted to build out a large-scale, dense data centre, they had to custom-design their own cooling infrastructure. They use a hot aisle containment system, with the cold air coming in from overhead and the hot air drawn out through the top of the contained aisles.

The first immediate implication of this is that no raised floors are required in the facility. It also means that, walking around the data centre, you are walking in the data centre’s cold aisle. As part of the design of the facility, the t-scifs (thermally separate compartments in facility – heat containment structures) are where the contained hot aisles’ air is extracted, and the external TSC600 quad-process chiller systems generate the cold air externally for delivery to the data floor. This design means there is no need for any water piping within the data room, which is a nice feature.

Through an accident of history (involving Enron!) the SuperNAP is arguably the best connected data centre in the world, a fact they can use to the advantage of their clients when negotiating connectivity pricing. And consequently, connectivity in the SuperNAP is some of the cheapest available.

As a result of all this, the vast majority of enterprise cloud computing providers have a base in the SuperNAP – as does ebay’s 56 petabyte Hadoop cluster. Yes, 56 petabytes!

US electricity generation

Given that I have regularly bemoaned cloud computing’s increasing energy and carbon footprint on this blog, you won’t be surprised to know that one of my first questions to Switch was about their energy provider, NV Energy.

According to NV Energy’s 2010 Sustainability Report [PDF], coal makes up 21% of the generation mix and gas accounts for another 63.3%. While 84% of electricity generation coming from fossil fuels sounds high, the 21% figure for coal is low by US standards, as the graph on the right details.

Still, it is a long way off the 100% of electricity from renewables that Verne Global’s new data centre has.

Apart from the power generation profile, which in fairness to Switch, is outside their control (and could be considerably worse) the SuperNAP is, by far, the most impressive data centre I have ever been in.

Photo Credit Switch


Facebook hires Google’s former Green Energy Czar Bill Weihl, and increases its commitment to renewables

Christina Page, Yahoo & Bill Weihl, Google - Green:Net 2011
Google has had an impressive record in renewable energy. They have invested over $850m in renewable energy projects involving geothermal, solar and wind energy. They entered into 20-year power purchase agreements with wind farm operators, guaranteeing to buy their energy at an agreed price for twenty years. This gives the wind farms an income stream with which to approach investors about further investment, and it gives Google certainty about the price of its energy for the next twenty years – a definite win-win.

Google also set up RE < C – an ambitious research project looking at ways to make renewable energy cheaper than coal (unfortunately this project was shelved recently).

And Google set up a company called Google Energy to trade energy on the wholesale market. Google Energy buys renewable energy from renewable producers and when it has an excess over Google’s requirements, it sells this energy and gets Renewable Energy Certificates for it.

All hugely innovative stuff and all instituted under the stewardship of Google’s Green Energy Czar, Bill Weihl (on the right in the photo above).

However Bill, who left Google in November, is now set to start working for Facebook this coming January.

Facebook’s commitment to renewable energy has not been particularly inspiring to date. They drew criticism for the siting of their Prineville data center because, although it is highly energy efficient, it sources its electricity from PacifiCorp, a utility which mines 9.6 million tons of coal every year! Greenpeace mounted a highly visible campaign calling on Facebook to unfriend coal, using Facebook’s own platform.

Data Center War Stories talks to SAP’s Jürgen Burkhardt

And we’re back this week with the second installment in our Data Center War Stories series (sponsored by Sentilla).

This second episode in the series is with Jürgen Burkhardt, Senior Director of Data Center Operations, at SAP‘s HQ in Walldorf, Germany. I love his reference to “the purple server” (watch the video, or see the transcript below!).

Here’s a transcript of our conversation:

Tom Raftery: Hi everyone, welcome to GreenMonk TV. Today we are doing a special series called the DataCenter War Stories. This series is sponsored by Sentilla and with me today I have Jürgen Burkhardt. Jürgen, if I remember correctly your title is Director of DataCenter Operations for SAP, is that correct?

Jürgen Burkhardt: Close. Since I am 45, I am Senior Director of DataCenter Operations yes.

Tom Raftery: So Jürgen can you give us some kind of size of the scale and function of your DataCenter?

Jürgen Burkhardt: All together we have nearly 10,000 square meters of raised floor. We are running 18,000 physical servers and now more than 25,000 virtual servers out of this location. The main purpose is first of all to run the production systems of SAP – the usual stuff, FI, BW, CRM et cetera – and all the support systems. So if you go to the SAP Service Marketplace, that system is running here in Walldorf/Rot; whatever you see from sap.com is running here to a large extent. We are running the majority of all development systems here, and all training systems – the majority of demo and consulting systems worldwide at SAP.

We have more than 20 megawatts of computing power here. I mentioned the 10,000 square meters of raised floor. We have more than 15 petabytes of usable central storage, a backup volume of 350 terabytes a day, and more than 13,000 terabytes in our backup library.

Tom Raftery: Can you tell me what are the top issues you come across day to day in running your DataCenter, what are the big ticket items?

Jürgen Burkhardt: So one of the biggest problems we clearly have is the topic of asset management and the whole logistics process. If you have so many new servers coming in, you clearly need a very, very sophisticated process which allows you to find what we call the Purple Server – where is it, where is that special server? What is it used for? Who owns it? How long has it already been in use? Do we still need it? All those kinds of questions are very important for us.

And this is also very important from an infrastructure perspective. We have so much stuff out there that if we start moving servers between locations, or if we try to consolidate racks, server rooms and whatsoever, it’s absolutely required for us to know exactly where something is, who owns it, what it is used for, etcetera. And this is really one of our major challenges currently.

Tom Raftery: Are there any particular stories that come to mind – issues that you’ve hit, scratched your head over and resolved – that you want to talk about?

Facebook open sources building an energy efficient data center

Facebook's new custom-built Prineville Data Centre

(Photo credit Facebook’s Chuck Goolsbee)

Back in 2006 I was the co-founder of a Data Centre in Cork called Cork Internet eXchange. We decided, when building it out, that we would design and build it as a hyper energy-efficient data centre. At the time, I was also heavily involved in social media, so I had the crazy idea, well, if we are building out this data centre to be extremely energy-efficient, why not open source it? So we did.

We used blogs, flickr and video to show everything from the arrival of the builders on-site to dig out the foundations, right through to the installation of customer kit and beyond. This was a first. As far as I know, no-one had done it before, and no-one since has replicated it. Until today.

Today, Facebook is lifting the lid on its new custom-built data centre in Prineville, Oregon.

Not only are they announcing that the new data centre is online, they are open sourcing its design and specifications, and even naming their suppliers, so anyone (with enough capital) can approach the same suppliers and replicate the data centre.

Facebook are calling this the Open Compute Project, and they have released a fact sheet [PDF] with details of their new data center and server design.

I received a pre-briefing from Facebook yesterday where they explained the innovations which went into making their data centre so efficient and boy, have they gone to town on it.

Data centre infrastructure
On the data centre infrastructure side of things, building the facility in Prineville, Oregon (a high desert area of Oregon, 3,200 ft above sea level with mild temperatures) will mean they will be able to take advantage of a lot of free cooling. Where they can’t use free cooling, they will utilise evaporative cooling, to cool the air circulating in the data centre room. This means they won’t have any chillers on-site, which will be a significant saving in capital costs, in maintenance and in energy consumption. And in the winter, they plan to take the return warm air from the servers and use it to heat their offices!

By moving from centralised UPS plants to 48V localised UPSes serving six racks each (around 180 Facebook servers), Facebook were able to re-design the electricity supply system, doing away with some of the conversion processes and creating a unique 480V distribution system which provides 277V directly to each server, resulting in more efficient power usage. This system reduces power losses in the utility-to-server chain from an industry average of 21-27% down to Prineville’s 7.5%.
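The easiest way to see where those losses come from is to multiply together the efficiency of each conversion stage in the chain. The per-stage efficiencies below are illustrative assumptions of mine – only the end-to-end figures (21-27% versus 7.5%) come from Facebook:

```python
def chain_loss(stage_efficiencies):
    # Overall loss = 1 minus the product of each conversion stage's efficiency
    overall = 1.0
    for eff in stage_efficiencies:
        overall *= eff
    return 1.0 - overall

# Hypothetical traditional chain: utility transformer, double-conversion
# central UPS, PDU transformer, then the server's own power supply
traditional = [0.98, 0.90, 0.98, 0.90]

# Hypothetical Prineville-style chain: one 480V distribution step, then a
# server PSU that accepts 277V directly (the 48V UPS sits out of the path)
prineville = [0.99, 0.95]

print(f"{chain_loss(traditional):.1%}")  # → 22.2% lost end to end
print(f"{chain_loss(prineville):.1%}")   # roughly 6% lost end to end
```

Because the stage efficiencies multiply, removing even one lossy double-conversion step moves the whole chain dramatically – which is the entire rationale for distributing 277V straight to the servers.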

Finally, Facebook have significantly increased the operating temperature of the data center to 80.6F (27C) – the upper limit of the ASHRAE standards. They also confided that in their next data centre, currently being constructed in North Carolina, they expect to run it at 85F – this will save enormously on the costs of cooling. And they claim that the reduction in the number of parts in the data center means they go from 99.999% uptime to 99.9999% uptime.
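That extra nine is easier to appreciate as a downtime budget – converting each availability figure into allowed downtime per year is straightforward arithmetic:

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_minutes(availability):
    # Allowed downtime per year for a given availability fraction
    return (1 - availability) * MINUTES_PER_YEAR

print(round(downtime_minutes(0.99999), 2))   # → 5.26 ("five nines": minutes/year)
print(round(downtime_minutes(0.999999), 2))  # → 0.53 ("six nines": minutes/year)
```

In other words, the claimed improvement takes the annual downtime budget from about five minutes to about half a minute.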

New Server design…

Sentilla thinks of data centers as data factories!

Data center
If you have been following this blog, you’ll know I have been profiling Data Center efficiency companies over the last few weeks. This week I take a look at Sentilla.

I talked to Sentilla’s CTO and co-founder, Joe Polastre, the other day and Joe told me that Sentilla came out of Berkeley where they had been looking at data analytics problems around large, potentially incomplete or inaccurate, streaming datasets. The challenge was how to turn that into a complete picture of what’s going on so people could make business decisions.

Sentilla takes an industrial manufacturing approach to data centers – in manufacturing you have power going in one side, and products and (often) waste heat coming out the other. In the same way, in data centers you have power going in one side, and coming out the other side you have the product (compute cycles) and waste heat. To optimise your data center you need to get the maximum data/compute (product) output with the minimum power in and the least waste heat generated. Sentilla thinks of data centers as data factories!

Unlike most of the data center people I have talked to, Sentilla don’t talk so much about energy savings. Instead they emphasise maximising performance – getting the most out of your existing data centers, your existing storage, your existing servers, your existing power supply. By far the greatest saving from deploying Sentilla, Joe claimed, is not from the energy savings. That pales in comparison to the capital deferment savings gained from being able to delay the building of extra data center facilities by however many years, he said.

So how does Sentilla help?

Well Sentilla analyses the energy profile of every asset in the data center, whether metered or not, and makes recommendations to improve the planning and management of data center operations…

Power Assure automates the reduction of data center power consumption

Data centre
If you’ve been following this blog in the last couple of weeks you’ll have noticed that I have profiled a couple of data centre energy management companies – well, today it is the turn of Power Assure.

The last time I talked to Power Assure was two years ago, when they were still very early stage. Then I spoke with co-founder and CTO Clemens Pfeiffer; this time I spoke with Power Assure’s President and CEO, Brad Wurtz.

The spin Power Assure put on their energy management software is that not only does their Dynamic Power Management solution provide realtime monitoring and analytics of power consumption across multiple sites, but their Dynamic Power Optimization application automatically reduces power consumption as well.

How does it do that?

Well, according to Brad, clients put an appliance in each of the data centres they want to optimise (Power Assure’s target customers are large organisations with multiple data centres – government, financial services, healthcare, insurance, telcos, etc.). The appliance uses the management network to gather data – either directly from devices (servers, PDUs, UPSes, chillers, etc.) or, more frequently, from multiple existing databases (e.g. a Tivoli database, a BMS, an existing power monitoring system and/or an inventory system) – and performs data centre analytics on those data.

The optimisation module links into existing systems management software to measure and track energy demand on a per-application basis in realtime. It then calculates the amount of compute capacity required to meet that application’s service level agreements and adds a little headroom…
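Brad didn’t spell out the formula, but the capacity calculation he describes presumably looks something like the sketch below. The function name, the per-server throughput figure and the 20% headroom default are my assumptions for illustration, not Power Assure’s:

```python
import math

def servers_required(demand_rps, per_server_rps, headroom=0.2):
    # Compute capacity needed to meet the SLA for current demand, plus a
    # safety margin (headroom) before surplus servers are powered down
    needed = demand_rps / per_server_rps
    return math.ceil(needed * (1 + headroom))

# e.g. 12,000 requests/sec arriving; each server handles 500 req/sec
# within its SLA; keep 20% headroom for demand spikes
print(servers_required(12_000, 500))  # → 29
```

The interesting part is the inverse: any server beyond that number is a candidate to be idled or powered off until demand rises again, which is where the automatic power reduction comes from.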

Friday Morning Green Numbers round-up 02/12/2010

Green numbers
Photo credit Unhindered by Talent

Here is this Friday’s Green Numbers round-up:

  • Iberdrola Renovables SA, the world’s largest operator of wind parks, agreed to buy Spain’s largest wind farm from Gamesa Corporacion Tecnologica SA.

    Renovables, based in Valencia, paid Gamesa 320 million euros ($441 million) for 244 megawatts of power capacity in Andevalo, Spain

    tags: iberdrola, iberdrola renovables, gamesa, Wind farm, greennumbers

  • IBM recently ran a ‘Jam’ – an online discussion – on environmental sustainability and why it is important for CIOs, CEOs and CFOs to address it. The Jam involved thousands of practitioners and subject matter experts from some 200 organisations. It focused primarily on business issues and practical actions.

    Take a look at the check list below and it becomes rapidly apparent, C-level management need to tackle the issue before it is foisted upon them.

    IBM’s Institute for Business Value will fully analyse the 2080 Jam contributions, but this is the essential CIO checklist derived from comments made during the Eco-Jam.

    tags: ibm, ecojam, eco jam, cio, greennumbers

  • Data centers are, thankfully, getting a lot of attention when it comes to making them more efficient. Considering that roughly 60% of the electricity used at a data center goes to keeping the servers cool, focusing on smart cooling tactics is essential. HP has taken this to heart and has opened its first wind-cooled data center, and it’s the company’s most efficient data center to date.

    In this piece, HP claims that their data center is the world’s first wind-cooled data center – I’m not sure just how valid this is as I have heard BT only do wind-cooled data centers!

    tags: hp, bt, data center, datacenter, wind cooled, air cooled, greennumbers

  • “Sir Richard Branson and fellow leading businessmen will warn ministers this week that the world is running out of oil and faces an oil crunch within five years.

    The founder of the Virgin group, whose rail, airline and travel companies are sensitive to energy prices, will say that the ­coming crisis could be even more serious than the credit crunch.

    “The next five years will see us face another crunch – the oil crunch. This time, we do have the chance to prepare. The challenge is to use that time well,” Branson will say.”

    tags: richard branson, oil crunch, peak oil, virgin, greennumbers

  • “Fertile soil is being lost faster than it can be replenished making it much harder to grow crops around the world, according to a study by the University of Sydney.

    The study, reported in The Daily Telegraph, claims bad soil mismanagement, climate change and rising populations are leading to a decline in suitable farming soil.

    An estimated 75 billion tonnes of soil is lost annually with more than 80 per cent of the world’s farming land “moderately or severely eroded”, the report found.

    Soil is being lost in China 57 times faster than it can be replaced through natural processes, in Europe 17 times faster and in America 10 times faster.

    The study said all suitable farming soil could vanish within 60 years if quick action was not taken, leading to a global food crisis.”

    tags: greennumbers, soil, topsoil, soil fertility

  • In response to an environmental lawsuit filed against the oil giant, Chevron has fortified its defenses with at least twelve different public relations firms whose purpose is to debunk the claims made against the company by indigenous people living in the Amazon forests of Ecuador. According to them, Chevron dumped billions of gallons of toxic waste in the Amazon between 1964 and 1990, causing damages assessed at more than $27 billion.

    tags: chevron, ecuador, greennumbers, amazon rainforest, amazon, toxic waste, pollution

  • Indian mobile phone and commodity export firm Airvoice Group has formed a joint venture with public sector body Satluj Jal Vidyut Nigam to build 13GW of solar and wind capacity in a sparsely populated part of Karnataka district in south west India.

    The joint venture is budgeting to invest $50 billion over a period of 10 years, claiming it to be the largest single renewable energy project in the world.

    tags: greennumbers, india, airvoice, solar, wind, renewables, karnataka, renewable energy

  • Using coal for electricity produces CO2, and climate policy aims to prevent greenhouse gases from hurting our habitat. But it also produces SOx and NOx and particulate matter that have immediate health dangers.

    A University of Wisconsin study was able to put an economic value on just the immediate health benefits of enacting climate policy. Implications of incorporating air-quality co-benefits into climate change policymaking found coal is really costing us about $40 per each ton of CO2.

    tags: greennumbers, coal, sox, nox, particulate matter, greenhouse gases, health

Posted from Diigo. The rest of my favorite links are here.
