Category: Featured Posts

SAP Afaria in the Cloud – enterprise functionality, consumer pricing


Broken SmartPhone

One of the most interesting announcements which came out of SAP’s SapphireNow conference in Orlando last week was the Afaria in the Cloud update. This is a real game-changer (an expression we use very rarely) for a number of reasons.

Afaria, if you are not familiar with it, is SAP's mobile device management (MDM) product. What does that mean? It means Afaria secures, monitors and manages all types of mobile devices (smartphones, tablet computers, mobile POS devices, etc.). Because mobile is making organisations far more efficient, as we've written previously here, more and more industries are deploying mobile devices, and hence the need for MDM solutions to protect those devices, reduce risk and increase employee productivity.

Typical MDM functionality includes over the air (OTA) updates, remote tracking and wiping in the event that a device is stolen, and sandboxing of personal and work-related mobile functionality.

During the announcement at SapphireNow, one of the light-hearted potential usage scenarios mentioned was that as a reward for hitting sales targets an employee might be allowed to play Angry Birds for a set duration.

The fact that SAP are now offering this as a cloud option is significant because MDM offerings typically require a server to control the devices, and there can be significant cost and time factors associated with purchasing and deploying that MDM server. The cloud version does away with this. But still, this isn't entirely game-changing, right?

No, the real game-changer came when SAP announced the price for Afaria in the cloud – €1 per device, per month. And it is possible to trial it for free for 30 days. Sitting in the announcement it occurred to us that that kind of price makes Afaria in the Cloud suddenly attractive, not just to organisations, but also to regular parents looking to keep their children’s mobile devices safe.

As far as we know, this is the first time SAP have offered a product at such a low price point for enterprise customers. This pricing is almost as if SAP were aiming it squarely at the consumer app market. I know if I had an option to safeguard my kids' mobile devices for €1 per device per month, I'd grab it. In a heartbeat. Unfortunately we can't test Afaria as the free trial registration page doesn't include European countries in its list of available countries. Yet. Although countries like Vanuatu, Uzbekistan and even Somalia get to try it out :-(

It seems SAP is getting very aggressive in its cloud pricing options. We've heard that the TwoGo ride-sharing app will be similarly priced (€1 per user, per month) when its official pricing is eventually published.

Cloud price wars anyone?

Image credit Tom Raftery

(Cross-posted @ GreenMonk: the blog)

SAP TwoGo – ride-sharing software for the enterprise


In a less than obvious move earlier this week, SAP launched a ride-sharing app called TwoGo.

Why less than obvious? Well, ride-sharing is generally perceived as more of a consumer-focused activity than an enterprise one. And SAP is very much an enterprise software company.

iPhone Rideshare apps

A quick search for ride-share iPhone apps, for example, returns 24 results, all of which are consumer software plays.

TwoGo is more than just a smartphone app though (it is available on most mobile platforms): TwoGo customers can also access it through its website, via email, via any iCal-enabled calendar application, and even via SMS.

It is a single-instance, multi-tenant cloud application. This is important because it means that for any organisation deploying TwoGo, set-up on SAP's side simply involves adding the organisation's email domain to the customer table. Employees can then immediately create a TwoGo account by signing up with their work email address.
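To make that concrete, here is a minimal sketch of how domain-based tenant provisioning like this might work. It is written in Python with invented table and function names; it is an illustration of the pattern, not SAP's actual TwoGo implementation.

```python
# Hypothetical sketch of domain-based provisioning in a single-instance,
# multi-tenant app (illustrative only, not SAP's actual TwoGo code).

ALLOWED_DOMAINS = {"acme.com": "Acme Corp", "example.org": "Example Org"}

def register_user(email: str) -> str:
    """Allow sign-up only if the email's domain is in the customer table."""
    domain = email.rsplit("@", 1)[-1].lower()
    company = ALLOWED_DOMAINS.get(domain)
    if company is None:
        raise ValueError(f"{domain} is not an onboarded customer domain")
    return f"Account created for {email} under tenant '{company}'"

# Onboarding a new organisation is just one more row in the table:
ALLOWED_DOMAINS["newcorp.com"] = "NewCorp"
print(register_user("jane@newcorp.com"))
```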

Also, because it is single-instance and multi-tenant, smaller companies can sign up and benefit from sharing rides with employees of other companies in the area who are also TwoGo subscribers.

And because TwoGo already works with email and iCal, integration issues are minimal.

Why would an organisation want to deploy a ride-sharing app, you ask?
There are several good reasons –

  • If companies are subsidising travel for employees, ride-sharing reduces the number of trips taken by employees, thereby contributing directly to the organisation's bottom line.
  • For organisations with vehicle fleets, this also reduces wear and tear, service and maintenance costs for vehicles.
  • Then there’s the issue of having to provide car parking spaces for employees – this is expensive and a poor use of the space. Reducing the number of cars coming to work de facto reduces the number of car parking spaces an organisation needs to provide.
  • And, obviously, ride-sharing will also reduce the organisation’s greenhouse gas emissions.

Then there’s the more intangible benefits –

  • Employees spending more time together leads to serendipitous meetings – what was previously ‘dead time’ in the car can now be productive
  • And it brings employees closer to each other and to the company

What about employees though – what benefits can they get from ride-sharing?
Carpool lane sign

  • The obvious one is the ability to use carpool lanes on freeways where traffic often moves significantly faster
  • Also, according to the US Census Bureau, nearly 600,000 Americans have “mega-commutes” of at least 90 minutes and 50 miles each way to work. A significant number of those would benefit from ride-sharing because of reduced costs (fuel and automobile wear and tear) and also to share the driving load. Driving, especially in heavy traffic, is frustrating.
  • Then there’s the social benefits of meeting new people, making new friends and learning more about other job functions in your organisation.

TwoGo, although only now being released publicly, has been in operation at SAP for two years. It is at release 4.5, so this is already a mature product. SAP themselves report that TwoGo has generated more than $5 million in value, reduced greenhouse gas emissions by eliminating 400,000 miles of driving, and matched employees into carpools more than 36,000 times, creating 2,200 additional days of networking time among employees.

The app is highly configurable and has clever algorithms which only offer a user a ride to work if they can also offer him or her a ride home that evening. And obviously, the app has block lists to ensure you are not repeatedly offered lifts with someone you'd rather avoid.
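As an illustration of that idea only (SAP has not published TwoGo's actual matching algorithm, so everything below is an assumption), a round-trip matcher with block lists might look something like this:

```python
# Illustrative round-trip matching sketch: only propose a morning ride if a
# matching evening ride home also exists, and honour the user's block list.

def match_round_trip(user, morning_offers, evening_offers):
    blocked = set(user.get("blocked", []))
    for go in morning_offers:
        if go["driver"] in blocked:
            continue
        home = next(
            (ret for ret in evening_offers
             if ret["driver"] not in blocked
             and ret["origin"] == go["destination"]
             and ret["destination"] == go["origin"]),
            None,
        )
        if home:
            return go, home   # offer the pair, never a one-way stranding
    return None               # better no offer than leaving the user stranded

user = {"name": "Ana", "blocked": ["Bob"]}
morning = [{"driver": "Bob", "origin": "Home", "destination": "Office"},
           {"driver": "Cara", "origin": "Home", "destination": "Office"}]
evening = [{"driver": "Cara", "origin": "Office", "destination": "Home"}]
print(match_round_trip(user, morning, evening))
```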

Given all the benefits of TwoGo, we have to wonder why other enterprise software vendors haven’t come up with a similar product before now. Or have they? Does TwoGo have an enterprise competitor we’re not aware of?

Carpool lane image credit Lady Madonna

 

(Cross-posted @ GreenMonk: the blog)

Microsoft, big data and smarter buildings

Smarter building dashboard

If you checked out the New York Times Snow Fall site (the story of the Avalanche at Tunnel Creek), then Microsoft’s new 88 Acres site will look familiar. If you haven’t seen the Snow Fall site then go check it out, it is a beautiful and sensitive telling of a tragic story. You won’t regret the few minutes you spend viewing it.

Microsoft’s 88 Acres is an obvious homage to that site, except that it tells a good news story, thankfully, and tells it well. It is the story of how Microsoft is turning its 125-building Redmond HQ into a smart corporate campus.

Microsoft’s campus had been built over several decades with little thought given to integrating the building management systems there. When Darrell Smith, Microsoft’s director of facilities and energy, joined the company in 2008, he priced a ‘rip and replace’ option to get the disparate systems talking to each other, but when it came in at more than $60m, he decided they needed to brew their own. And that’s just what they did.

Using Microsoft’s own software they built a system capable of taking in the data from the over 30,000 sensors throughout the campus and detecting and reporting on anomalies. They first piloted the solution on 13 buildings on the campus and as they explain on the 88 Acres site:

In one building garage, exhaust fans had been mistakenly left on for a year (to the tune of $66,000 of wasted energy). Within moments of coming online, the smart buildings solution sniffed out this fault and the problem was corrected.
In another building, the software informed engineers about a pressurization issue in a chilled water system. The problem took less than five minutes to fix, resulting in $12,000 of savings each year.
Those fixes were just the beginning.

The system balances factors like the cost of a fix, the money that will be saved by the fix, and the disruption a fix will have on employees. It then prioritises the issues it finds based on these factors.
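As a hedged sketch of that kind of prioritisation, the snippet below scores faults by net saving and occupant disruption. The weights and field names are invented for illustration; Microsoft has not published its actual scoring model.

```python
# Illustrative fault-prioritisation sketch (not Microsoft's actual model).

def priority_score(fault):
    annual_saving = fault["savings_per_year"]   # $ saved per year once fixed
    fix_cost = fault["fix_cost"]                # $ to repair
    disruption = fault["disruption"]            # 0 (none) .. 1 (severe)
    # favour cheap fixes with big savings and little impact on occupants
    return (annual_saving - fix_cost) * (1.0 - disruption)

faults = [
    {"id": "garage-fans", "savings_per_year": 66000, "fix_cost": 500, "disruption": 0.1},
    {"id": "chiller-pressure", "savings_per_year": 12000, "fix_cost": 200, "disruption": 0.0},
    {"id": "lobby-lighting", "savings_per_year": 3000, "fix_cost": 2500, "disruption": 0.6},
]
for f in sorted(faults, key=priority_score, reverse=True):
    print(f["id"], round(priority_score(f)))
```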

Microsoft facilities engineer Jonathan Grove sums up how the new system changes his job: “I used to spend 70 percent of my time gathering and compiling data and only about 30 percent of my time doing engineering,” Grove says. “Our smart buildings work serves up data for me in easily consumable formats, so now I get to spend 95 percent of my time doing engineering, which is great.”

The facilities team are now dealing with enormous quantities of data. According to Microsoft, the 125 buildings contain 2,000,000 data points outputting around 500,000,000 data transactions every 24 hours. The charts, graphics and reports it produces lead to about 32,300 work orders being issued per quarter. And 48% of the faults found are corrected within 60 seconds. Microsoft forecasts energy savings of 6-10% per year, with an implementation payback of 18 months.

Because Microsoft’s smart building tool was built using off the shelf Microsoft technologies, it is now being productised and will be offered for sale. It joins a slew of other smarter building software solutions currently on the market, but given this one is built with basic Microsoft technologies, it will be interesting to see where it comes in on pricing.

One thing is for sure, given that buildings consume around 40% of our energy, any new entrant into the smarter buildings arena is to be welcomed.

Image credit nicadlr

 

(Cross-posted @ GreenMonk: the blog)

Cloud computing’s lack of transparency – an update

SAP co-CEO Jim Hagemann Snabe
We have been talking on GreenMonk about the lack of transparency from Cloud vendors for some time now, but our persistence is starting to pay off, it appears!

Some recent conversations we’ve had with people in this space are starting to prove very positive.

We’ve had talks with GreenQloud. GreenQloud are based in Iceland, so their electricity is 100% renewable (30% geothermal and 70% hydro). They already measure and report to their customers the carbon footprint of their cloud consumption – so what discussions did we have with them? Well, GreenQloud use the open source CloudStack platform to manage their cloud infrastructure. Given that CloudStack is open source, and we’ve previously suggested that Open Source Cloud Platforms should be hacked for Energy and Emissions reporting, we suggested to GreenQloud that they contribute their code back into the CloudStack project. They were very open to the idea. Watch this space.
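For the curious, here is the back-of-the-envelope shape of the per-tenant energy and emissions reporting we would like to see contributed to platforms like CloudStack. Everything below is an illustrative assumption; the figures, field names and attribution method are ours, not CloudStack APIs or GreenQloud numbers.

```python
# Back-of-the-envelope sketch of per-tenant emissions reporting of the kind
# we'd like to see in open source cloud platforms. All values illustrative.

def tenant_emissions(host_power_kw, pue, grid_kg_co2_per_kwh,
                     tenant_vcpus, host_vcpus, hours):
    """Attribute a host's energy (and its carbon) to a tenant by vCPU share."""
    share = tenant_vcpus / host_vcpus
    energy_kwh = host_power_kw * pue * hours * share
    return energy_kwh, energy_kwh * grid_kg_co2_per_kwh

# e.g. a 0.4 kW host, PUE 1.2, on a near-zero-carbon grid like Iceland's
kwh, kg = tenant_emissions(0.4, 1.2, 0.01,
                           tenant_vcpus=4, host_vcpus=16, hours=720)
print(f"{kwh:.1f} kWh, {kg:.2f} kg CO2 this month")
```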

We’ve also met with CloudSigma, an IaaS provider based in Switzerland. CloudSigma were very interested when I raised this discussion with them at the GigaOm Structure event in Amsterdam earlier this year and they hope to have energy and emissions reporting ready to demonstrate very soon. In a way though, the discussions with CloudSigma went much as expected. We were after all, preaching to the converted. CloudSigma have a good environmental track record having announced that they are carbon neutral back in June 2010.

And finally, last week at the SapphireNow event in Madrid, we had a discussion about cloud providers lack of transparency with Jim Hagemann Snabe, co-CEO of SAP. Jim is an interesting guy. We’ve been covering SAP events for several years now, and every time we’ve heard Jim get up to speak, within the first few sentences he references resource constraints and sustainability. He drives an electric car. He’s totally bought into being green. He’s also a proponent of transparency. So when we raised the issue of the lack of transparency with Jim, his eyes lit up and he got all excited. We had a great conversation on the topic which he concluded by saying “I want SAP to be a leader in this space”.

All very positive stuff. There is still no actual movement, but things appear to be going in the right direction.

Image credits Tom Raftery

 

(Cross-posted @ GreenMonk: the blog)

Why are Salesforce hiding the emissions of their cloud?

Salesforce incorrect carbon data
The lack of transparency from Cloud computing providers is something we have discussed many times on this blog – today we thought we’d highlight an example.

Salesforce dedicates a significant portion of its site to Sustainability and to “Using cloud computing to benefit our environment”. They even have nice calculators and graphs of how Green they are. This all sounds very promising, especially the part where they mention that you can “Reduce your IT emissions by 95%”, so where is the data to back up these claims? Unfortunately, the data is either inaccurate or missing altogether.

For example, Salesforce’s carbon calculator (screen shot above) tells us that if an organisation based in Europe moves its existing IT platform (with 10,000+ users) to the Salesforce cloud, it will reduce its carbon emissions by 87%.

This is highly suspect. Salesforce’s data centers are in the US (over 42% of electricity generated in the US comes from coal) and Singapore, where all but 2.6% of electricity comes from petroleum and natural gas [PDF].

On the other hand, if an organisation’s on-premise IT platform in Europe is based in France, it is powered roughly 80% by nuclear power, which has a very low carbon footprint. If it is based in Spain, almost 40% of its power comes from renewables [PDF]. Any move from there to the Salesforce cloud will almost certainly lead to a significant increase in carbon emissions, not a reduction, and certainly not a reduction of 87% as Salesforce’s calculator claims above.
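A rough worked example shows why. Emissions for the same workload scale with the carbon intensity of the grid feeding the data centre; the intensity and load figures below are ballpark illustrations we have chosen, not official numbers from Salesforce or any grid operator.

```python
# Rough illustration: the same IT load's emissions scale with the grid's
# carbon intensity. Intensities are ballpark kg CO2/kWh, for illustration.

GRID_INTENSITY = {"France (nuclear-heavy)": 0.09,
                  "Spain (high renewables)": 0.24,
                  "US coal-heavy region": 0.60}

def annual_emissions(it_load_kw, pue, intensity_kg_per_kwh):
    return it_load_kw * pue * 8760 * intensity_kg_per_kwh   # kg CO2 per year

for grid, intensity in GRID_INTENSITY.items():
    tonnes = annual_emissions(50, 1.5, intensity) / 1000
    print(f"{grid}: {tonnes:.1f} t CO2/yr")
# Even a cloud data centre with a better PUE struggles to offset a
# several-fold difference in grid carbon intensity.
```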

Salesforce incorrect carbon data

Salesforce also has a Daily Carbon Savings page. Where to start?

To begin with, the first time we took a screen shot of this page was on October 1st for slide 26 of this slide deck. The screen shot on the right was taken this morning. As you can see, the “Daily Carbon Savings” data hasn’t updated a single day in the meantime. It is now over two months out-of-date. But that’s probably just because of a glitch which is far down Salesforce’s bug list.

The bigger issue here is that Salesforce is reporting on carbon savings, not on its carbon emissions. Why? We’ve already seen (above) that their calculations around carbon savings are shaky, at best. Why are they not reporting the much more useful metric of carbon emissions? Is it because their calculations of emissions are equally shaky? Or, is it that Salesforce are ashamed of the amount of carbon they are emitting given they have sited their data centers in carbon intensive areas?

We won’t know the answer to these questions until Salesforce finally do start reporting the carbon emissions of their cloud infrastructure. In a meaningful way.

Is that likely to happen? Yes, absolutely.

When? That’s up to Salesforce. They can choose to be a leader in this space, or they can choose to continue to hide behind data obfuscation until they are forced, by either regulation or competitive pressure, to publish their emissions.

If we were Salesforce, we’d be looking to lead.

Image credits Tom Raftery


(Cross-posted @ GreenMonk: the blog)

Sustainability, social media and big data

The term Big Data has become the buzzword du jour in IT, popping up everywhere, but with good reason: more data is being collected, curated and analysed today than ever before.

Dick Costolo, CEO of Twitter, announced last week that Twitter is now publishing 500 million tweets per day. Not only is Twitter publishing them, it is organising them and storing them in perpetuity. That’s a lot of storage, and 500 million tweets per day (and rising) is big data, no doubt.

And Facebook similarly announced that 2.5 billion content items are shared per day on its platform, and it records 2.7 billion Likes per day. Now that’s big data.

But for really big data, it is hard to beat the fact that CERN’s Large Hadron Collider creates 1 petabyte of information every second!

And this has what to do with Sustainability, I hear you ask.

Well, it is all about the information you can extract from that data – and there are some fascinating use cases starting to emerge.

A study published in the American Journal of Tropical Medicine and Hygiene found that Twitter was as accurate as official sources in tracking the cholera epidemic in Haiti in the wake of the deadly earthquake there. The big difference is that Twitter was two weeks faster at predicting it. There’s a lot of good that can be done in crisis situations with a two-week head start.

Another fascinating use case I came across is using social media as an early predictor of faults in automobiles. A social media monitoring tool developed by Virginia Tech’s Pamplin College of Business can provide car makers with an efficient way to discover and classify vehicle defects. Although still at an early stage of development, it shows promising results, and anything which can improve the safety of automobiles can have a very large impact (no pun intended!).

GE's Grid IQ Insight social media monitoring tool

GE have come up with another fascinating way to mine big data for good. Their Grid IQ Insight tool, slated for release next year, can mine social media for mentions of electrical outages. When those posts are geotagged (as many social media posts now are), utilities using Grid IQ Insight can get an early notification of an outage in its area. Clusters of mentions can help with confirmation and localisation. Photos or videos added of trees down, or (as in this photo) of a fire in a substation can help the utility decide which personnel and equipment to add to the truckroll to repair the fault. Speeding up the repair process and getting customers back on a working electricity grid once again can be critical in an age where so many of our devices rely on electricity to operate.
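The underlying idea is straightforward to sketch. The following is a hypothetical illustration of clustering geotagged outage mentions; the names, thresholds and sample posts are all invented, and this is not GE's actual Grid IQ Insight code.

```python
# Hypothetical sketch: cluster geotagged "power is out" posts to flag a
# likely local outage. Greedy grouping, invented thresholds and data.

from math import radians, sin, cos, asin, sqrt

def km_between(a, b):
    """Haversine distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def outage_clusters(posts, radius_km=2.0, min_mentions=3):
    """Posts within radius_km of a cluster's first post join that cluster."""
    clusters = []
    for post in posts:
        for c in clusters:
            if km_between(post["geo"], c[0]["geo"]) <= radius_km:
                c.append(post)
                break
        else:
            clusters.append([post])
    return [c for c in clusters if len(c) >= min_mentions]

posts = [{"text": "power's out again!", "geo": (37.77, -122.42)},
         {"text": "no electricity on our street", "geo": (37.76, -122.43)},
         {"text": "substation fire?!", "geo": (37.77, -122.41)},
         {"text": "lights out here too", "geo": (40.71, -74.01)}]
for cluster in outage_clusters(posts):
    print(len(cluster), "mentions near", cluster[0]["geo"])
```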

Finally, many companies are now using products like Radian6 (now re-branded as Salesforce Marketing Cloud) to actively monitor social media for mentions of their brand, so they can respond in a timely manner. Gatorade in the video above is one good example. So too are Dell. Dell have a Social Media Listening Command Centre which is staffed by 70 employees who listen for and respond to mentions of Dell products 24 hours a day in 11 languages (English, plus Japanese, Chinese, Portuguese, Spanish, French, German, Norwegian, Danish, Swedish, and Korean). The sustainability angle of this story is that Dell took their learnings from setting up this command centre and used them to help the American Red Cross set up a similar command centre. Dell also contributed funding and equipment to help get this off the ground.

No doubt the Command Centre is proving itself invaluable to the American Red Cross this week mining big data to help people in need in the aftermath of Hurricane Sandy.

(Cross-posted @ GreenMonk: the blog)

The Switch SuperNAP data centre – one of the most impressive I’ve been in

Switch SuperNAP data centre
If you were going to build one of the world’s largest data centres you wouldn’t intuitively site it in the middle of the Nevada desert, but that’s where Switch sited their SuperNAPs campus. I went on a tour of the data centre recently when in Las Vegas for IBM’s Pulse 2012 event.

The data centre is impressive. And I’ve been in a lot of data centres (I’ve even co-founded and been part of the design team of one in Ireland).

The first thing which strikes you when visiting the SuperNAP is just how seriously they take their security. They have outlined their various security layers in some detail on their website but nothing prepares you for the reality of it. As a simple example, throughout our entire guided tour of the data centre floor space we were followed by one of Switch’s armed security officers!

The data centre itself is just over 400,000 sq ft in size, with plenty of room within the campus to build out two or three more similarly sized data centres should the need arise. And although the data centre is one of the world’s largest, at 1,500 Watts per square foot it is also quite dense. This facilitates racks of 25kW, and during the tour we were shown cages containing 40 x 25kW racks which were being handled with apparent ease by Switch’s custom cooling infrastructure.

Switch custom cooling infrastructure

Because SuperNAP wanted to build out a large-scale, dense data centre, they had to custom design their own cooling infrastructure. They use a hot aisle containment system, with the cold air coming in from overhead and the hot air drawn out through the top of the contained aisles.

The first immediate implication of this is that there are no raised floors required in this facility. It also means that walking around the data centre, you are walking in the data centre’s cold aisle. As part of the design of the facility, the t-scifs (thermal separate compartments in facility – heat containment structures) are where the contained hot aisles’ air is extracted, and the external TSC600 quad-process chiller systems generate the cold air externally for delivery to the data floor. This form of design means that there is no need for any water piping within the data room, which is a nice feature.

Through an accident of history (involving Enron!) the SuperNAP is arguably the best-connected data centre in the world, a fact they can use to the advantage of their clients when negotiating connectivity pricing. Consequently, connectivity in the SuperNAP is some of the cheapest available.

As a result of all this, the vast majority of enterprise cloud computing providers have a base in the SuperNAP. So too does the 56 petabyte eBay Hadoop cluster – yes, 56 petabytes!

US electricity generation

Given that I have regularly bemoaned cloud computing’s increasing energy and carbon footprint on this blog, you won’t be surprised to know that one of my first questions to Switch was about their energy provider, NV Energy.

According to NV Energy’s 2010 Sustainability Report [PDF], coal makes up 21% of the generation mix and gas accounts for another 63.3%. While 84% of electricity generation coming from fossil fuels sounds high, the 21% figure for coal is low by US standards, as the graph on the right details.

Still, it is a long way off the 100% of electricity from renewables that Verne Global’s new data centre has.

Apart from the power generation profile, which in fairness to Switch is outside their control (and could be considerably worse), the SuperNAP is, by far, the most impressive data centre I have ever been in.

Photo Credit Switch


Is Cloud Computing Green?

I gave the keynote address at the Digital Trends 2011 event organised by HePIS and CEPIS in Athens recently. My talk was on Cloud Computing’s Green Potential and in my presentation, I claimed that Cloud Computing is NOT Green.

I started the talk by explaining what Cloud Computing is and the many advantages it can bring to companies. However, because none of the Cloud providers are publishing energy figures around Cloud computing, we can’t say whether or not Cloud computing is energy efficient.

I went on to point out that even if Cloud is energy efficient (and we have no proof that it is), that is not the same thing as being Green.

My slides are available on my SlideShare account and a transcript of my talk is here:

Okay, so my talk this morning is on Cloud Computing and its Green Potential. So a quick couple of words about myself.

So my name is Tom Raftery, I work for an industry analyst firm called RedMonk. My area of interest within RedMonk or the area I specialize in is energy and sustainability. We have termed the practice within RedMonk that concentrates on energy and sustainability GreenMonk. So the place that I blog at is at GreenMonk.net.

And a little bit about my past. I worked in an organization called Zenith Solutions back in the 90s and early 2000s, and Zenith Solutions was a software company creating what has now become termed cloud applications. At that time we called them web applications; they were web-based software with the database backend online.

Then I worked for a company called Chip Electronics in the early 2000s, and Chip Electronics was again a company which created Enterprise Resource Planning (ERP) applications which were cloud delivered. At the time we called it Software as a Service... no, at the time we called it application service provision, which has since become Software as a Service. And I am also a co-founder and director of CIX, which is a hyper energy-efficient data centre based in Cork in Ireland. So I know cloud from both the hardware side and the software side.

I mentioned my blog at GreenMonk.net, and I am on Twitter at twitter.com/tomraftery. My email address is there, my mobile phone number is there, please don't ring it now. And this site here, slideshare.net, the last line there (and I am sorry for the bullet points, I don't normally use them, but I did just here and in one other slide) – slideshare.net is a site where you can upload presentations.

So, this presentation I am giving this morning, I uploaded it to SlideShare earlier this morning, so it's already online there at that site, and if you go there now you'll see it has already been viewed over 277 times so far. So it's a great site for getting your talks out and available; the presentation is also downloadable there.

One thing you'll notice as well about the structure of my talks is a lot of them have images like this, but they also have this bit of text at the bottom which you can't read (don't try right now); those are links to the source material. So, if at any point you do download the presentation, you can go and click on the links; they are clickable, and you can see where I've got the information from.

So that’s me, who are you guys?

A couple of questions. So how many people here have deployed applications to the cloud? Not very many. How many plan to? A few more, okay. How many people here think that cloud computing is green? Okay, a good few people. Right, I hope to burst that bubble. Unlike Nancy who spoke just a minute ago, I am not a believer that cloud computing is green, and I hope to explain why. I am a huge fan of cloud computing, I have to say, I use it extensively. Going back to the slide for a second.

The Chip application, the Zenith stuff, GreenMonk, Twitter, SlideShare, even my email are all cloud delivered. In our organization, RedMonk, we use Google Apps for Domains for our email, so my email is cloud delivered as well. So I am a big user of, and believer in, cloud for lots of things. But I just don't happen to believe it's green.

So what is cloud computing? Well, at first blush it's software that's delivered in a browser, so that's a very easy definition of it, something we can all sign up to. It's a lot more complex than that at various other levels, and I'll go through a couple of those levels as well, just very briefly, to give you a sense of the complexity that's involved in it, but I am not going to go into any great depth. It's also nothing that's very new; this is the original sign-up screen for Hotmail.

Hotmail was an email application developed and sold to Microsoft back in '97 for $450 million, if memory serves. But this was before it was sold to Microsoft; this was the original sign-up screen when they launched in July '96, and it was one of the first widely used Software as a Service or cloud applications.

So cloud is nothing new, it keeps getting rebranded. The cloud name is newish alright, but the delivery mechanism is not that new. It actually harks back to mainframe computing in the 60s.

So there are several types of cloud computing, and the first type, the first level of cloud computing, is Software as a Service. That's where you take your packaged software and convert it into something, as I mentioned already, delivered in a browser. And you probably are aware of these: I mentioned Hotmail and its analogs the Google applications, there is also Zoho, there is social networking, the Twitter that I mentioned, SlideShare, all these kinds of things. They are all Software as a Service.

So they are just basic applications that you access through a browser. But you can go back one level of abstraction from that to what's called Platform as a Service. And don't worry about these acronyms; a lot of the time you don't need to know this stuff. The Platform as a Service stuff is where, as I say, you go back one level of abstraction and you give people a platform on which to deploy cloud applications.

And the kind of platforms that you can get are ones like Google App Engine, Amazon, and Microsoft's Azure; these are the kind of platforms that are available if you want to go down that route. Most people don't need to go there, but if you do, that kind of stuff is available as well. And then you can go back one further level of abstraction where you are actually delivering hardware as a service, and this is called Hardware as a Service or Infrastructure as a Service (both names are valid, HaaS for hardware or IaaS for infrastructure), and that's where you're delivering stuff like networking, storage, compute, CPU cycles, that kind of thing, as a service.

And VMware, Rackspace, OpenStack, and again Amazon with their EC2 and S3 services, are those kinds of cloud computing. If that's a little confusing, and I know it can be, this is a slide which is also confusing, but if you actually stop and study it in your own time (you can download the presentation if you are interested in it), it's a good way of seeing how the different types of cloud computing stack up, as it were.

So over here on the very left, you have your traditional packaged software, with the entire stack from networking up through applications, where you manage the entire stack on your machine. So that's the traditional Microsoft Office or whatever applications; you do the whole thing.

Over on the other side you've got your Software as a Service, something like Google Apps for Domains or one of these things, where the provider, Google or whoever, is responsible for the entire stack; all you have is a browser. And then in the middle you have the two other ones: Platform as a Service, where the vendor manages up to here and you manage the applications and data, and Infrastructure as a Service, where the vendor manages just this part and you manage the rest.

So that’s the kind of way it stacks up. As I say on the deck itself there is a link down there to where you can find that image if you are interested in checking into it. It’s quite a nice way of seeing the differences between the different types of cloud computing.

And then just to complicate things a little further, there are different deployment mechanisms. You can have private cloud, which is hosted by yourself on your own infrastructure behind your own firewall. You can have public cloud, which is what most people are familiar with. Or you can have a hybrid, where you have some stuff private, some stuff public, and that's one a lot of people are looking at, because it means you can have your data behind your firewall but access the functionality from the public side. So your stuff remains on premise.

And that's quite important because, as Nancy alluded to, there can be a lot of issues with the data in cloud computing. For example, if you are a European company, do you really want your data hosted on servers in US territories, where the data privacy laws are a lot more lax? I have spoken to several European companies who have said categorically they will not use cloud computing if their data is going to be hosted in US territories. It's only if it's in the EU, and only if they know where in the EU. So you are noticing cloud providers taking that on board and starting to become aware of those issues, and while they can't change US law, they can start providing storage mechanisms that are guaranteed to be in region.

So that's cloud computing, and the next question we get to is: is this really energy efficient? Because lots of people say it is, and even Nancy alluded to that report from the Carbon Disclosure Project, which I'll blow apart in a minute. They aren't the only ones; Microsoft, Accenture and WSP Environment brought out this study in November of last year. And this is the actual title of the study, where they say it shows significant energy and carbon emissions reduction potential from cloud computing, and again the link to the report is down there at the bottom.

The difficulties I have with that are several. First, Microsoft are a cloud computing provider, so they kind of have skin in the game. The second is that they don't actually use any hard data, it's all imputed. And the third is that after months and months of work from all these people, the best they could come up with is to say it has potential. Yeah, it has potential to end world hunger and bring on world peace and fix the euro; anything can have potential. So that's a non-report.

Cloud computing has phenomenal advantages, don't get me wrong, I am a big fan. So if you are into traditional IT, you know well that if you are deploying a new application or a new server it's painstaking. You have to go through an RFP process, a tender process, a PO process. You have to go to tender, and when you place the order, the order can then take several weeks from the supplier. When it comes in, it goes into the logistics area, you have to get the guys in warehousing to tell you where the server is, you have to get the server, you have to put the company image on the server, you have to install the applications, you have to do testing, you have to patch the server, and the list goes on and on. Basically, if you want to deploy a new server, it's a process that can take weeks to months.

You deploy a cloud application and there is usually no RFP and no PO process, because the capital cost is minimal. So typically the time to deploy for a cloud application shrinks from weeks or months to hours or minutes, depending on what you are deploying. So, phenomenal; cloud is fantastic for streamlining that kind of stuff.

It's also great for what's called dynamic provisioning. So this is the Alexa graph, the website traffic of the website for the Australian Open. The Australian Open is a big tennis competition which happens in Australia every January. So you'll notice 11 months of the year there's no traffic to the site; come December and January, vroom, a spike. That's 2006; 2007, a larger spike; 2008, a larger spike; and the spikes keep getting bigger as you go in that direction.

So if you were the website owner for the AustralianOpen.com website and there were no cloud computing options, you would need to have servers that could handle the traffic at this growing spike for 12 months of the year, when the traffic is only there one month of the year. But with dynamic provisioning and cloud computing you can use the elasticity of the cloud to turn up the resources assigned to that site as the traffic starts to build up in December and January, and then as the traffic falls off, you turn it back down again.

So in that respect cloud computing is fantastic as well; you are not using resources needlessly. You've also got the idea of multi-tenancy, and if you can't see what's in this picture, it's actually a Mini Cooper with 26 people inside her. EMC sponsored it as the world record attempt to fit people into a Mini Cooper, and they fit 26 people into it. So just as they stuffed people into the car, with multi-tenancy in cloud computing you are sharing applications across companies; lots of different companies, often competitors, are using the same single version of the application.

And that's fantastic, that adds greater value. You know, you have only one instance of the application, which is great as well for updates: updates of the application are instantly deployed. You don't have to download the latest update and apply it to the test server and make sure it works in the environment, the whole thing; it's just instantly on.

Then there is the issue of server utilization, which again Nancy referred to (Nancy, you stole my talk, come on). So this is a typical graph of server utilization; you can see this is the memory part, but this is the server utilization and it's at zero percent here. And while that's a bit of an outlier, in a normal server you'll often get utilizations in single digits, 7 or 8% server utilization, for traditional servers in data center environments. But with the advent of virtualization and cloud computing you can ramp that up significantly. So that should be quite energy efficient.

Then you have got this kind of outlier thing called chasing the moon, which you may or may not have heard of. It's one I am kind of fond of as an idea, but not many people have deployed it yet; people are kind of talking about it as out there. What it is, is that with cloud computing, if you've got data centers in, say, the US West Coast, another in Northern or Southern Europe (Northern Europe typically, because it's cooler there, and by cooler I mean colder, not more 'hip'), and another data center somewhere in Asia or Eastern Russia, then you've got the time zones covered, about eight hours apart. So if you have an application in those three centers, you can move the compute to where energy is cheapest at any particular point in time. Typically energy is cheapest when it's in highest supply, and when it's in highest supply and cheapest (this is on the wholesale markets), it's actually greenest as well.

So when electricity is at its cheapest, it's actually also at its greenest. It's kind of counter-intuitive, but I can explain that later to anyone who is interested.

So if you move your compute to where the energy is cheapest at any point in time (typically night time, when the wind is blowing), you are putting your applications wherever the moon is out, and that's why it's called chasing the moon.

And so it's something that's only made possible by the likes of cloud computing. Your information is ubiquitous, it's wherever you have an internet connection, so your road warriors, your sales people on the road, can access the application while sitting on the beach.

It also enables a lot more home working, homeshoring, teleworking, whatever you want to call it. And people like AT&T, IBM, lots of big companies are huge fans of this. IBM reported a couple of years back that 25% of their employees telework, and those 25% were saving IBM $700 million a year. That's significant savings, and a lot of that saving comes from a lower real estate footprint, and a lower energy footprint because of the lower real estate footprint.

So is it energy efficient? Are a lot of those savings coming from less commuting, or from less building stock, or are they from offsetting your energy? Because if you are working from home you are still burning energy, it's just not in your company's building, so your company isn't accounting for it anymore. These are the kind of questions we are not sure of; there haven't been any definitive studies either way, and it's difficult anyway because it differs in every company and every geography.

One huge problem I have with cloud computing, and with people saying that cloud computing is energy efficient, is that none of the cloud providers are publishing data around their energy utilization, not one of them. So I often do a kind of hands-up exercise at this point, and I don't know if it'll work here, because very few people admitted that they were going to be putting stuff in the cloud, but let's raise hands again. Hands up everyone who has deployed, or plans to deploy, applications to the cloud? Okay, keep your hands up. Now keep your hands up if you know the current energy utilization of the applications you are going to deploy to the cloud, or of the applications you have already deployed to the cloud; if you know how much energy your applications burn, keep your hands up. Okay, we got one, anyone else, just the one? Good. Okay, keep your hand up, we are not finished. Keep your hand up if you know the energy utilization of that application in the cloud. You do? Is it a private cloud?

And they are giving you the energy utilization of that?

SAP’s 2010 Sustainability Report demo’d

I had a Skype chat recently with SAP’s Chief Sustainability Officer Peter Graf where he gave me a demo of their new 2010 Sustainability report.

With Peter’s permission, I recorded the demo for publication on YouTube. The video above is the result and the transcription is below.

Some highlights Peter mentioned include:

  1. Sustainability reporting has saved SAP €170 million (!),
  2. SAP are updating their Sustainability report quarterly and are embedding it more and more closely with their financial reporting and,
  3. SAP have deep social media embedding in their report

With this report, SAP have put clear blue water between themselves and any other sustainability report. SAP can still take it up another few notches (productising it, putting an API in front of it, publishing in XBRL, etc.), but this is the kind of reporting everyone needs to be moving to, as a baseline. Kudos to SAP for once again setting the bar with this report.

Now here’s the transcription of the demo:

Tom Raftery: Hi, everyone. Welcome to GreenMonk TV. We are talking today to SAP’s Chief Sustainability Officer, Peter Graf, who is going to give us a quick demo of the new 2010 SAP Sustainability Report.

Peter Graf: So, this is SAP’s 2010 Sustainability Report, which people can find online at sapsustainabilityreport.com. The report lays out the three key areas of impact for SAP. In the first place, SAP wants to become a more sustainable company, so we are talking about our own sustainability performance. The second section of the report is about how SAP helps customers to run more profitably and sustainably, so that’s mostly a conversation about our applications and software solutions.

And then finally, there is a section on how people at SAP drive opportunity for others through IT. And then, certainly the last part, as always when we put our report online, is about encouraging interaction and dialogue between us and those who come and visit the report. We call that section Do Your Part, and it describes how everyone can contribute.

Tom Raftery: Great. Can you show me some of the details of how SAP have done in the last year? How does it look onscreen, because it’s very different from any other sustainability report that’s out there?

Peter Graf: Exactly. So before we go there, the data that we talk about is all assured by KPMG, and there are two levels of assurance, and yes, this report is A+ from a GRI perspective. It's got the best rating that you can get from GRI. It complies with a whole variety of standards, but most importantly, we have not only done limited assurance on our greenhouse gas numbers, we've actually gone for reasonable assurance, meaning the assurance company actually assures that this is really our footprint. And we do that because we believe in the future there will be much more scrutiny around how people are reporting greenhouse gas emissions.

And that’s what the greenhouse gas emissions look like. You can see the trend from 2000 to 2007; we’ve always increased our emissions. In 2007, we set ourselves the goal to reduce our emissions step-by-step back to the level of 2000 by the year 2020, so we have an absolute carbon target. That is pretty aggressive considering that in 2000, we had about 24,000 employees and already today in 2011, we have more than 50,000 employees and we want to obviously continue to grow as a company.

You can also see that we have kind of flipped the chart to kind of visually highlight that emissions are seen as a liability to SAP so they show below the line.

Tom Raftery: And clicking on any of those bars redraws the kind of pie chart on the right?

Peter Graf: Exactly, so you can go and drill into the different years and you can see how the emissions change. For example in 2008, we had 31% of our emissions from flights that also tells you that we include Scope 1, 2 and 3 emissions in our calculation.

That number dropped dramatically in 2009, given that in the times of economic crisis, we just don’t service as many customers, so you can see that here. And then in 2010, the number continues in absolute terms to be reduced, which is amazing given that we have actually increased our revenues by 17% in 2010 while reducing our emissions. You can see that very nicely when you look at the carbon emissions on a Euro basis. We are now at 33.9 grams per Euro revenue and in 2008, that number was 45.6 grams.

So, in terms of carbon efficiency we have dramatically accelerated, and you can drill into different areas, for example revenue in the Americas. You can actually go and look at different scopes and include or exclude them in the computation. So that's the benefit of having this kind of interactivity.

Tom Raftery: The obvious question that comes to mind then is, if you are spending all this money on getting carbon out of your system, out of your organization, it must be costing the company a small fortune…

Facebook open sources building an energy efficient data center

Facebook's new custom-built Prineville Data Centre

(Photo credit Facebook’s Chuck Goolsbee)

Back in 2006 I was the co-founder of a Data Centre in Cork called Cork Internet eXchange. We decided, when building it out, that we would design and build it as a hyper energy-efficient data centre. At the time, I was also heavily involved in social media, so I had the crazy idea, well, if we are building out this data centre to be extremely energy-efficient, why not open source it? So we did.

We used blogs, Flickr and video to show everything from the arrival of the builders on-site to dig out the foundations, right through to the installation of customer kit and beyond. This was a first. As far as I know, no-one had done this before and, to be honest, as far as I know, no-one since has replicated it. Until today.

Today, Facebook is lifting the lid on its new custom-built data centre in Prineville, Oregon.

Not only are they announcing the bringing online of their new data centre, but they are open sourcing its design, specifications and even telling people who their suppliers were, so anyone (with enough capital) can approach the same suppliers and replicate the data centre.

Facebook are calling this the OpenCompute project and they have released a fact sheet [PDF] with details on their new data center and server design.

I received a pre-briefing from Facebook yesterday where they explained the innovations which went into making their data centre so efficient and boy, have they gone to town on it.

Data centre infrastructure
On the data centre infrastructure side of things, building the facility in Prineville, Oregon (a high desert area of Oregon, 3,200 ft above sea level with mild temperatures) will mean they will be able to take advantage of a lot of free cooling. Where they can’t use free cooling, they will utilise evaporative cooling, to cool the air circulating in the data centre room. This means they won’t have any chillers on-site, which will be a significant saving in capital costs, in maintenance and in energy consumption. And in the winter, they plan to take the return warm air from the servers and use it to heat their offices!

By moving from centralised UPS plants to 48V localised UPSs serving six racks (around 180 Facebook servers), Facebook were able to re-design the electricity supply system, doing away with some of the conversion processes and creating a unique 480V distribution system which provides 277V directly to each server, resulting in more efficient power usage. This system reduces power losses in the utility-to-server chain from an industry average of 21-27% down to Prineville's 7.5%.
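As a quick illustration of why removing conversion stages matters, here is a small worked example. The individual stage efficiencies below are invented for illustration; only the 21-27% and 7.5% end-to-end loss figures come from Facebook's fact sheet.

```python
# Illustration: end-to-end loss compounds across conversion stages, so
# removing stages cuts losses. Stage efficiencies are invented examples.

def end_to_end_loss(stage_efficiencies):
    delivered = 1.0
    for eff in stage_efficiencies:
        delivered *= eff
    return 1.0 - delivered

legacy = [0.98, 0.94, 0.96, 0.92, 0.96]   # many AC/DC and voltage steps
prineville = [0.98, 0.96, 0.985]          # fewer conversions, 277V to the server
print(f"legacy chain loss:     {end_to_end_loss(legacy):.1%}")      # ~22%
print(f"simplified chain loss: {end_to_end_loss(prineville):.1%}")  # ~7%
```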

Finally, Facebook have significantly increased the operating temperature of the data center to 80.6F (27C), which is the upper limit of the ASHRAE standards. They also confided that in their next data centre, currently being constructed in North Carolina, they expect to run it at 85F; this will save enormously on the costs of cooling. And they claim that the reduction in the number of parts in the data center means they go from 99.999% uptime to 99.9999% uptime.

New Server design…