Tag: energy efficiency

GreenTouch release tools and technologies to significantly reduce mobile networks' energy consumption

Mobile Phone

Mobile industry consortium GreenTouch today released tools and technologies which, they claim, have the potential to reduce the energy consumption of communication networks by 98%.

The world is now awash with mobile phones.

According to Ericsson’s June 2015 mobility report [PDF warning], the total number of mobile subscriptions globally in Q1 2015 was 7.2 billion. By 2020, that number is predicted to increase by another 2 billion to 9.2 billion subscriptions.

Of those 7.2 billion subscriptions, around 40% are associated with smartphones, and this number is increasing daily. In fact, the report predicts that by 2016 the number of smartphone subscriptions will surpass those of basic phones, and smartphone numbers will reach 6.1 billion by 2020.

Number of connected devices

When you add to that the number of connected devices now on mobile networks (M2M, consumer electronics, laptops/tablets/wearables), we are looking at roughly 25 billion connected devices by 2020.

That’s a lot of data being moved around the networks. And, as you would expect, that volume is increasing at an enormous rate as well. There was a 55% growth in data traffic between Q1 2014 and Q1 2015, and there is expected to be a 10x growth in smartphone traffic between 2014 and 2020.
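As a quick back-of-the-envelope check, a 10x rise over six years corresponds to a compound annual growth rate of roughly 47% – the same ballpark as the 55% year-on-year figure above. A sketch of that arithmetic:

```python
def cagr_percent(total_growth: float, years: int) -> float:
    """Compound annual growth rate (in percent) implied by a total growth factor."""
    return (total_growth ** (1 / years) - 1) * 100

# 10x smartphone traffic growth over the six years from 2014 to 2020:
rate = cagr_percent(10, 6)  # roughly 46.8% per year
```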

So how much energy is required to shunt all this data to and fro? Estimates cite ICT as being responsible for 2% of the world’s energy consumption, with mobile networking making up roughly half of that. With the number of smartphones set to more than double globally between now and 2020, that figure too is shooting up.

Global power consumption by telecommunications networks

Fortunately, five years ago, an industry organisation called GreenTouch was created by Bell Labs and other stakeholders in the space, with the objective of reducing mobile networking’s energy footprint. In fact, GreenTouch’s goal at its creation was to come up with technologies to reduce the energy consumption of mobile networks 1,000x by 2015.

Today, June 18th, in New York, they are announcing the results of their last five years’ work: they have come up with ways for mobile companies to reduce their consumption, not by the 1,000x they were aiming for, but by 10,000x!
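For a sense of what such factors mean in percentage terms, a trivial sketch: a reduction factor f leaves 1/f of the original consumption. (The 98% net figure for whole networks is lower because not every segment improves by the full factor.)

```python
def percent_saving(reduction_factor: float) -> float:
    """Percentage of energy saved when consumption drops by the given factor."""
    return (1 - 1 / reduction_factor) * 100

# The original 1,000x goal and the 10,000x result announced today:
goal = percent_saving(1_000)     # 99.9% saved
result = percent_saving(10_000)  # 99.99% saved
```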

The consortium also announced

research that will enable significant improvements in other areas of communications networks, including core networks and fixed (wired) residential and enterprise networks. With these energy-efficiency improvements, the net energy consumption of communication networks could be reduced by 98%

And today GreenTouch also released two tools for organisations and stakeholders interested in creating more efficient networks: GWATT and the Flexible Power Model.

They went on to announce some of the innovations which led to this potentially huge reduction in mobile energy consumption:

  • Beyond Cellular Green Generation (BCG2) — This architecture uses densely deployed small cells with intelligent sleep modes and completely separates the signaling and data functions in a cellular network to dramatically improve energy efficiency over current LTE networks.
  • Large-Scale Antenna System (LSAS) — This system replaces today’s cellular macro base stations with a large number of physically smaller, low-power and individually-controlled antennas delivering many user-selective data beams intended to maximize the energy efficiency of the system, taking into account the RF transmit power and the power consumption required for internal electronics and signal processing.
  • Distributed Energy-Efficient Clouds – This architecture introduces a new analytic optimization framework to minimize the power consumption of content distribution networks (the delivery of video, photo, music and other larger files – which constitutes over 90% of the traffic on core networks) resulting in a new architecture of distributed “mini clouds” closer to the end users instead of large data centers.
  • Green Transmission Technologies (GTT) – This set of technologies focuses on the optimal tradeoff between spectral efficiency and energy efficiency in wireless networks, optimizing different technologies, such as single user and multi-user MIMO, coordinated multi-point transmissions and interference alignment, for energy efficiency.
  • Cascaded Bit Interleaving Passive Optical Networks (CBI-PON) – This advancement extends the previously announced Bit Interleaving Passive Optical Network (BiPON) technology to a Cascaded Bi-PON architecture that allows any network node in the access, edge and metro networks to efficiently process only the portion of the traffic that is relevant to that node, thereby significantly reducing the total power consumption across the entire network.
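To give a feel for why the intelligent sleep modes in BCG2 matter, here is a toy energy model of a single small cell; the power figures are assumptions for illustration, not GreenTouch's numbers:

```python
def daily_energy_kwh(p_active_w: float, p_sleep_w: float, active_fraction: float) -> float:
    """Average daily energy of a small cell that deep-sleeps when not serving traffic."""
    avg_power_w = active_fraction * p_active_w + (1 - active_fraction) * p_sleep_w
    return avg_power_w * 24 / 1000

always_on = daily_energy_kwh(10.0, 10.0, 1.0)  # no sleep mode: 0.24 kWh/day
sleepy = daily_energy_kwh(10.0, 0.1, 0.1)      # active 10% of the time
```

With a deep sleep state and a 10% duty cycle the cell uses roughly a ninth of the energy in this sketch, which is why densely deployed small cells need aggressive sleep modes to come out ahead of a few big macro cells.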

Now that these innovations are released, mobile operators hoping to reduce their energy costs will be looking closely at how they can integrate these new tools and technologies into their networks. For many, realistically, the first opportunity to architect them in will be with the rollout of 5G networks post-2020.

Mobile phone mast

Having met (and exceeded) its five year goal, what’s next for GreenTouch?

I put this question to GreenTouch Chairman Thierry Van Landegem on the phone earlier in the week. He replied that the organisation is now looking to set a new, bold goal. They are looking at the energy efficiency of areas such as cloud, network virtualisation and the Internet of Things, and will likely announce their next objective early next year.

I can’t wait to see what they come up with next.


Mobile phone mast photo Pete S

Smartphone energy management – when will there be an app for that?

Mobile energy saving app?
I wrote a post last week about mobile endpoint management applications and their potential to extend smartphone battery life. It seems it was a prescient piece given the emergence this week of a study from Purdue University and Microsoft Research showing how energy is used by some smartphone applications [PDF].

The study indicates that many free, ad-supported applications expend most of their energy on serving the ads, as opposed to on the application itself. As an example, the core part of the free version of Angry Birds on Android uses only 18% of the total app energy. Most of the rest of the energy is used in gathering location and handset details for upload to the ad server, downloading the ad, and the 3G tail.
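The "3G tail" is the period during which the radio lingers in a high-power state after a transfer finishes. A toy sketch of why it dominates small transfers like ad downloads (the durations and power levels here are assumptions, not the study's measurements):

```python
def transfer_energy_j(active_s: float, active_w: float, tail_s: float, tail_w: float) -> float:
    """Energy of a radio transfer: time spent actively sending/receiving,
    plus the high-power 'tail' after the data stops flowing."""
    return active_s * active_w + tail_s * tail_w

naive = transfer_energy_j(2.0, 1.0, 0.0, 0.0)    # 2 s ad download, tail ignored
actual = transfer_energy_j(2.0, 1.0, 10.0, 0.6)  # ~10 s tail included
```

In this sketch the tail quadruples the energy cost of the download, which is why frequent small ad fetches are so expensive relative to the app itself.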

This behaviour was similar in other free apps tested on Android, such as Free Chess and NYTimes. An energy bug found in Facebook, which caused the app to drain power even after termination, was confirmed fixed in the next version released (v1.3.1).

The researchers also performed this testing on Windows Mobile 6.5, but only the Android results are discussed in the published paper.

Inmobi’s Terence Egan pushed back against some of the findings noting that

In one case, the researchers only looked at the first 33 seconds of usage when playing a chess game.

Naturally, at start up, an app will open communications to download an ad. Once the ad has been received, the app shouldn’t poll for another ad for some time.

Over the time it takes to play a game of chess (the computer usually beats me in 10 minutes), a few ad calls are dwarfed by the energy consumption of the screen, the speakers, and the haptic feedback…

HP joins ranks of microserver providers with Redstone

Redstone server platform
The machine in the photo above is HP’s newly announced Redstone server development platform.

Capable of fitting 288 servers into a 4U rack enclosure, it packs a lot of punch into a small space. The servers are System-on-a-Chip designs based on Calxeda ARM processors but, according to HP, future versions will include “Intel® Atom™-based processors as well as others”.

These are not the kind of servers you deploy to host your blog and a couple of photos. No, these are the kind of servers deployed by the shedload by hosting or cloud companies to get the maximum performance for the minimum energy hit. This has very little to do with these companies developing a sudden green conscience; rather, it is the rising energy cost of running server infrastructure that is the primary motivator here.

This announcement is part of a larger move by HP (called Project Moonshot), designed to advance HP’s position in the burgeoning low-energy server marketplace…

Carbon Disclosure Project’s emissions reduction claims for cloud computing are flawed

data center
The Carbon Disclosure Project (CDP) is a not-for-profit organisation which takes in greenhouse gas emissions, water use and climate change strategy data from thousands of organisations globally. This data is voluntarily disclosed by these organisations and is CDP’s lifeblood.

Yesterday the CDP launched a new study, Cloud Computing – The IT Solution for the 21st Century, a very interesting report which

delves into the advantages and potential barriers to cloud computing adoption and gives insights from the multi-national firms that were interviewed

The study, produced by Verdantix, looks great on the surface. They have talked to 11 global firms that have been using cloud computing for over two years and they have lots of data on the financial savings made possible by cloud computing. There is even reference to other advantages of cloud computing – reduced time to market, capex to opex, flexibility, automation, etc.

However, when the report starts to reference the carbon reductions potential of cloud computing it makes a fundamental error. One which is highlighted by CDP Executive Chair Paul Dickinson in the Foreword when he says

allowing companies to maximize performance, drive down costs, reduce inefficiency and minimize energy use – and therefore carbon emissions

[Emphasis added]

The mistake here is presuming a direct relationship between energy and carbon emissions. While this might seem like a logical assumption, it is not necessarily valid.

If I have a company whose energy retailer is selling me power generated primarily by nuclear or renewable sources for example, and I move my applications to a cloud provider whose power comes mostly from coal, then the move to cloud computing will increase, not decrease, my carbon emissions.
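The underlying arithmetic is simply energy consumed multiplied by the carbon intensity of the grid supplying it. A sketch with illustrative (assumed) intensities shows how a move that cuts energy can still raise emissions:

```python
def annual_co2_tonnes(energy_mwh: float, grid_intensity_t_per_mwh: float) -> float:
    """Emissions = energy consumed x carbon intensity of the supplying grid."""
    return energy_mwh * grid_intensity_t_per_mwh

on_premise = annual_co2_tonnes(100, 0.05)  # clean, largely nuclear/renewable grid
in_cloud = annual_co2_tonnes(60, 0.9)      # 40% less energy, coal-heavy grid
```

Here the cloud option uses 40% less energy yet emits roughly ten times as much CO2, which is exactly the scenario the report's assumption ignores.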

The report goes on to make some very aggressive claims about the carbon reduction potential of cloud computing. In the executive summary, it claims:

US businesses with annual revenues of more than $1 billion can cut CO2 emissions by 85.7 million metric tons annually by 2020

and

A typical food & beverage firm transitioning its human resources (HR) application from dedicated IT to a public cloud can reduce CO2 emissions by 30,000 metric tons over five years

But because these are founded on an invalid premise, the report could just as easily have claimed:

FaceBook open sources building an energy efficient data center

FaceBook's new custom-built Prineville Data Centre

(Photo credit FaceBook’s Chuck Goolsbee)

Back in 2006 I was the co-founder of a Data Centre in Cork called Cork Internet eXchange. We decided, when building it out, that we would design and build it as a hyper energy-efficient data centre. At the time, I was also heavily involved in social media, so I had the crazy idea, well, if we are building out this data centre to be extremely energy-efficient, why not open source it? So we did.

We used blogs, flickr and video to show everything from the arrival of the builders on-site to dig out the foundations, right through to the installation of customer kit and beyond. This was a first. As far as I know, no-one had done it before and, to be honest, no-one since has replicated it. Until today.

Today, Facebook is lifting the lid on its new custom-built data centre in Prineville, Oregon.

Not only are they announcing the bringing online of their new data centre, but they are open sourcing its design, specifications and even telling people who their suppliers were, so anyone (with enough capital) can approach the same suppliers and replicate the data centre.

Facebook are calling this the OpenCompute project and they have released a fact sheet [PDF] with details on their new data center and server design.

I received a pre-briefing from Facebook yesterday where they explained the innovations which went into making their data centre so efficient and boy, have they gone to town on it.

Data centre infrastructure
On the data centre infrastructure side of things, building the facility in Prineville, Oregon (a high desert area of Oregon, 3,200 ft above sea level with mild temperatures) will mean they will be able to take advantage of a lot of free cooling. Where they can’t use free cooling, they will utilise evaporative cooling, to cool the air circulating in the data centre room. This means they won’t have any chillers on-site, which will be a significant saving in capital costs, in maintenance and in energy consumption. And in the winter, they plan to take the return warm air from the servers and use it to heat their offices!

By moving from centralised UPS plants to 48V localised UPSs serving six racks (around 180 Facebook servers), Facebook were able to re-design the electricity supply system, doing away with some of the conversion processes and creating a unique 480V distribution system which provides 277V directly to each server, resulting in more efficient power usage. This system reduces power losses in the utility-to-server chain from an industry average of 21-27% down to Prineville’s 7.5%.
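A quick sketch of what that reduction in losses means for the power actually reaching the servers:

```python
def power_to_servers_kw(utility_kw: float, loss_fraction: float) -> float:
    """Power delivered to servers after distribution and conversion losses."""
    return utility_kw * (1 - loss_fraction)

typical = power_to_servers_kw(1000, 0.24)      # midpoint of the 21-27% industry range
prineville = power_to_servers_kw(1000, 0.075)  # Facebook's claimed 7.5% loss
```

Per megawatt drawn from the utility, Prineville delivers about 165 kW more to the servers – roughly 22% extra compute capacity for the same electricity bill.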

Finally, Facebook have significantly increased the operating temperature of the data center to 80.6F (27C) – which is the upper limit of the ASHRAE standards. They also confided that in their next data centre, currently being constructed in North Carolina, they expect to run it at 85F – this will save enormously on the costs of cooling. And they claim that the reduction in the number of parts in the data center means they go from 99.999% uptime, to 99.9999% uptime.

New Server design…

SAP’s new NetWeaver 7.3 has a nice energy efficiency story

Green Code

 

If, like me, you have been using technology for a while now, you will be used to the constant release-by-release bloating of software. The first time I installed Excel it was version 2.2 and at the time it fit comfortably on a 1.4MB 3.5″ floppy disk – remember floppy disks?

In a pleasant bucking of that trend, SAP’s Holger Faulhaber told me in a recent call that the latest version of their NetWeaver platform (v. 7.3) is a much leaner beast! While it is unlikely to fit on a floppy disk, it does deliver some significant performance wins, along with the simplified architecture and the functionality improvements you’d expect from an upgrade.

One of the main reasons for the improvements is SAP’s dropping of its two-stack approach in favour of a single Java stack for NetWeaver 7.3. This significantly reduces the amount of hardware which needs to be deployed and, because messages only need to be stored once compared to 3-7 times previously, delivers even more energy savings.

To demonstrate this, SAP carried out testing of their new NetWeaver Platform using the SAP Application Performance Standard (SAPS).

SAP defined a medium-sized landscape as being 37,500 SAPS for a NetWeaver Process Integration (PI) customer. Based on that they found that the savings potential for PI is in the region of:
– 60% less energy consumption or around 18,000 kWh/yr
– 60 tons of CO2 savings per landscape/year and
– €6.5k saving potential per landscape/year

The numbers for NetWeaver Portal 7.3 are for an SAP defined medium sized landscape of 30,000 SAPS. In that case, you see a savings potential of:
– 30% less energy consumption, 13,000 kWh/yr
– 6.5 tons of CO2 savings per landscape/yr and
– €2.6k saving potential/yr

While for NetWeaver Business Process Management 7.3 the potential savings for a 30,000 SAPS medium sized customer are:
– 57% less energy consumption, 24,000kWh/yr
– 12 tons of CO2 savings per landscape/yr and
– €6k saving potential per landscape/yr
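Translating kWh savings into euro and CO2 figures like those above depends entirely on the local electricity tariff and grid mix, which is why the ratios differ between the three landscapes. A sketch with assumed conversion factors (illustrative, not SAP's):

```python
def annual_savings(kwh_per_yr: float, eur_per_kwh: float, kg_co2_per_kwh: float):
    """Turn an annual energy saving into money saved (EUR) and CO2 avoided (tonnes)."""
    return kwh_per_yr * eur_per_kwh, kwh_per_yr * kg_co2_per_kwh / 1000

# Assumed factors: EUR 0.15/kWh tariff and 0.5 kg CO2/kWh grid intensity,
# applied to the PI landscape's 18,000 kWh/yr saving:
cost_eur, co2_t = annual_savings(18_000, 0.15, 0.5)
```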

The new software was tested on identical hardware to the previous version to rule out any efficiency gains from improved hardware, according to Holger. He went on to mention that SAP’s Quick Sizer [PDF] tool – which helps customers design their SAP landscape – has been updated for 7.3 so you don’t overspec your SAP installation.

“One of the main learnings for SAP from this exercise is that I/O is expensive. For current tasks, it is nearly 3 times more expensive than utilising the CPU”, according to Holger. “Because of this we are now telling developers not to write locks and not to write to the db – dropping locks increases performance”, he continued. It will be interesting to see if SAP can move developers towards writing more energy efficient code.

This is a big potential win for SAP customers. Now they can gain significant performance and energy improvements with a simple software upgrade, as opposed to having to buy new hardware. It’d be great to see more companies adopting this type of approach to software development.

You should follow me on Twitter here

Photo credit Marjan Krebelj (CC BY-SA)

Friday Green Numbers round-up for Feb 18th 2011

Green Numbers
And here is a round-up of this week’s Green numbers…

  1. How La Poste Saves $7 Million a Year In IT Energy Costs

    France’s La Poste manages 180,000 PCs that sat mostly idle, yet still used as much electricity as if they were fully engaged in a difficult computing problem.

    “The AVOB solution does what other solutions do by automatically putting the PC into low energy mode when inactive after a specified amount of time,” Charpentier explained. “That saved La Poste 50 percent on average. What AVOB does differently is to also automatically adapt power consumption of the PC depending on the task to save an additional 10 to 20 percent on the …

  2. Joule on Pace to Produce Solar Fuels at Productivities Far Exceeding Those of All Known Biofuel Processes

    Joule Unlimited, pioneer of Liquid Fuel from the Sun™, today supported the high-productivity potential of its production process with the publication of a detailed analysis and model of its breakthrough solar-to-fuels platform.

    Published by Photosynthesis Research, the peer-reviewed article examines Joule’s critical advances in solar capture and conversion, direct product synthesis and continuous product secretion, which collectively form a platform for renewable fuel and chemical production with yields up to 50X greater than the maximum potential of any process requiring biomass. In addition, the analysis counters prior assumptions about …

  3. Waste Management Announces Fourth Quarter and Full Year 2010 Earnings

    Waste Management, announced financial results for its fourth quarter and for the year ended December 31, 2010. Revenues for the fourth quarter of 2010 were $3.19 billion compared with $3.01 billion for the same 2009 period. Net income for the quarter was $281 million, or $0.59 per diluted share, compared with $315 million, or $0.64 per diluted share, for the fourth quarter of 2009. The Company noted several items that impacted results in the 2010 and 2009 fourth quarters. Excluding these items, net income would have been $287 million, or $0.60 per diluted share, in the fourth quarter of 2010 compared with $257 million, or $0.52 per diluted share, in the fourth quarter of 2009, an increase in earnings per diluted share of over 15%.

    For the full year 2010, the Company reported revenues of $12.52 billion compared with $11.79 billion for 2009. Earnings per diluted share were $1.98 for the full year 2010 compared with …

  4. Exxon, Shell Both Essentially Admit Peak Oil Is Upon Us – Or Will Be Soon

    Two today on peak oil and how the big oil companies are finally publicly (if quietly) coming around to what peak oil researchers have been saying for a while: It’s here, or will be shortly.

    First, Wall Street Journal highlights how ExxonMobil is having a hard time finding new oil and has had a hard time for a while now. For the past 10 years for every 100 barrels it’s extracted it’s only been able to find 95 more. Natural gas exploration on the other hand has been very successful–enter, fracking.

    Second, Raw Story sums up a report by Shell that at best …

  5. Climate change doubled likelihood of devastating UK floods of 2000

Sentilla thinks of data centers as data factories!

Data center
If you have been following this blog, you’ll know I have been profiling Data Center efficiency companies over the last few weeks. This week I take a look at Sentilla.

I talked to Sentilla’s CTO and co-founder, Joe Polastre, the other day and Joe told me that Sentilla came out of Berkeley where they had been looking at data analytics problems around large, potentially incomplete or inaccurate, streaming datasets. The challenge was how to turn that into a complete picture of what’s going on so people could make business decisions.

Sentilla takes an industrial manufacturing approach to data centers – in manufacturing you have power going in one side, and products and (often) waste heat coming out the other. In the same way, in data centers you have power going in one side, and coming out the other side you have the product (compute cycles) and waste heat. To optimise your data center you need to get the maximum compute (product) output with the minimum power in and the least waste heat generated. Sentilla thinks of data centers as data factories!

Unlike most of the data center people I have talked to, Sentilla don’t talk so much about energy savings. Instead they emphasise maximising performance – getting the most out of your existing data centers, your existing storage, your existing servers, your existing power supply. By far the greatest saving from deploying Sentilla, Joe claimed, is not the energy savings; those pale in comparison to the capital deferment savings gained from being able to delay the building of extra data center facilities by a number of years.

So how does Sentilla help?

Well Sentilla analyses the energy profile of every asset in the data center, whether metered or not, and makes recommendations to improve the planning and management of data center operations…

Power Assure automates the reduction of data center power consumption

Data centre
If you’ve been following this blog in the last couple of weeks you’ll have noticed that I have profiled a couple of data centre energy management companies – well, today it is the turn of Power Assure.

The last time I talked to Power Assure was two years ago and they were still very early stage. At that time I talked to co-founder and CTO, Clemens Pfeiffer, this time I spoke with Power Assure’s President and CEO, Brad Wurtz.

The spin that Power Assure put on their energy management software is that, not only do they offer their Dynamic Power Management solution which provides realtime monitoring and analytics of power consumption across multiple sites, but their Dynamic Power Optimization application automatically reduces power consumption.

How does it do that?

Well, according to Brad, clients put an appliance in each of the data centres they are interested in optimising (Power Assure’s target customers are large organisations with multiple data centres – government, financial services, healthcare, insurance, telcos, etc.). The appliance uses the management network to gather data – data may come from devices (servers, PDUs, UPSs, chillers, etc.) directly or, more frequently, from multiple existing databases (i.e. a Tivoli db, a BMS, an existing power monitoring system and/or inventory system) – and performs data centre analytics on those data.

The optimisation module links into existing system management software to measure and track energy demand on a per-application basis in realtime. It then calculates the amount of compute capacity required to meet the service level agreements of that application and adds a little bit of headroom…
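That calculation can be sketched as follows; the request rates, per-server capacity and headroom figure are all hypothetical:

```python
import math

def servers_needed(demand_rps: float, rps_per_server: float, headroom: float = 0.2) -> int:
    """Servers required to meet current demand at the SLA plus a safety margin;
    the rest of the fleet can be idled until load rises."""
    return math.ceil(demand_rps / rps_per_server * (1 + headroom))

active = servers_needed(1200, 100)  # 15 servers active, the remainder powered down
```

As demand falls overnight, the same formula yields a smaller active set, which is where the automatic power reduction comes from.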

Friday Green Numbers round-up for Feb 4th 2011

Green Numbers
And here is a round-up of this week’s Green numbers…

  1. Europe’s Energy

    Member States of the European Union have agreed on targets aimed at reducing greenhouse gas emissions by cutting energy consumption by 20% and increasing the share of renewables in the energy mix to 20% by 2020. The ‘Europe’s Energy’ project gives users a set of visual tools to put these targets into context and to understand and compare how progress is being made towards them in different countries.

  2. Survey results: Utilities executives on Energy Efficiency and the Smart Grid

    The survey asked 106 utility executives – the people that arguably know more about the energy supply and demand challenges our nation faces than anyone else – a range of questions on the smart grid, energy efficiency and related topics and issues.

    We issued a press release today with some of the highlights, but to help put this week’s news into context, we also wanted to share a full breakdown of the results. Nothing earth shattering, but worth keeping in mind as the week progresses…

  3. 10 Smart Grid Trends from Distributech

    The annual smart grid event Distributech kicked off in San Diego Tuesday morning and — as expected — unleashed a whole series of news from smart grid-focused firms. From new home energy management products, to plug-in car software, to distribution automation gear, this is a list of trends and news from the show.

  4. US Venture Capital Investment in Cleantech Grows to Nearly $4 Billion in 2010, an 8% Increase From 2009

    US venture capital (VC) investment in cleantech companies increased by 8% to $3.98 billion in 2010 from $3.7 billion in 2009 and deal total increased by 7% to 278, according to an Ernst & Young LLP analysis based on data from Dow Jones VentureSource. VC investment in cleantech in Q4 2010 reached $979 million with 72 financing rounds, flat in terms of deals and down 14% in terms of capital invested compared to Q4 2009…