IBM has announced the completion of its acquisition of The Weather Company’s B2B, mobile and cloud-based web properties: weather.com, Weather Underground, the Weather Company brand, and WSI, its global business-to-business brand.
At first blush this may not seem like an obvious pairing, but the Weather Company’s products are not just free apps for your smartphone; the company has specialised products for the media industry, the insurance industry, energy and utilities, government, and even retail. All of these verticals are traditional IBM customers.
Factor in that the Weather Company’s cloud platform takes in over 100GB of data per day from 2.2 billion weather forecast locations and produces over 300GB of added products for its customers, and it quickly becomes obvious that the platform is highly optimised for Big Data and the Internet of Things.
This platform will now serve as a backbone for IBM’s Watson IoT.
Watson, you will remember, is IBM’s natural language processing and machine learning platform which famously took on and beat two former champions on the quiz show Jeopardy. Since then, IBM have opened up APIs to Watson to allow developers to add cognitive computing features to their apps, and more recently IBM announced the Watson IoT Cloud “to extend the power of cognitive computing to the billions of connected devices, sensors and systems that comprise the IoT”.
Given Watson’s relentless moves to cloud and IoT, this acquisition starts to make a lot of sense.
IBM further announced that it will use its network of cloud data centres to expand Weather.com into five new markets: China, India, Brazil, Mexico and Japan, “with the goal of increasing its global user base by hundreds of millions over the next three years”.
With Watson’s deep learning abilities, and all that weather data, one wonders if IBM will be in a position to help scientists researching climate change. At the very least it will help the rest of us be prepared for its consequences.
New developments in AI and deep learning are being announced virtually weekly now by Microsoft, Google and Facebook, amongst others. This is a space which, it is safe to say, will completely transform how we interact with computers and data.
Against that backdrop, it is heartening to see some of the more enlightened cloud companies doing the right thing. Salesforce announced today its second renewable energy purchase agreement. The first, announced just last month, was the signing of a 12-year wind energy purchase agreement for 40MW of a new West Virginia wind farm, through a virtual power purchase agreement (VPPA). This wind farm is expected to generate 125,000MWh of wind energy annually.
Today’s news doubles down on that with the disclosure that Salesforce has signed a second energy agreement, this time with a new 24MW wind farm in Texas which is expected to generate 102,000MWh of electricity annually. Once the two wind farms are fully up and running, Salesforce will be buying 227,000MWh of electricity per annum.
To put this in context, according to its filings with the CDP, Salesforce’s total purchase of energy (electricity, fuel, heat, steam, and cooling) in 2015 was just under 152,000MWh. So Salesforce’s energy consumption can grow quite a bit by the time these two wind farms come fully online in December 2016, and still be well covered by their output.
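As a quick back-of-the-envelope check (a sketch using only the figures quoted above), the combined output of the two wind farms comfortably exceeds Salesforce's reported 2015 energy purchases:

```python
# Back-of-the-envelope check using the figures quoted above (MWh per year)
west_virginia_wind = 125_000      # expected annual output of the 40MW West Virginia farm
texas_wind = 102_000              # expected annual output of the 24MW Texas farm
salesforce_2015_energy = 152_000  # total energy purchased in 2015 (just under), per CDP filing

total_wind = west_virginia_wind + texas_wind    # 227,000 MWh
coverage = total_wind / salesforce_2015_energy  # ~1.49

print(f"Combined wind output: {total_wind:,} MWh per year")
print(f"Coverage of 2015 energy purchases: {coverage:.0%}")  # roughly 149%
```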
If we compare this to a couple of Salesforce’s competitors*:
Microsoft purchases 3,570,438MWh of energy, of which 3,240,620MWh comes from clean energy sources (90.8% clean), and
SAP purchases 918,320MWh of energy, of which 346,885MWh comes from clean energy sources (37.8% clean).
So, barring any huge spikes in Salesforce’s energy requirements this year, it looks like they are on track to be the cleanest of the large cloud CRM providers.
In case you are interested in other cloud computing companies’ purchases of renewable energy, I charted a few of them based on their submissions to the CDP for 2015 – see below.
*I tried to find energy and emissions data for Salesforce competitor Workday, but as yet they have not reported their data to the CDP. When they do, I will update this post.
“Oh boy, I just love buying inkjet cartridges”. Said no-one. Ever. They are expensive, they run out just when we need them most, or worse they clog, and there is never a recycling bin nearby when it is time to dispose of them.
So what’s the alternative? Laser printers? Well, they are good for black and white printing, but they have a higher upfront cost, colour laser printers are very expensive, and they don’t give great results.
It took a few years, but Epson recently launched their EcoTank range of printers, and they are just that: printers with refillable ink tanks into which you pour the ink, as opposed to using cartridges.
The printers ship with enough ink for about 4,000 pages of printing, after which you can buy refills, which are surprisingly inexpensive (around $60 on Amazon for another set of four bottles, yielding approximately another 4,000 pages).
The EcoTank printers cost a bit more upfront than printers that take cartridges (they sell on Amazon for $399), but given that the refills are so inexpensive, that difference is quickly made up.
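A rough cost-per-page sketch makes the point. The EcoTank figures come from above; the cartridge printer price and cartridge cost-per-page are purely illustrative assumptions, so substitute your own printer's numbers:

```python
# Rough break-even sketch. EcoTank figures are from above; the cartridge
# figures are illustrative assumptions only - plug in your own numbers.
ecotank_price = 399.0        # EcoTank printer price on Amazon (USD)
refill_price = 60.0          # set of four ink bottles (USD)
pages_per_refill = 4_000     # approximate page yield per refill set

cartridge_printer_price = 100.0   # assumed price of a typical cartridge printer
cartridge_cost_per_page = 0.10    # assumed all-in cartridge cost per page

ecotank_cost_per_page = refill_price / pages_per_refill   # $0.015 per page

# Pages needed before the EcoTank's higher upfront cost pays for itself
breakeven_pages = (ecotank_price - cartridge_printer_price) / (
    cartridge_cost_per_page - ecotank_cost_per_page
)
print(f"EcoTank ink cost per page: ${ecotank_cost_per_page:.3f}")
print(f"Break-even at roughly {breakeven_pages:,.0f} pages")  # ~3,500 pages with these assumptions
```

With those (entirely hypothetical) cartridge figures, the EcoTank pays for itself before its bundled ink even runs out; with cheaper cartridges the break-even point simply moves further out.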
And they are far more convenient: they need to be refilled far less often, and because there are no cartridges, there is no hunting for specialised recycling bins after topping up the ink.
The utilities industry has typically been change-averse, and often for good reasons. But with the technological advances of the past few years, the low-carbon imperative, and pressure from customers, utilities are going to have to figure out how to disrupt their own business, or they will themselves be disrupted.
I gave the opening keynote at this year’s SAP for Utilities event in Huntington Beach on the topic of the Convergence of IoT and Energy (see the video above). Interestingly, with no coordination beforehand, all the main speakers referred to the turmoil coming to the utilities sector, and each independently referenced Tesla and Uber as examples of tumultuous changes happening in other industries.
What are the main challenges facing the utilities industry?
As noted here previously, due to the Swanson effect the cost of solar is falling all the time, with no end in sight. The result will be more and more distributed generation being added to the grid, which utilities will have to manage. Added to that, utilities will see reduced income from electricity sales as more and more people generate their own.
On top of that, with the recent launch of its Powerwall product, Tesla ensured that in-home energy storage is set to become a thing.
Battery technology is advancing at a dizzying pace, and as a consequence:
1) the cost of lithium-ion batteries is dropping constantly, and
2) the energy density of the batteries is increasing all the time.
(Charts courtesy of Prof Maarten Steinbuch, Director Graduate Program Automotive Systems, Eindhoven University of Technology)
With battery prices falling, solar prices falling, and battery energy density increasing, there is a very real likelihood that many people will opt to go “off-grid”, or will drastically reduce the amount of electricity they buy from their utility.
How will utility companies deal with this?
There are many possibilities but, as we have noted here previously, an increased focus by utilities on energy services seems like an obvious one. This is especially true now, given the vast quantities of data that smart meters are providing to utility companies, and the fact that the Internet of Things (IoT) is ensuring that a growing number of our devices are smart and connected.
Further, with the cost of (solar) generation falling, I can foresee a time when utility companies move to the landline model: you pay a set amount per month for the connection, and your electricity is free after that. Given that, it is all the more imperative that utility companies figure out how to disrupt their own business, if only to find alternative revenue streams to ensure their survival.
The cost of solar power is falling in direct relation to the volume of solar modules being produced. With no end in sight to this price reduction, we should soon be in a world where energy is in abundance.
Moore’s Law, the observation that the number of transistors on a chip doubles approximately every two years, has an equivalent in solar power called Swanson’s Law. Swanson’s Law states that the price of solar panels tends to drop 20% for every doubling of cumulative shipped volume. This creates a positive feedback loop: lower prices mean more solar PV is installed, which leads to lower prices still, and so on. Consequently, as the price of solar panels drops (see top graph), the amount of installed solar globally has increased exponentially (see chart on right).
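Expressed as a formula, Swanson's Law is a simple learning curve. The sketch below is illustrative only; the $1.00/W starting point is an arbitrary assumption, not a market figure:

```python
from math import log2

# Swanson's Law as a simple function: module prices fall ~20% for every
# doubling of cumulative shipped volume.
def swanson_price(initial_price, initial_volume, cumulative_volume, learning_rate=0.20):
    """Estimated module price once cumulative_volume has shipped."""
    doublings = log2(cumulative_volume / initial_volume)
    return initial_price * (1 - learning_rate) ** doublings

# Illustrative numbers only: starting from $1.00/W, shipping 8x the cumulative
# volume means three doublings, so the price falls to 0.8^3, i.e. ~$0.51/W.
print(swanson_price(1.00, 1, 8))  # 0.512...
```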
This law has held true since 1977, and according to the Economist:
technological developments that have been proved in the laboratory but have not yet moved into the factory mean Swanson’s law still has many years to run
This positive feedback loop is manifesting itself in China, where in May of this year the National Development and Reform Commission announced that China would target a more than tripling of its installed solar capacity to 70 gigawatts (GW) by 2017.
To put those numbers in perspective, according to the International Energy Agency’s Snapshot of Global PV Markets 2014 report, the total amount of solar PV installed globally reached 177GW at the end of 2014.
And it is not just Asia; Brazil and the US this week reached a historic climate agreement that will require:
the US to triple its production of wind and solar power and other renewable energies. Brazil will need to double its production of clean energy. The figures do not include hydro power.
And according to GTM Research, by 2020 Europe will install 42GW and account for 31% of the global solar market.
To fully appreciate the significance of this price, it is necessary to understand that the price of electricity generated from natural gas – which produces 99% of the UAE’s electricity – stands at 9 cents per kWh. So, in the United Arab Emirates at least, solar power currently costs 65% of the next cheapest form of electricity production. And its price will continue to decline for the foreseeable future.
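Working backwards from those two figures alone, here is the implied solar price (a rough illustration, nothing more):

```python
# Rough implication of the figures above
gas_price = 9.0                   # cents per kWh, gas-fired electricity in the UAE
solar_share_of_gas_price = 0.65   # solar quoted at 65% of the next cheapest source
implied_solar_price = gas_price * solar_share_of_gas_price
print(f"Implied solar price: ~{implied_solar_price:.2f} cents per kWh")  # ~5.85 cents
```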
So, solar power is cheap (in some cases 65% of the cost of its nearest competitor), its price is continually dropping, and with hundreds of gigawatts of orders coming into the pipeline, the price reductions may even accelerate.
In this scenario we are headed into a world where solar power rates, for all intents and purposes, approach zero. In that situation, the question becomes, what do we do in a world where energy is in abundance?
Mobile industry consortium GreenTouch today released tools and technologies which, they claim, have the potential to reduce the energy consumption of communication networks by 98%.
The world is now awash with mobile phones.
According to Ericsson’s June 2015 mobility report [PDF warning], the total number of mobile subscriptions globally in Q1 2015 was 7.2 billion. By 2020, that number is predicted to increase by another 2 billion, to 9.2 billion.
Of those 7.2 billion subscriptions, around 40% are associated with smartphones, and this number is increasing daily. In fact, the report predicts that by 2016 the number of smartphone subscriptions will surpass those of basic phones, and smartphone numbers will reach 6.1 billion by 2020.
When you add to that the number of connected devices now on mobile networks (M2M, consumer electronics, laptops/tablets/wearables), we are looking at roughly 25 billion connected devices by 2020.
That’s a lot of data being moved around the networks. And, as you would expect, that number is increasing at an enormous rate as well. There was 55% growth in data traffic between Q1 2014 and Q1 2015, and there is expected to be a 10x growth in smartphone traffic between 2014 and 2020.
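To put that 10x figure in annual terms, a quick calculation of the implied compound growth rate:

```python
# Implied annual growth rate behind "10x smartphone traffic between 2014 and 2020"
growth_factor = 10   # total growth over the period
years = 6            # 2014 to 2020
annual_growth = growth_factor ** (1 / years) - 1
print(f"Implied compound annual growth: {annual_growth:.0%}")  # ~47% per year
```

That works out at roughly 47% a year, broadly in line with the 55% growth observed between Q1 2014 and Q1 2015.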
Fortunately, five years ago an industry organisation called GreenTouch was created by Bell Labs and other stakeholders in the space, with the objective of reducing mobile networking’s footprint. In fact, the goal of GreenTouch when it was created was to come up with technologies to reduce the energy consumption of mobile networks by a factor of 1,000 by 2015.
Today, June 18th, in New York, they are announcing the results of those five years of work: they have come up with ways for mobile operators to reduce their energy consumption, not by the 1,000x they were aiming for, but by 10,000x!
The consortium also announced:
research that will enable significant improvements in other areas of communications networks, including core networks and fixed (wired) residential and enterprise networks. With these energy-efficiency improvements, the net energy consumption of communication networks could be reduced by 98%
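To relate the headline numbers: an N-fold efficiency improvement corresponds to using 1/N of the original energy for the same traffic, so the 10,000x mobile result means 0.01% of today's energy per bit, while the 98% net reduction across all network types works out to roughly a factor of 50. A quick conversion, for illustration:

```python
# Converting an "N times more efficient" claim into a percentage reduction, and back
def reduction_pct(factor):
    """Percentage energy reduction implied by an N-fold efficiency improvement."""
    return (1 - 1 / factor) * 100

def improvement_factor(reduction_percent):
    """N-fold improvement implied by a percentage reduction."""
    return 1 / (1 - reduction_percent / 100)

print(reduction_pct(1_000))     # 99.9  -> GreenTouch's original goal
print(reduction_pct(10_000))    # 99.99 -> the announced result for mobile networks
print(improvement_factor(98))   # ~50   -> the 98% net figure is roughly a factor of 50
```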
And today GreenTouch also released two tools for organisations and stakeholders interested in creating more efficient networks, GWATT and Flexible Power Model.
They went on to announce some of the innovations which led to this potential huge reduction in mobile energy consumption:
Beyond Cellular Green Generation (BCG2) — This architecture uses densely deployed small cells with intelligent sleep modes and completely separates the signaling and data functions in a cellular network to dramatically improve energy efficiency over current LTE networks.
Large-Scale Antenna System (LSAS) — This system replaces today’s cellular macro base stations with a large number of physically smaller, low-power and individually-controlled antennas delivering many user-selective data beams intended to maximize the energy efficiency of the system, taking into account the RF transmit power and the power consumption required for internal electronics and signal processing.
Distributed Energy-Efficient Clouds – This architecture introduces a new analytic optimization framework to minimize the power consumption of content distribution networks (the delivery of video, photo, music and other larger files – which constitutes over 90% of the traffic on core networks) resulting in a new architecture of distributed “mini clouds” closer to the end users instead of large data centers.
Green Transmission Technologies (GTT) – This set of technologies focuses on the optimal tradeoff between spectral efficiency and energy efficiency in wireless networks, optimizing different technologies, such as single user and multi-user MIMO, coordinated multi-point transmissions and interference alignment, for energy efficiency.
Cascaded Bit Interleaving Passive Optical Networks (CBI-PON) – This advancement extends the previously announced Bit Interleaving Passive Optical Network (BiPON) technology to a Cascaded Bi-PON architecture that allows any network node in the access, edge and metro networks to efficiently process only the portion of the traffic that is relevant to that node, thereby significantly reducing the total power consumption across the entire network.
Now that these innovations have been released, mobile operators hoping to reduce their energy costs will be looking closely at how they can integrate these new tools and technologies into their networks. For many, realistically, the first opportunity to architect them in will be with the rollout of 5G networks post-2020.
Having met (and exceeded) its five year goal, what’s next for GreenTouch?
I put that question to GreenTouch Chairman Thierry Van Landegem on the phone earlier in the week. He replied that the organisation is now looking to set a new, bold goal. They are looking at the energy efficiency of areas such as cloud, network virtualisation, and the Internet of Things, and will likely announce their next objective early next year.
We attended this year’s SapphireNow event (SAP’s customer and partner conference) in Orlando and were very impressed with some of the advances SAP and their ecosystem are making in the field of healthcare.
Why is this important?
Healthcare has been stagnant for many decades now when it comes to technological disruption. Go to most hospitals today and you will still see doctors using paper and clipboards for their patient notes. Don’t just take our word for it: in her highly anticipated 2015 Internet Trends report, Mary Meeker clearly identified that the impact of the Internet on healthcare is far behind most other sectors.
But this is changing, and changing rapidly. The changes coming to the healthcare sector will be profound, and will happen faster than anyone is prepared for.
And one of the main catalysts of this change has been the collapse in the cost of gene sequencing over the last ten years. See that collapse charted in the graph to the right, and note that the y-axis, showing the cost of sequencing, uses a logarithmic scale. The cost of sequencing is falling far faster than the price of the processing power required to analyse the genetic data, which means the cost of sequencing is now influenced more by the cost of data analysis than by the cost of data collection. This has been a remarkable turn of events, especially given the first human genome was only published fourteen years ago, in 2001.
The advances in data analytics are picking up pace too. In-memory databases, such as SAP’s HANA, and cognitive computing platforms, such as IBM’s Watson, are contributing enormously to this.
To get an idea of just how much the analytics are advancing, watch the analysis of data from 100,000 patients by Prof Christof von Kalle, director of Heidelberg’s National Center for Tumor Diseases. Keep in mind that each of the 100,000 patients has 3bn base pairs in their genome, and he’s analysing them in real time (Prof von Kalle’s demo starts at 1:00:03 in the video, and lasts a little over 5 minutes).
As he says at the conclusion, two years ago a similar study conducted over several years by teams of scientists was published as a paper in the journal Nature. That’s an incredible rate of change.
IBM are also making huge advances in this field with their cognitive computing engine, Watson. In a recent announcement, IBM detailed how they have teamed up with fourteen North American cancer institutes to analyse the DNA of their patients to gain insights into the cancers involved, and to speed up the era of personalised medicine.
Personalised medicine is where a patient’s DNA is sequenced, as is the DNA of their tumour (in the case of cancer), and an individualised treatment, specific to the genotype of their cancer is designed and applied.
This differs from the precision medicine offerings available today from Molecular Health, and discussed by Dr Alexander Picker in the video at the top of this post.
Precision medicine is where existing treatments are analysed to see which is best equipped to tackle a patient’s tumour, given their genotype, and the genotype of their cancer. One thing I learned from talking to Dr Picker at Sapphirenow is that cancers used to be classified by their morphology (lung cancer, liver cancer, skin cancer, etc.) and treated accordingly. Now, cancers are starting to be classified according to their genotype, not their morphology, and tackling cancers this way is a far more effective form of therapy.
Finally, SAP and IBM are far from alone in this space. Google, Microsoft and Apple are also starting to look seriously at health.
With all this effort being poured into personalised medicine, I think it is safe to say Ms. Meeker’s 2016 slide featuring health will look a little different.
Full disclosure – SAP paid my travel and accommodation to attend their Sapphirenow event
Against this background, it is impressive to see Equinix, a global provider of carrier-neutral data centers (with a fleet of over 100 data centers) and internet exchanges, announce a 1MW Bloom Energy biogas fuel cell project at its SV5 data center in Silicon Valley. Biogas is methane captured from decomposing organic matter, such as that from landfills or animal waste.
Why would Equinix do this?
Well, the first phase of California’s cap and trade program for CO2 emissions commenced in January 2013, and this could, in time, lead to increased costs for electricity. Indeed, in their 2014 SEC filing [PDF], Equinix note that:
The effect on the price we pay for electricity cannot yet be determined, but the increase could exceed 5% of our costs of electricity at our California locations. In 2015, a second phase of the program will begin, imposing allowance obligations upon suppliers of most forms of fossil fuels, which will increase the costs of our petroleum fuels used for transportation and emergency generators.
We do not anticipate that the climate change-related laws and regulations will force us to modify our operations to limit the emissions of GHG. We could, however, be directly subject to taxes, fees or costs, or could indirectly be required to reimburse electricity providers for such costs representing the GHG attributable to our electricity or fossil fuel consumption. These cost increases could materially increase our costs of operation or limit the availability of electricity or emergency generator fuels.
In light of this, self-generation using fuel cells looks very attractive, both from the point of view of energy cost stability, and reduced exposure to increasing carbon related costs.
On the other hand, according to today’s announcement, Equinix already gets approximately 30% of its electricity from renewable sources, and it plans to increase this to 100% “over time”.
Even better than that, Equinix is 100% renewably powered in Europe despite its growth. So Equinix is walking the walk in Europe, at least, and has a stated aim to go all the way to 100% renewable power.
What more could Equinix do?
Well, two things come to mind immediately:
1) Set an actual hard target date for sourcing 100% of its power from renewables, and
2) Start reporting all of its emissions to the CDP (and the SEC).
Given how important a player Equinix is in the global internet infrastructure, the sooner we see them hit their 100% target, the better for all.
After returning from IBM’s InterConnect conference recently, we chided IBM for aping Amazon’s radical opaqueness concerning their cloud emissions, and for their lack of innovation concerning renewables.
However, some better news emerged in the last few days.
The White House last week hosted a roundtable of some of the largest Federal suppliers to discuss their GHG reduction targets, or, if they didn’t have any, to create and disclose them.
Coming out of that roundtable, IBM announced its commitment to procure electricity from renewable sources for 20% of its annual electricity consumption by 2020. To do this, IBM will contract over 800 gigawatt-hours (GWh) per year of renewable electricity.
And IBM further committed to:
Reduce CO2 emissions associated with IBM’s energy consumption 35% by year-end 2020 against base year 2005 adjusted for acquisitions and divestitures.
To put this in context, in the energy conservation section of its 2013 corporate report, IBM reports that it sourced 17% of its electricity from renewable sources in 2013.
It is now committing to increase that from the 2013 figure of 17% to 20% by 2020. Hmmm.
IBM has committed to purchasing 800GWh of renewable electricity per year by 2020. How does that compare to some of its peers?
In 2014, the EPA reported that Intel purchased 3,102GWh of renewable electricity, and Microsoft purchased 2,488GWh, which in both cases amounted to 100% of their total US electricity use.
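Running the numbers on what that commitment implies (a rough sketch using only the figures above):

```python
# What IBM's commitment implies, versus its peers (all figures from above, GWh per year)
ibm_renewable_2020 = 800      # renewable electricity IBM will contract by 2020
ibm_renewable_share = 0.20    # that 800GWh is to cover 20% of annual consumption

implied_ibm_consumption = ibm_renewable_2020 / ibm_renewable_share  # 4,000 GWh

intel_renewables = 3_102      # 100% of Intel's US electricity use (EPA, 2014)
microsoft_renewables = 2_488  # 100% of Microsoft's US electricity use (EPA, 2014)

print(f"Implied IBM annual consumption: {implied_ibm_consumption:,.0f} GWh")
print(f"Intel renewable purchases:      {intel_renewables:,} GWh (100% of US use)")
print(f"Microsoft renewable purchases:  {microsoft_renewables:,} GWh (100% of US use)")
```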
In light of this, 800GWh amounting to just 20% of total electricity use looks a little under-ambitious.
On the other hand, at least IBM are doing something.
Amazon, as noted earlier, have steadfastly refused to do any reporting of their energy consumption or their emissions. This may well be, at least in part, because Amazon doesn’t sell enough to the government to appear on the US Federal government’s Greenhouse Gas Management Scorecard for significant suppliers.
What is ResearchKit? Apple’s SVP of Operations, Jeff Williams, described it as a framework for medical researchers to create and deploy mobile apps which collect medical data from phone users (with their permission) and share it with the researchers.
Why is this important? Previously it has proven difficult for research organisations to recruit volunteers for research studies, and the data from such studies is often collected, at best, quarterly.
With this program, Apple hopes to help researchers more easily attract volunteers, and collect their information far more frequently (up to once a second), yielding far richer data.
The platform itself launches next month, but already there are 5 apps available, targeting Parkinson’s, diabetes, heart disease, asthma, and breast cancer. These apps have been developed by medical research organisations, in conjunction with Apple.
The success of this approach can be seen already in this tweet:
After six hours we have 7406 people enrolled in our Parkinson’s study. Largest one ever before was 1700 people. #ResearchKit
I downloaded mPower, the app for Parkinson’s to try it out, but for now, they are only signing up people who are based in the US.
As well as capturing data for the researchers, mPower also presents valuable information to the user, tracking gait and tremor, and seeing if they improve over time, when combined with increased exercise. So the app is a win both for the research organisations, and for the users too.
Apple went to great pains to stress that the user is in complete control over who gets to see the data. And Apple themselves never get to see your data.
This is obviously a direct shot at Google, and its advertising platform’s need to see your data. Expect to hear this mantra repeated more and more by Apple in future launches.
This focus on privacy, along with Apple’s aggressive stance on fixing security holes, and defaulting to encryption on its devices, is becoming a clear differentiator between Apple and Android (and let’s face it, in mobile, this is a two horse race, for now).
Finally, Williams concluded the launch by saying Apple wants ResearchKit on as many devices as possible. Consequently, Apple are going to make ResearchKit open source. It remains to be seen which open source license they will opt for.
But open sourcing ResearchKit is a very important step, as it lends transparency to the privacy and security which Apple say is built in, as well as validating Apple’s claim that they don’t see your data.
And it also opens ResearchKit up to other mobile platforms to use (Android, Windows, Blackberry), vastly increasing the potential pool of participants for medical research.