Tag: Open Source

Apple launches ResearchKit – secure, private, open source medical research


Apple announced a new initiative at its Spring Forward event yesterday – ResearchKit.

What is ResearchKit? Apple’s SVP of Operations, Jeff Williams, described it as a framework for medical researchers to create and deploy mobile apps which collect medical data from phone users (with their permission) and share it with the researchers.

Why is this important? Previously it has proven difficult for research organisations to secure volunteers for research studies, and the data from such studies is often gathered, at best, quarterly.

With this program, Apple hopes to help researchers more easily attract volunteers, and collect their information far more frequently (up to once a second), yielding far richer data.

The platform itself launches next month, but already there are 5 apps available, targeting Parkinson’s, diabetes, heart disease, asthma, and breast cancer. These apps have been developed by medical research organisations, in conjunction with Apple.

The success of this approach can be seen already in this tweet:

After six hours we have 7406 people enrolled in our Parkinson’s study. Largest one ever before was 1700 people. #ResearchKit

— John Wilbanks (@wilbanks) March 10, 2015

I downloaded mPower, the Parkinson’s app, to try it out, but for now they are only signing up people based in the US.

As well as capturing data for the researchers, mPower also presents valuable information to the user, tracking gait and tremor and showing whether they improve over time when combined with increased exercise. So the app is a win both for the research organisations and for the users.

Apple Does Not See Your Data

Apple went to great pains to stress that the user is in complete control over who gets to see the data. And Apple itself never gets to see your data.

This is obviously a direct shot at Google, and its advertising platform’s need to see your data. Expect to hear this mantra repeated more and more by Apple in future launches.

This focus on privacy, along with Apple’s aggressive stance on fixing security holes and defaulting to encryption on its devices, is becoming a clear differentiator between Apple and Android (and let’s face it, in mobile, this is a two-horse race, for now).

ResearchKit Open Source

Finally, Williams concluded the launch by saying Apple wants ResearchKit on as many devices as possible. Consequently, Apple are going to make ResearchKit open source. It remains to be seen which open source license they will opt for.

But open sourcing ResearchKit is a very important step, as it lends transparency to the privacy and security which Apple say are built in, as well as validating Apple’s claim that they don’t see your data.

And it also opens ResearchKit up to other mobile platforms to use (Android, Windows, Blackberry), vastly increasing the potential pool of participants for medical research.

We have documented on GreenMonk numerous times how Big Data and analysis tools are revolutionising health care.

Now we are seeing mobile getting in on the action too. And how.

Technology for Good – episode thirty four with Salesforce’s John Taschek

Welcome to episode thirty four of the Technology for Good hangout. In this week’s episode our guest was Salesforce SVP of Strategy, John Taschek. John and I are both longtime members of the Enterprise Irregulars, but this was the first time we had had a conversation outside of email!

Some of the more fascinating stories we looked at on the show included a very successful Kickstarter campaign for a small router which can completely anonymise your internet activity, Lockheed Martin announcing that they’ve made a breakthrough in nuclear fusion technology, and Satya Nadella’s response to his gaffe last week about women seeking a raise.

Here is the full list of stories that we covered in this week’s show:

 

Climate

Energy

Hardware

Internet of Things

Wearables

Mobility

Comms

Privacy

Open Source

Sustainability

(Cross-posted @ GreenMonk: the blog)

Use open source platforms to find cloud computing’s energy and emissions footprint

 

Regular GreenMonk readers will be very aware that I am deeply skeptical about claims that Cloud Computing is Green (or even energy efficient), and that I regularly highlight the significant carbon, water and biodiversity effects cloud computing can have.

One of the biggest issues with any claims of Cloud Computing being energy efficient, or Green, is the lack of transparency from the Cloud Computing providers. None of them are publishing any data on the energy consumption or emissions of their Cloud infrastructure. Without data to back them up, any claims of Cloud Computing being efficient are worthless.

Last week, while at the RackSpace EMEA Analyst day, we were given a potted history of OpenStack, RackSpace’s Cloud Computing platform. OpenStack was jointly developed by NASA and RackSpace, who open-sourced it under an Apache License in July 2010.

Anyone can download OpenStack and use it to create and host Cloud Computing solutions. Prominent OpenStack users include NASA, RackSpace (not surprisingly), AT&T, Deutsche Telekom, HP and IBM.

What has this got to do with Cloud Computing and energy efficiency, I hear you ask?

Well, it occurred to me during the analyst day that, because OpenStack is open source, anyone can fork it and write a version with built-in energy and emissions reporting. What would be really cool is if this functionality, once written, became part of the core distribution – then anyone deploying OpenStack would have it by default…
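
To make the idea a little more concrete, here is a minimal, hypothetical sketch of what such a reporting component might look like – it simply accumulates energy from periodic power readings and converts it to emissions using a configurable grid carbon-intensity factor. The class and function names are invented for illustration and are not part of OpenStack.

```python
# Hypothetical sketch only - not OpenStack code. Names are illustrative.
import time


class EnergyEmissionsReporter:
    """Accumulates energy use and converts it to CO2 emissions."""

    def __init__(self, grid_kg_co2_per_kwh=0.5):
        # Grid carbon intensity varies by region; 0.5 kg CO2/kWh is a placeholder.
        self.grid_kg_co2_per_kwh = grid_kg_co2_per_kwh
        self.kwh_total = 0.0

    def record_sample(self, watts, interval_seconds):
        """Add the energy used during one sampling interval."""
        self.kwh_total += (watts * interval_seconds) / 3_600_000.0

    @property
    def kg_co2_total(self):
        return self.kwh_total * self.grid_kg_co2_per_kwh


def poll_host(read_power_watts, reporter, interval_seconds=1, samples=60):
    """Poll a host's power meter (e.g. via IPMI) and feed the reporter."""
    for _ in range(samples):
        reporter.record_sample(read_power_watts(), interval_seconds)
        time.sleep(interval_seconds)


if __name__ == "__main__":
    reporter = EnergyEmissionsReporter(grid_kg_co2_per_kwh=0.45)
    # Fake power meter returning a constant 250 W, for demonstration.
    poll_host(lambda: 250.0, reporter, interval_seconds=1, samples=5)
    print(f"{reporter.kwh_total:.6f} kWh, {reporter.kg_co2_total:.6f} kg CO2")
```

Exposing numbers like these per tenant, alongside the usual usage metering, is all it would take for Cloud providers to start backing up their efficiency claims with actual data.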

Facebook open sources building an energy efficient data centre

Facebook’s new custom-built Prineville Data Centre

(Photo credit: Facebook’s Chuck Goolsbee)

Back in 2006 I was the co-founder of a Data Centre in Cork called Cork Internet eXchange. We decided, when building it out, that we would design and build it as a hyper energy-efficient data centre. At the time, I was also heavily involved in social media, so I had the crazy idea: if we are building out this data centre to be extremely energy-efficient, why not open source it? So we did.

We used blogs, Flickr and video to show everything from the arrival of the builders on-site to dig out the foundations, right through to the installation of customer kit and beyond. This was a first. As far as I know, no-one had done this before and, to be honest, no-one since has replicated it. Until today.

Today, Facebook is lifting the lid on its new custom-built data centre in Prineville, Oregon.

Not only are they announcing that their new data centre is now online, but they are open sourcing its design and specifications, and even telling people who their suppliers were, so anyone (with enough capital) can approach the same suppliers and replicate the data centre.

Facebook are calling this the Open Compute Project and they have released a fact sheet [PDF] with details on their new data centre and server design.

I received a pre-briefing from Facebook yesterday where they explained the innovations which went into making their data centre so efficient and, boy, have they gone to town on it.

Data centre infrastructure
On the data centre infrastructure side of things, building the facility in Prineville, Oregon (a high desert area 3,200 ft above sea level, with mild temperatures) means they will be able to take advantage of a lot of free cooling. Where they can’t use free cooling, they will utilise evaporative cooling to cool the air circulating in the data centre room. This means they won’t have any chillers on-site, which will be a significant saving in capital costs, in maintenance and in energy consumption. And in the winter, they plan to take the warm return air from the servers and use it to heat their offices!

By moving from centralised UPS plants to 48V localised UPSs, each serving six racks (around 180 Facebook servers), Facebook were able to re-design the electricity supply system, doing away with some of the conversion processes and creating a unique 480V distribution system which provides 277V directly to each server, resulting in more efficient power usage. This system reduces power losses in the utility-to-server chain from an industry average of 21-27% down to Prineville’s 7.5%.
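
A rough back-of-the-envelope calculation, using only the loss figures quoted above, shows what that difference means in practice (the 1 MW utility draw is just an assumed figure for illustration):

```python
# Back-of-the-envelope comparison using the quoted loss figures
# (21-27% industry average vs 7.5% at Prineville).

def delivered_kw(utility_kw, loss_fraction):
    """Power actually reaching the servers after distribution and conversion losses."""
    return utility_kw * (1 - loss_fraction)

utility_kw = 1000.0                           # assumed 1 MW drawn from the utility
industry = delivered_kw(utility_kw, 0.24)     # mid-point of the 21-27% range
prineville = delivered_kw(utility_kw, 0.075)

print(f"Industry average: {industry:.0f} kW delivered per 1,000 kW drawn")
print(f"Prineville:       {prineville:.0f} kW delivered per 1,000 kW drawn")
print(f"Extra useful power at Prineville: {100 * (prineville / industry - 1):.1f}%")
```

In other words, for the same draw from the grid, roughly a fifth more power reaches the servers instead of being lost along the way.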

Finally, Facebook have significantly increased the operating temperature of the data centre to 80.6°F (27°C) – the upper limit of the ASHRAE standards. They also confided that in their next data centre, currently being constructed in North Carolina, they expect to run it at 85°F – this will save enormously on the costs of cooling. And they claim that the reduction in the number of parts in the data centre means they go from 99.999% uptime to 99.9999% uptime.

New Server design…

On Open Data, Open Source, UK Libel Law and Evidence-based Sustainability

“When the facts change, I change my mind.  What do you do, sir?” – John Maynard Keynes

As is often the case, someone asks for a written answer to a question, but then fails to use the material. The great thing about blogs is that they make it very easy to make sure such content isn’t wasted. So here are some thoughts on the GreenMonk mission and sustainability more broadly.

We set up GreenMonk with the explicit intention of lobbying for open data and open source for better environmental outcomes.

Too much of science today has been privatised, or else unhelpfully politicised. Private sector companies hide evidence that doesn’t suit their goals. The right to kill a piece of research is quite common. It happens in the industry analyst sector too – some IT vendors demand the right to kill research they disagree with before signing a contract with a firm. Needless to say, RedMonk doesn’t sign up to these contracts.

As we have seen in East Anglia, however, climate scientists can also massage figures to suit the models they’re putting forward. It’s not just private firms that have an agenda.

But science should be about open, shared peer review of the data and the associated theories. Without open, uncensored science we can’t solve the world’s pressing environmental problems.

In the UK, libel law is regularly abused to shut down dissenting voices. It’s not just randy footballers that try to abuse the law. Pushing back against the status quo are organisations such as Sense About Science, which is backing the National Petition for Libel Law Reform.

I just want to make it clear – science needs to be open to peer review, whether privately or publicly funded. Organisations such as the IPCC need to be about science first, and lobbying second.

So that’s science and open data. What about sharing the source code?

Makers and doers are often “hackers” working on shared problems with shared tools. Homecamp, for example, is a group of people working on home automation to reduce home energy footprints – some of the people in the community work in sustainability-related firms such as CurrentCost and Pachube, but by no means all of them. We need to hack to experiment with this stuff before it can be packaged and rolled out to the mainstream.

Tim O’Reilly talks about Alpha Geeks as leading-edge indicators of the future. Generally they are open source oriented, because they like to get their hands dirty and make things.

The data needs to be open, the source code needs to be open, the barriers to entry need to be lowered – if we are to build a low emissions, low pollution future. Real science is an architecture of participation.


Is Skype bucking the Open trend?

This open stuff is really taking off!

Google has announced Android, their open source mobile phone platform; OpenID 2.0 has been launched; and even AT&T are announcing that they are opening up their networks!

Against that backdrop I was surprised to hear today that Skype have decided to eviscerate their Skype Developer Program (SDP). The SDP is responsible for Skype’s APIs.

Paul Amery (the director), Lester Madden (product manager), Romain Bertrand and others from marketing were all reportedly axed today. In one fell swoop, Skype appears to have culled half of the developer program.

This would appear to be related to Niklas’ departure. The new management obviously want to send out a message to developers: “We are not interested in open dev.”

Obviously Skype know something about the folly of building extensible platforms that eludes the rest of us!

UPDATE – I see Andy Abramson has picked up on this story too.

Microsoft will Open Source Windows (or die!)

I have said on a number of occasions that Microsoft should open source their Windows operating system (and their Internet Explorer).

However, it bears repeating.

I realise it is unlikely to happen in the near term but I firmly believe it will happen in the not-too-distant future (when Microsoft realises that they can’t compete with Open Source).

If you take it simply from a numbers perspective, Microsoft has 70,000 employees. If we say 40,000 of those are actively programming code for Microsoft (the rest being admin, management, marketing, etc.), and only a fraction of those work on Windows rather than Microsoft’s many other products, then you are looking at a maximum of 10,000 who would have contributed to the development of Vista, Microsoft’s current Windows incarnation. I suspect the number is lower.

Vista is estimated to have cost Microsoft $10 billion and six years to develop, and they still shipped a fairly shoddy product.

Presumably Microsoft will want to recoup that investment before it even thinks about open sourcing Windows.

Compare that with the various Linux distros. It is estimated that around 100,000 people have contributed to Linux’s development! I recently installed Ubuntu on my laptop and it simply blows Vista away in terms of performance and reliability.

Why are Ubuntu and the other Linux distros so good?
Lots of reasons, but a few jump out:

  1. With open source development, you are getting the “Wisdom of Crowds” – the more people involved in the development, the better the end result
  2. Open source development is peer reviewed, so bugs are caught earlier in the process and any which make it into a release are fixed quickly
  3. In open source projects the code is written by people who self-select for tasks matching their interests and skill sets
  4. Feel free to add more in the comments!

The upsides for Microsoft of open sourcing Windows are myriad, for example:

  1. If/when Microsoft open source Windows, their Windows piracy concerns will suddenly disappear
  2. Microsoft drastically improves its corporate image, shedding its reputation as an anti-competitive, bullying monopolist
  3. The next operating system they write would cost a fraction of the $10bn spent on Vista and would be much higher quality

The economics of Open Source are counter-intuitive. IBM spends around $100m a year on Linux development. If the entire Linux community puts in $1 billion worth of effort and even half of that is useful to IBM’s customers, then IBM gets $500m of development for $100m worth of expenditure.

If Microsoft could, in one fell swoop, get rid of their Windows piracy concerns, write better quality software, improve their corporate image, and radically reduce their development costs, do you think they would do it?

Are IBM, Google and Sun ganging up on Microsoft?

I see IBM are now jumping into the free office software arena by launching IBM Lotus Symphony.

IBM Lotus Symphony is a free download from the IBM site (registration required).

Up until now, Microsoft’s competition in this space has come from OpenOffice and Google – neither of which has a strong track record in the enterprise office space! The entry of IBM is game-changing.

As well as making Symphony free to download, IBM are also committing 35 developers to the OpenOffice development project. Again, conferring the IBM seal of approval on OpenOffice suddenly marks it out for serious consideration by larger companies.

Seen in light of these announcements, Microsoft’s recent move to capture the student market for Office begins to have an air of desperation about it!

Ubuntu first impressions

Using Wubi, I installed Ubuntu onto my Vaio laptop over the weekend (Ubuntu is a Linux distro – an open source operating system).

Apart from some nervousness on my part about losing any info from my Windows partition, the install was completely painless.

Ubuntu Screenshot

The interface is really slick – it is obvious that lots of time and thought went into the look and feel of this OS.

It is also incredibly fast (despite being installed into a single file in the Windows partition as opposed to a normal install). From a standing start to being able to open a web page, Vista took four minutes thirty seconds on this machine; Ubuntu took one minute fifty seconds on the same machine.

I’m trying out Evolution (the email client) now, and I will start trying other apps as well to see how they compare. For now though, I am impressed.

Backup software for Vista?

I want to install a copy of Ubuntu on my laptop.

However, when Vista was installed on it, the hard drive was set up as a single partition, so if I try to install Ubuntu now it will overwrite the Vista partition (I assume – anyone knowing better, feel free to jump in!).

I presume that what I need to do is back up my Vista install, partition the drive into one partition for Vista and one for Ubuntu, restore Vista into its partition and install Ubuntu into the other.

Can anyone recommend software to allow me to back up my Vista install (including all my installed apps and settings), so that I can restore it again later?

In case it is relevant, I don’t have a floppy drive for the laptop.

Update – since posting this I came across Wubi, an Ubuntu installer which installs Ubuntu into a file on the Windows partition. This could be an easier solution. I’ll try that and see how I get on.