If you have been following this blog, you’ll know I have been profiling Data Center efficiency companies over the last few weeks. This week I take a look at Sentilla.
I talked to Sentilla’s CTO and co-founder, Joe Polastre, the other day and Joe told me that Sentilla came out of Berkeley where they had been looking at data analytics problems around large, potentially incomplete or inaccurate, streaming datasets. The challenge was how to turn that into a complete picture of what’s going on so people could make business decisions.
Sentilla takes an industrial manufacturing approach to Data Centers – in manufacturing you have power going in one side, and products and (often) waste heat coming out the other. In the same way, in a data center you have power going in one side and, coming out the other, the product (compute cycles) and waste heat. To optimise your data center you need to get the maximum data/compute (product) output with the minimum power in and the least waste heat generated. Sentilla thinks of data centers as data factories!
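The "data factory" model above boils down to a simple ratio: useful output per unit of power drawn. A minimal sketch of that idea (my own illustrative numbers and function name, not anything from Sentilla):

```python
# Illustrative sketch of the "data factory" view: efficiency is the
# useful compute delivered per kW drawn, not just total kW consumed.

def compute_per_kw(useful_compute_units: float, power_in_kw: float) -> float:
    """Return the 'product' (compute units) delivered per kW of power in."""
    if power_in_kw <= 0:
        raise ValueError("power draw must be positive")
    return useful_compute_units / power_in_kw

# Example: 1,200 compute units delivered while drawing 400 kW in total
# (IT load plus cooling). The optimisation goal is to raise this ratio,
# not merely to shave the 400 kW.
print(compute_per_kw(1200, 400))  # → 3.0
```

The point of framing it this way is that two data centers with identical power bills can have very different efficiency, depending on how much product comes out the other side.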
Unlike most of the data center people I have talked to, Sentilla don’t talk so much about energy savings. Instead they emphasise maximising performance – getting the most out of your existing data centers, your existing storage, your existing servers, your existing power supply. The greatest saving from deploying Sentilla, Joe claimed, is not the energy saving at all – that pales in comparison to the capital deferment gained from being able to delay building extra data center facilities by several years.
So how does Sentilla help?
Well Sentilla analyses the energy profile of every asset in the data center, whether metered or not, and makes recommendations to improve the planning and management of data center operations…
The power consumption of desktop computers, which are often only used eight hours a day (and may need to be woken once a month at 3am for an update), is relatively straightforward to manage. The power management of servers, on the other hand, is quite a bit more complex. Servers are, by definition, supposed to be available at all times, so you can’t simply shut them down, right?
NightWatchman helps in a number of ways. First, its agent-based software quickly identifies servers whose CPU utilisation is associated solely with their own management and maintenance processes (i.e. the server is unused). These servers can be decommissioned or repurposed…
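The idea of spotting an "unused" server from its CPU profile can be sketched roughly as follows. This is my own hedged illustration of the heuristic, not NightWatchman’s actual logic – the process names and the 95% threshold are assumptions for the example:

```python
# Sketch of the heuristic: flag a server as unused when nearly all of its
# CPU time over an observation window is attributable to its own
# management/maintenance processes (backups, AV scans, patching, etc.).

MANAGEMENT_PROCESSES = {"backup_agent", "av_scanner", "patch_service"}  # assumed names

def is_unused(samples: list[dict], threshold: float = 0.95) -> bool:
    """samples: [{'process': name, 'cpu_seconds': float}, ...] for one window."""
    total = sum(s["cpu_seconds"] for s in samples)
    if total == 0:
        return True  # no CPU activity at all in the window
    mgmt = sum(s["cpu_seconds"] for s in samples
               if s["process"] in MANAGEMENT_PROCESSES)
    return mgmt / total >= threshold

window = [
    {"process": "backup_agent", "cpu_seconds": 540.0},
    {"process": "av_scanner", "cpu_seconds": 120.0},
    {"process": "app_server", "cpu_seconds": 12.0},
]
print(is_unused(window))  # → True: ~98% of CPU time is self-maintenance
```

A server that fails this test is doing real work for someone; one that passes it is burning power purely to look after itself, which is exactly the decommission/repurpose candidate described above.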
Conference organising company iQuest contacted me last year to ask me to deliver a keynote presentation at their Green IT Summit.
The event took place in Dublin yesterday and my keynote talk entitled “Green IT – driving efficiency, sustainability and enabling efficient working practices” is above.
The organisers prudently decided that they didn’t want to risk any of their international speakers not making it to the event because of the ash cloud, which would have left them with a hole in the schedule at the last minute. So they contracted the services of OnlineMeetingRooms, and three of the presenters were able to present to the audience in Dublin over an online video connection, without having to travel!
The title I was asked to present on was quite broad and I had only 30 minutes to try to cover it all, so I had to go at quite a clip. The feedback has been extremely positive, though, so it seems to have worked out very well.