Over the last couple of years, I’ve asked many CIOs about the efficiency of their datacenters. Most answered that it was somewhere between 10 and 15%. In other words, servers perform valuable work for only two to three hours a day on average. There are many good reasons why that’s the case, and virtualization can improve it. A McKinsey study titled “Clearing the air on cloud computing” states that, through aggressive virtualization, efficiencies of around 30% can be reached. Great, we have just doubled the efficiency of our servers. But they still consume floor space, energy, management attention, etc., while doing nothing 70% of the time.
Can we do something to improve this further? We are limited to 30% efficiency because we aim to address 100% of our needs with our own datacenter, while those needs vary over the course of the day, the month and the year: month-end and year-end jobs, marketing campaigns, seasonal peaks, etc. But do we really need to deliver 100% of our needs ourselves?
The strategic service broker
In part 3, we discussed the vision of the CIO as the strategic service broker. We identified that some services no longer need to be delivered by internal IT assets, but can be sourced from service providers. For those, we don’t really care about server efficiency, as we pay per use. In subsequent parts we reviewed the criteria for deciding which services can be sourced that way.
We also identified the services that have to be delivered by the IT department itself. And then there is the gray area: services that could move outside, but that we tend to keep inside due to the proximity of the data. Could I reduce the number of servers I own to something like 70 or 80% of my maximum capacity?
And then there is cloud bursting
What happens if a user chooses an internal service, but no resources are available to run it on the enterprise cloud? Obviously, there are other clouds around; couldn’t we just ship the service over to one of them and run it from there? That is the whole concept of cloud bursting.
Cloud bursting is defined as an application deployment model in which an application runs in a private cloud or data center and bursts into a public cloud when the demand for computing capacity spikes. The advantage of such a hybrid cloud deployment is that an organization only pays for extra compute resources when they are needed.
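The trigger behind that definition can be sketched in a few lines. This is a minimal illustration, not a real scheduler: the capacity figure and the function name are assumptions made for the example.

```python
# Illustrative sketch of a cloud-bursting placement decision.
# PRIVATE_CAPACITY is an assumed figure, not taken from any real product.

PRIVATE_CAPACITY = 100  # total VM slots available in the private cloud


def place_workload(requested_vms: int, vms_in_use: int) -> str:
    """Run in the private cloud while capacity lasts; burst the overflow
    to a public cloud when demand spikes beyond what we own."""
    if vms_in_use + requested_vms <= PRIVATE_CAPACITY:
        return "private"
    return "burst-to-public"


print(place_workload(10, 60))   # fits internally -> private
print(place_workload(30, 85))   # demand spike -> burst-to-public
```

The organization only pays for the public resources while the `burst-to-public` branch is active, which is exactly the economic argument made above.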
Sounds easy, doesn’t it? Unfortunately, reality is a little more complex. Let me try to describe what needs to be taken into account.
There are a number of aspects to look into. Is the service allowed to run outside the walls of the enterprise? If yes, we could provision one or more virtual machines on an external cloud, set up the service, transfer the data and run it from there.
But that’s easier said than done. How much data does the service actually need to access, where is that data today, and how sensitive is it? If the amount of data is small and not very sensitive, it makes sense to transfer it and run the service externally. I have previously discussed how to integrate data between clouds; those approaches are perfectly usable in the case of bursting.
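The checks just described can be condensed into a simple eligibility test. The threshold and parameter names are hypothetical, chosen only to make the logic concrete:

```python
# Hypothetical sketch of the bursting eligibility checks described above:
# may the service leave the enterprise, and is its data small and
# non-sensitive enough to be worth transferring?

def can_burst(allowed_outside: bool, data_gb: float, sensitive: bool,
              max_transfer_gb: float = 50.0) -> bool:
    """Return True if the service may be shipped to an external cloud."""
    if not allowed_outside or sensitive:
        return False
    # Beyond some volume, the transfer cost outweighs the benefit.
    return data_gb <= max_transfer_gb


print(can_burst(True, 5.0, False))    # small, non-sensitive -> True
print(can_burst(True, 500.0, False))  # too much data to move -> False
print(can_burst(False, 1.0, False))   # must stay inside -> False
```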
What if the application cannot run outside the boundaries of the enterprise? This is where we run into a real issue. Fundamentally, there are two ways to approach it:
- Are there one or two lower-priority services, such as background tasks, that can be halted temporarily to free servers for the task at hand? That’s the easy way of addressing the problem; however, it may make the users of those background tasks rather unhappy, as their applications are delayed.
- Alternatively, you migrate another service, one that is allowed to run externally, to the external cloud, making room to run the core service internally.
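The two options above can be sketched as a small selection routine. The service records and the priority convention (1 = high) are assumptions made for the example:

```python
# Illustrative sketch of freeing internal capacity for a service that must
# run inside: first try pausing a low-priority background task, then try
# migrating a service that is allowed to run externally.

def free_internal_capacity(services):
    """services: list of dicts with 'name', 'priority' (1 = high) and
    'may_run_externally'. Returns the (action, service_name) taken."""
    # Option 1: halt the lowest-priority background task.
    background = [s for s in services if s["priority"] >= 3]
    if background:
        victim = max(background, key=lambda s: s["priority"])
        return ("pause", victim["name"])
    # Option 2: migrate a service that may run outside the enterprise.
    movable = [s for s in services if s["may_run_externally"]]
    if movable:
        return ("migrate", movable[0]["name"])
    return ("none", None)


services = [
    {"name": "payroll", "priority": 1, "may_run_externally": False},
    {"name": "log-archival", "priority": 4, "may_run_externally": True},
]
print(free_internal_capacity(services))  # -> ('pause', 'log-archival')
```

Pausing is tried first because it is the cheaper move; migration is the fallback when no background task can be sacrificed.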
This latter scenario requires some explanation. Ideally, we would take the virtual machine and its associated data, push it to the external cloud and run it there. But that assumes the internally built virtual machine can run in the external cloud environment; in other words, that the two clouds are compatible. That is often not the case today. Many private clouds run VMware, while public clouds typically use KVM, Xen, Hyper-V or another hypervisor, making compatibility difficult. Standards are being developed, but none have been agreed upon yet, so the chances of this working today are slim.
The other approach is to stop the current service at a predefined point, provision the same service on the external cloud, transfer the data and restart it there. Such approaches are used in redundant systems, where they are called “cold standby.” There will be a delay in the execution of the transferred service, but room is made for the internal service to be provisioned behind the firewall.
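The cold-standby hand-over follows a fixed sequence of steps, which the following sketch makes explicit. The `Cloud` class and its methods are placeholders standing in for real provider APIs, not actual products:

```python
# Sketch of the "cold standby" hand-over: stop the service at a
# checkpoint, provision it externally, copy the data, restart it.
# The Cloud class is a hypothetical stand-in for a provider API.

class Cloud:
    def __init__(self):
        self.log = []  # records the operations performed, in order

    def stop(self, svc):
        self.log.append(("stop", svc))

    def provision(self, svc):
        self.log.append(("provision", svc))

    def export_data(self, svc):
        self.log.append(("export", svc))
        return {"service": svc}  # the checkpointed data

    def import_data(self, svc, data):
        self.log.append(("import", svc))

    def start(self, svc):
        self.log.append(("start", svc))


def cold_standby_transfer(service, private, public):
    """Move `service` from the private to the public cloud, accepting a
    delay while its data is copied across."""
    private.stop(service)                # 1. halt at a predefined point
    public.provision(service)            # 2. set up the same service externally
    data = private.export_data(service)  # 3. take the checkpointed data
    public.import_data(service, data)    # 4. transfer it
    public.start(service)                # 5. restart; internal room is now free


private, public = Cloud(), Cloud()
cold_standby_transfer("reporting", private, public)
print(public.log)  # -> [('provision', 'reporting'), ('import', 'reporting'), ('start', 'reporting')]
```

The delay the article mentions lives in steps 3 and 4; the larger the data set, the longer the transferred service is unavailable.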
As you can see, the term “bursting” covers multiple concepts. Although it is often presented as a key cloud approach, it is not that easy to implement. But it does help improve the efficiency of the datacenter and reduce cost.
When deciding where to place your applications, you may want to keep applications with bursting capabilities in the private cloud, particularly when the workload of your core services varies drastically over the course of the day, the week, the month or the year.