In my last blog entry, I discussed some of the points Nicholas Carr made at a conference in Warsaw. I finished by asking whether the provisioning of cloud infrastructure services would be centralized in a small number of extremely large providers, or whether a hybrid model would emerge.
Let me try to explain what I mean by a hybrid model. Three main questions lie at the center of this approach:
- Who can deliver cloud services most cheaply: the central IT departments of large enterprises, or public cloud service providers? Despite everything that has been published, the answer is not easy. On the one hand, service providers point to their economies of scale, and that definitely speaks for them. On the other hand, enterprises point to the service providers’ need to make a profit and to the cost of bandwidth. It’s true that cloud technologies are available to enterprises and service providers alike. It is also possible for large, global enterprises to reach efficiencies close to those of service providers: they can position their datacentres in similar locations and gain access to cheap energy (in some situations probably even cheaper, given the overall volume of their energy consumption).
- Are enterprises prepared to fully rely on service providers? Frankly, for many of the ones I have talked to lately, this is not the case today. But I have seen companies outsource their manufacturing, their logistics and many other critical business processes, so why not IT? Once concerns around security, compliance, service levels and so on have been addressed, and once trust has been established, there is no reason IT would not go the same way. So the answer here should probably be yes in the long run, at least for the vast majority of enterprises.
- Can enterprises mitigate risk? What if the service provider fails? Two weeks ago I was in Palo Alto and could not get access to the internet; indeed, the town was cut off from the web for several hours. Enterprises should build contingency plans to mitigate such risks, for example by not putting all their eggs in one basket and using multiple service providers. But this should be invisible to end-users, so services should be aggregated on a single platform.
One possible approach to this would be a hybrid model, one where enterprises keep a datacentre, potentially outsourced. That datacentre would address the day-to-day needs of the enterprise, using virtualization and automation to drastically increase its efficiency. But obviously there is peak demand. At the end of the month, the quarter or the year, or when a new product or service is introduced, additional capacity is required. If we do not provision the datacentre for such peaks (which would reduce its overall efficiency, in other words increase its relative cost), that capacity has to come from somewhere. And here is where the public cloud plays a role: one could “burst out” to the cloud.

Obviously, there are workloads the business would not like to see leaving the enterprise perimeter. These may include financials, forecasting, order processing and so on. But there is probably no issue in letting the website or partner interactions migrate to the cloud for a period of time. This implies that policies are in place defining which workloads can and cannot migrate to the public cloud. The service configurator would decide which resource pool (private or public) to tap at the moment the service is provisioned. Ideally, one would expect to be able to move workloads on the fly (while they are being executed). However, this requires compatible software stacks, as one would expect to run the same VM in both environments. As no standards are available to date, this limits the possibilities.
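To make the burst-out idea concrete, here is a minimal sketch of what such a service configurator’s placement decision could look like. Everything in it is illustrative: the `BURST_ALLOWED` policy table, the `Pool` abstraction and the capacity numbers are assumptions, not an actual product’s API.

```python
from dataclasses import dataclass

# Hypothetical policy table: which workload types may leave the
# enterprise perimeter and run in the public cloud.
BURST_ALLOWED = {
    "website": True,
    "partner-portal": True,
    "financials": False,
    "order-processing": False,
}

@dataclass
class Pool:
    """A resource pool (private datacentre or public cloud), counted in VM slots."""
    name: str
    capacity: int
    used: int = 0

    def has_room(self, vms: int) -> bool:
        return self.used + vms <= self.capacity

def place_workload(workload: str, vms: int, private: Pool, public: Pool) -> str:
    """Prefer the private datacentre; burst out to the public cloud only
    when the private pool is full AND policy allows the workload to migrate."""
    if private.has_room(vms):
        private.used += vms
        return private.name
    if BURST_ALLOWED.get(workload, False) and public.has_room(vms):
        public.used += vms
        return public.name
    raise RuntimeError(f"no capacity, or policy forbids bursting: {workload}")
```

The key design point is the order of the checks: capacity alone never decides placement, because a sensitive workload such as financials stays in the private pool even when that pool is full.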
Being able to run services both in one’s own datacentre (private cloud) and in the public cloud requires attention in the data area. Indeed, where is the data located? Two scenarios can be envisaged: either the data is small in volume and is transferred to the cloud along with the workload, or it is large and a copy needs to be kept in the cloud. The latter implies additional costs, as maintaining data in the cloud does not come free of charge.
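The trade-off between the two data scenarios is essentially a cost comparison, which can be sketched as follows. The prices used here are made-up placeholders, and the model deliberately ignores refinements such as synchronization traffic for the standing copy.

```python
# Assumed, illustrative prices -- substitute a real provider's rates.
TRANSFER_COST_PER_GB = 0.09        # $/GB moved over the wire
CLOUD_STORAGE_PER_GB_MONTH = 0.02  # $/GB/month for a standing copy

def cheaper_strategy(data_gb: float, bursts_per_month: int) -> str:
    """Compare shipping the data set with every burst against keeping a
    permanent copy in the cloud (one initial transfer plus monthly storage)."""
    transfer_each_time = data_gb * TRANSFER_COST_PER_GB * bursts_per_month
    keep_copy = data_gb * TRANSFER_COST_PER_GB + data_gb * CLOUD_STORAGE_PER_GB_MONTH
    return "transfer-per-burst" if transfer_each_time <= keep_copy else "keep-cloud-copy"
```

Under these assumed prices, a data set that bursts out once a month is cheaper to ship each time, while one that bursts out several times a month quickly justifies a standing copy in the cloud.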
Some enterprise-grade cloud services allow bridging the enterprise intranet with the cloud service through leased lines. This may allow workloads located in the public cloud to access the same data as when they run in the datacentre.
There is another reason why companies should look at hybrid scenarios: in “cloudifying” themselves, enterprises will have to run a mixed cloud/non-cloud environment, and this should happen in a way that is as transparent as possible for users. But I’ll address that in my next blog entry.