Sorry, Archimedes, for leveraging your quote. Kfir Godrich wrote a post earlier this month asking How Green is the Cloud World? Kfir discussed the effect of cloud on an organization's environmental impact. That reminded me of a few posts I'd done in the past:
- The first about the misunderstanding about the role of savings in the Green IT movement – Why does so much of Green IT miss the value of the NegaWatt?
- And one titled: Is the focus on “Green” too narrow?
There is part of another post from almost a year ago that may bear repeating:
One thing to keep in mind is that when looking at the computing resources in an organization, a whole range of assessments is required. The following is an excerpt from the chapter I wrote in The Next Wave of Technologies, a book that came out in 2009:
"For each one-hundred watts of power that is dug out of the ground in the form of coal, a significant portion (as much as 60 percent) is consumed in the power plant itself. Another ten percent is lost in the transmission process to get the power to the data center. This is one of the reasons many organizations are moving their data centers close to power plants. They are usually given power at lower rates, because they do not have to pay the transmission line penalty.
Once the power arrives at the traditional data center, another thirteen percent of the original energy is consumed by cooling, lighting and power conditioning. New techniques in lighting and data center design are being applied to address this area. A technique that is being investigated in many organizations to cut down the loss in power conditioning is the Direct Current (DC) only data center, but this requires special approaches to both powering the equipment as well as the power supplies in the equipment itself.
Now that the electricity has finally reached the servers, the power supplies and fans consume another eight percent of the original power. This leaves approximately nine of the original one-hundred watts available to do work. Most computers run at fifteen percent efficiency or less, so that leaves approximately one watt to apply to applications. If we take out holdover, obsolete applications that do not add value, and the inefficient processes that exist in most businesses, hardly any real business value comes out of that initial one-hundred watts of power.
Understanding the portfolio of hardware and software and weeding out the dead wood can radically improve the carbon footprint of an organization's IT investment. If you were able to kill off a parasitic application that has stopped generating value and save one watt worth of power at the server, it actually can save one-hundred watts worth of power being pulled out of the ground. Some examples of how power can be saved at every step have been included. However, keep in mind the effect of saving power on the right side of the diagram is amplified by the supply chain all the way back to the start."
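The chain of losses in the excerpt above can be traced with a few lines of arithmetic. This is just a sketch of the chapter's round numbers; the function name and structure are mine, not from the book, and each loss is expressed as a percentage of the original watts, as the text states:

```python
# Sketch of the energy chain described in the excerpt.
# Each loss is a percentage of the ORIGINAL power drawn from the ground.

def watts_reaching_applications(source_watts=100.0):
    """Trace power from the coal pile down to useful application work."""
    remaining = source_watts
    remaining -= source_watts * 0.60  # consumed in the power plant itself
    remaining -= source_watts * 0.10  # lost in transmission to the data center
    remaining -= source_watts * 0.13  # cooling, lighting, power conditioning
    remaining -= source_watts * 0.08  # server power supplies and fans
    # roughly nine watts reach the server's electronics
    useful = remaining * 0.15         # ~15% typical server efficiency
    return remaining, useful

at_server, at_apps = watts_reaching_applications()
print(round(at_server, 1), round(at_apps, 2))  # → 9.0 1.35
```

Running the chain also shows the leverage the chapter describes: because so little of the source power survives the journey, every watt saved at the application end is multiplied many times over back at the power plant.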
Improvements can be made at every level, but it is clear that the leverage is higher for changes made on the right side. This kind of environmental impact needs to be part of every cloud strategy – IT organizations need to reap the benefits to the business, wherever they can be found.
Many organizations are realizing their application portfolio isn't living up to their current and future requirements, and they need to determine how to approach the problem of separating the wheat from the chaff.
A technique we’ve used for years to focus our efforts is the consultant’s favorite tool – a quadrant chart. Each application is assessed along a few dimensions. Two of the most prominent are the application’s technical quality and its impact on business value.
Measuring technical quality can be tough. A mixture of qualitative and quantitative measures of architectural alignment with the future direction (in this case, cloud alignment could be one dimension) should be possible. Is it service oriented? Can it run in a cloud environment? Is it multi-threaded? What’s the on-going bug count like?
Measuring the business value has its own set of issues. You need to be able to consistently measure the application’s positive impact on revenue or its reduction to risks and expenses for the organization. This is one of the areas the Green IT crowd seems to overlook with their focus on hardware.
The on-going maintenance costs take away from the business value too -- if the application is inflexible or requires a high-cost support environment, those are real costs. Some people view that maintenance cost as a measure of technical quality. It could be, but it also subtracts from the business value.
Now there are numerous books, articles and approaches to tackling the problem, so at least you don’t need to start with a blank sheet of paper. Looking at the applications portfolio is a foundational element of any IT transformation, since the applications are what facilitate the generation of business value. HP is definitely placing focus on helping organizations assess their applications as well.
The first Earth Day (in the US) took place on April 22, 1970. It was founded by U.S. Senator Gaylord Nelson as an environmental teach-in and is now a catalyst for advances in environmental policies and perspective.
There are many ways that IT can help organizations, by providing better measurement, analytics and visibility into how energy is being consumed and waste produced. I had a post previously that discussed the various levels where sustainability change can take place - even within IT. This seems to be a recurring theme within IT organizations, probably because the impact of IT is so broad and deep in this space. The whole Green IT movement reinforces this perspective, but could still be expanded to view the problem holistically, since it needs to address more than just green data centers.
One of HP's seven corporate objectives is global citizenship, and sustainability is part of it - some people think we're doing a good job. There is actually quite a bit of research in HP Labs related to innovating for the environment. Green up.
About a week ago I had an interview with Doug Oathout about converged infrastructure. Since there were so many people who read and were interested in the topic, I thought I’d follow it up with another interview with Duncan Campbell.
One of the areas I keep coming back to is application portfolio management and modernization, since the calcification of spending under the weight of successful systems is a problem for almost every organization. I was talking with some folks the other day in the applications area of HP about a new study that Forrester conducted on behalf of HP in this space. There are some interesting points of view coming out of the survey.
A couple of items I found intriguing were the perspectives on high cost and lack of funding. Since one of the objectives of this kind of activity should be greater functionality at lower cost, it made me wonder whether it was a bootstrapping issue or a confidence issue. Some good prototypes that are lower risk but still visible would help organizations understand the impact and the risk, as well as provide some savings. I always tell people that if you do a pilot in an area no one cares about, no one will care about the results. On the other hand, you don’t want to risk the entire organization. Identifying valid prototypes is a skill whose importance can’t be overstated. There are other barriers to these modernization activities as well.
I also found it interesting that the survey stated that 1-20% of applications in organizations should be retired. This validates a previous number I’d heard from a CIO who went into an organization and stated that any system they couldn’t find an owner for within 90 days would be turned off. They ended up turning off 15% of their systems, and afterward only 2 systems had someone come back, say they couldn’t be turned off, and claim ownership. For the people concerned about green IT, this gets back to the whole negawatt concept mentioned in another post.