Partially as a result of the recession, and perhaps partly business as usual, optimization is once again a mantra in corporate America and in IT. Cost reduction and cost avoidance have led us once again to the concepts of continuous process improvement. During the economic crisis, many of our planned optimizations were outpaced by sheer constraints on our budgets. Now, as the economy strengthens, is there a choice between optimizing and innovating?
As always, the opinions presented in this blog are mine, and not those of my employer.
I depart a bit from the usual discussions about technology and lifecycle in this posting to chat about the continuous process improvement planning we are all going through. Windows 7 and this upcoming technology refresh cycle have us planning at a more detailed level than in previous cycles. Optimization plans, based upon my experience, typically yield year-to-year improvements in the range of 10% to 15%. While this is not trivial by any stretch of the imagination, management and our companies today seem to want the "step" change that is derived from innovating.
Innovating in client computing suggests that not only will the end user experience change, but also the related economics. This is one of the challenges we in IT face today. We have been optimizing year to year for quite a while; not only have we been doing more with less, in many cases we have been doing more with much less. At some point the benefits of optimizing enter the realm of diminishing returns. At that moment, innovation becomes the best alternative.
Many businesses in client computing seek "the next big thing," the "step change." In my opinion, one of the greatest challenges facing IT today is innovating while optimizing. The day-to-day rigor will simply not wait while we innovate.
In my recent work with many businesses, we have actually focused on two plans: the business-as-usual optimization plan, and a separate innovation plan. The assumption is that as we optimize in this refresh (as an example), IT will request funding from the savings to subsidize innovation. Pilots and proofs of concept cost dollars, whether for products, services, or resources. In an earlier blog I commented that client lifecycle management works best when it is delivering measurable savings that can impact the net IT spend.
Given the economy, it may seem expedient to continue optimizing and defer investing in innovation. But just as we have learned in technology refresh cycles, deferring decisions quite often results in a "big bang" or a more significant scope to address. That may be the case with innovation as well.
For client lifecycle management, innovation could take any of a myriad of forms: PC as a Service, virtualization, cloud computing, standardization, user segmentation, consumerization, and so forth. Webster defines innovation very simply as "something new, a product or service." Perhaps we make innovation too complex to be aligned with our tactical requirements. By this definition, innovation is something that is new to your business (in other words, not necessarily the "bleeding" edge).
At some point, as diminishing returns set in, innovation becomes perhaps the only viable way to address continuous process improvement. In a tug of war between optimizing and innovating, over the longer haul, innovation wins.
The simple answer is that businesses need to do both, but resources can only be stretched so far, hence the need for two plans (or at least one plan with two unique sections).
What is your strategy for dealing with both optimization and innovation? Is there a budget for both? Are the resources the same? These are just a few of the questions we would like to hear from you about.
Thank you for your participation in this dialog.
User segmentation is not a new concept; it has been around for a while. What has appeared to be lacking is an operational definition and a set of compelling reasons to adopt the approach. Given all that IT has to deal with (the economy, delivering high service levels, and a continuing focus on end user satisfaction), the timing may be optimal for the methodology to take off into the mainstream. As with all of my blogging, the opinions and views expressed in the blog are mine, and not necessarily those of my employer.
For quite a while, user segmentation has been in the background while other disciplines such as TCO and optimization were top of mind. Part of the issue, perhaps, was that the methodology was lost in the larger belief that across-the-board standardization was the single driver of costs. However, with innovation occurring at a frantic pace, there is technology today that frankly did not exist at scale 2 to 3 years ago, and it can now deliver cost benefits that may be greater than those of standardization. Think about virtualization scaling from pilots and proofs of concept to the mainstream, the ubiquitous handheld devices, and the too-many-to-count form factors that an end user could have. These are but a few of the examples.
Interestingly, user segmentation has been considered subjective or anecdotal, but we are all using it every day. A case in point: is there a company among us that has the same service level for all of its workers, from executives to teleworkers to engineers? Of course not. And yet, if IT's chargeback system or allocation back to the end user departments cannot be altered, there may be no consequence for departments, business users, or end users in making certain IT decisions. Yet IT is still required, usually at a lesser cost, to support these solutions. User segmentation has also "suffered" from being equated with the end user "profiles" one could address in certain software tools, or within HR job codes.
So how does your business deal with this type of issue?
In my work with various businesses, it is becoming more apparent, due in part to the economy and in part to the risk associated with certain decisions, that user segmentation is now a mainstream discipline. Closed Loop Lifecycle Planning, which is the name for my body of work, defines user segmentation as the optimal alignment of end user requirements to the suite of access devices, risk, costs, and service levels.
The conversation is no longer solely about enabling end users, but about finding the most effective solution to improve the experience without changing the overall economics, particularly if there is cost reduction and cost avoidance available in bringing about the change.
The other driver that is making, or will make, user segmentation mainstream is simply device diversity: where the devices are, how they are used, and what is contained on them.
An argument could also be made not to adopt user segmentation, and that could be a logical decision. It is suggested that the cultural, social, or political rationale (or the economics) may play a large role in user segmentation adoption.
Given this backdrop, what are your thoughts about user segmentation as a mainstream methodology? Has your business considered the approach, and if so, has it been successful, or has it been dismissed for any reason?
Past implementations were based upon user descriptions such as power users and knowledge workers, which would itself seem to present a challenge in terms of execution across an entire enterprise. Most businesses seem to have six or so predominant user segments that can be identified and managed in a manner that allows optimization to occur.
Have you seen the same experience in your business? My opinion is that user segmentation is ready to be a prime time approach, and where it works, it could work exceedingly well.