I’ve been reading a lot in the news about the complexity of IT. Nobody planned for things to get complex. It may have started with a strategic acquisition or a new product introduction, or perhaps with a decision to allow your business units to make some of their own “technology” decisions. More recently, budget restrictions have delayed the investments required to upgrade or replace aging systems, which now require far greater effort to maintain. Over time, these “one-off” purchases and projects start to look like isolated islands rather than a single cohesive IT environment. Complexity has crept up on us gradually, and with the ability to purchase “on demand” services, that complexity is compounded.
However we got here, it’s clear the challenges we face today are not going away; if anything, they will continue to increase, with a direct impact on the following:
- Increased support and operational costs: It’s expensive to work with and integrate multiple services from multiple service providers. Each has its own standards for delivering services, and it’s up to the IT organization to make sure the “back office” works as seamlessly as the “front office”. In some cases you didn’t even pick the service or service provider, but you’re still stuck making it work. All of this adds up to more complexity and more cost.
- Increased risk: A complex infrastructure delivered by a multitude of service providers has many more points of failure. More things can go wrong, and they are much harder to fix when they do. Add your own legacy environments and you have a recipe for disaster. The potential for business-stopping outages increases dramatically in this type of multi-vendor environment.
- Decreased agility: It seems at odds with what we hear today about “on demand” services, but when resources are consumed keeping all the services and service providers in sync, it’s difficult to be responsive to the long-term needs of the business. Then, when change is required, complex relationships and competing requirements become a bottleneck to making things happen.
If you have struggled with infrastructure complexity, know that you are not alone. Organizations of every size wrestle with the ongoing balancing act of creating the optimum computing environment, balanced for cost and operational efficiency, while giving their users the flexibility to drive innovation. But there are three things you can do to decrease complexity:
- Continuously push toward standards: Decreasing the number and types of infrastructure components minimizes variability, provides more predictability, and enables compliance, while reducing dependence on specialty skills that are hard to find.
- Automate everything: Automating infrastructure management processes, including asset tracking, application provisioning, patching, code deployment, rollback, monitoring, failover, and resource assignment, will save you time, decrease costs, increase consistency, and improve reliability.
- Consolidate across the board: An initiative that results in reducing the number of servers, storage systems, network devices, processors, applications, databases, or sites will deliver cost savings and simplify manageability. The immediate impact on IT budgets makes consolidation a compelling way to fund future infrastructure improvements.
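To make the first two recommendations concrete, here is a minimal, purely illustrative sketch of the kind of check an automated process might run: comparing each server’s installed software against a standard baseline and flagging drift. The server names, package names, and versions below are hypothetical examples, not drawn from any real environment or tool.

```python
# Illustrative sketch: automated drift detection against a standard baseline.
# All server names, package names, and version numbers are hypothetical.

def find_drift(inventory, baseline):
    """Return (server, package, found, expected) tuples for every package
    that is missing or does not match the baseline version."""
    drift = []
    for server, packages in inventory.items():
        for package, expected in baseline.items():
            found = packages.get(package)  # None if the package is missing
            if found != expected:
                drift.append((server, package, found, expected))
    return drift

if __name__ == "__main__":
    baseline = {"openssl": "3.0.13", "kernel": "5.15.0"}
    inventory = {
        "web-01": {"openssl": "3.0.13", "kernel": "5.15.0"},  # compliant
        "web-02": {"openssl": "1.1.1k", "kernel": "5.15.0"},  # stale openssl
        "db-01": {"kernel": "5.15.0"},                        # openssl missing
    }
    for server, package, found, expected in find_drift(inventory, baseline):
        print(f"{server}: {package} is {found}, baseline is {expected}")
```

Run on a schedule and fed from a real asset-tracking system, a check like this is what turns a standard from a document into something continuously enforced.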
We can sometimes do all the right things and still end up with a very complex IT environment: one that costs too much to maintain, is hard to change, and creates all sorts of risk for the organization. As with most things in life, the answer usually lies in getting back to a few basic principles, in this case three: Standardization, Automation, and Consolidation. I believe returning to those three basics will do wonders for eliminating complexity within your IT environment.
For more information on how HP can help, click here.