Today HP launched the industry's most complete software-defined network fabric for cloud. This network fabric is built on the HP FlexNetwork architecture, enabling business agility for clients by delivering twice the scalability and 75 percent less complexity than current network fabrics, while reducing network provisioning time from months to minutes.
This is made possible by:
- Improving IT productivity by unifying the virtual and physical fabric with new HP FlexFabric Virtual Switch 5900v software, which, in conjunction with the HP FlexFabric 5900 physical switch, delivers advanced networking functionality such as policies and quality of service to a VMware environment. Integrated Virtual Ethernet Port Aggregator (VEPA) technology provides clear separation between server and network administration to deliver operational simplicity.
- Reducing data center footprint with the HP Virtualized Services Router (VSR), which leverages the industry's first carrier-class, software-based Network Functions Virtualization (NFV) to deliver services on a virtual machine (VM), eliminating unnecessary hardware.
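To make the "months to minutes" provisioning claim above a bit more concrete, here is a minimal sketch of what programmatic provisioning against a software-defined fabric might look like. The controller URL, endpoint path, and payload schema are hypothetical placeholders of my own, not HP's actual API:

```python
import requests

# Hypothetical SDN controller endpoint -- a placeholder for illustration, not HP's actual API.
CONTROLLER = "https://sdn-controller.example.com/api/v1"
AUTH = ("admin", "secret")  # illustrative credentials only


def provision_tenant_network(tenant, vlan_id, qos_mbps):
    """Ask the fabric controller to create a tenant network with a QoS cap.

    With a software-defined fabric this is a single API call that the
    controller pushes to every physical and virtual switch it manages,
    rather than a ticket queue and per-device manual configuration.
    """
    payload = {
        "tenant": tenant,
        "vlan": vlan_id,
        "qos": {"rate_limit_mbps": qos_mbps},
    }
    resp = requests.post(f"{CONTROLLER}/networks", json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(provision_tenant_network("engineering", vlan_id=210, qos_mbps=500))
```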
As organizations move to software-defined networks, some fundamental changes in approach will be required, and these products are a start down that path. Here is a video with a bit more high-level discussion and some details:
The HP Moonshot System is a leap forward in infrastructure design that addresses the speed, scale, and specialization needed for a bold, new style of IT.
HP ProLiant Moonshot servers are designed and tailored for specific workloads to deliver optimum performance. The servers share management, power, cooling, networking, and storage. This architecture is key to achieving 8x efficiency at scale, enabling a 3x faster innovation cycle, and bringing thousands of cores to bear on a project. The system uses 86 percent less energy and 80 percent less space, costs 77 percent less, and is significantly less complex to install and maintain.
After talking with other technologists, I believe this is the start of a path that will change both how software is written and how solutions are envisioned. When I look at the initial product data sheet, I see a 4.3U chassis that can hold up to 45 server cartridges. As processing capability improves, so can the cartridges. A full rack of these could replace the computational capability of an entire data center from just a few years ago. Granted, it excels at certain types of computing needs.
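As a rough back-of-the-envelope check on that density claim, here is a quick calculation using the chassis figures above (the 42U rack height is my assumption, not a figure from the data sheet):

```python
# Back-of-the-envelope Moonshot rack density, using the chassis figures above.
# The 42U rack height is an assumption, not taken from HP's data sheet.
chassis_height_u = 4.3          # Moonshot chassis height in rack units
cartridges_per_chassis = 45     # server cartridges per chassis
rack_height_u = 42              # typical rack, assumed

chassis_per_rack = int(rack_height_u // chassis_height_u)
cartridges_per_rack = chassis_per_rack * cartridges_per_chassis

print(f"{chassis_per_rack} chassis per rack -> {cartridges_per_rack} server cartridges")
# 9 chassis per rack -> 405 server cartridges
```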
As the HP Pathfinder Innovation Ecosystem improves and continues to bring together leading partners, a broader set of problems can be addressed:
This means having access to the latest technology and solutions at a groundbreaking time-to-market pace measured in months rather than years. I can’t wait to see what the next big thing to spring forth from this will be.
With all the examples of disasters with Sandy & blizzards… this week, I was reminded of the IT calamities I’ve encountered or discussed with people who were at the sharp end of the stick.
Recovery failures take place even for organizations that plan and test their disaster recovery plans on a regular basis. It is not a lack of preparedness (in the traditional sense) that catches organizations off guard, but a lack of imagination about what can really go wrong. Some of the big problems I can recall were:
- When the Three Mile Island incident took place, EDS had a large data center downwind in Pennsylvania. What do you do when the National Guard comes in and tells you to drop everything and leave? You can’t get tapes. You can’t start jobs. You just have to leave. That makes you look at disaster plans a bit differently from then on.
- During Katrina there were situations where an organization’s data center was physically OK, but there was no power and not likely to be any soon, and the off-site backup was underwater. Extreme measures had to be taken to fly someone into the data center, retrieve the latest information, and move it somewhere else for the processing to be performed. This situation makes you think differently about geographic diversity and how close to real time is timely enough.
- Let’s not forget one of the issues that hit Fukushima was not that they didn’t have sufficient backup cooling, but that their pumps were flooded by the tsunami. Sometimes it may not just be one disaster you need to deal with.
It is the variety of personal experience, and interaction with a network of experienced people, that can be useful in pointing out flaws.
Once I was part of a team brought in to look at a client’s data centers. They were very proud of their disaster recovery center, until we pointed out that the failover site was in the flood plain below a dam, near an earthquake fault and also close to where wildfires regularly take place. It would likely be most vulnerable when they needed it. They’d never really thought about it quite that way before.
That’s one reason the disaster scenarios you test against should come from someone outside your normal organization. Shake it up, move it around, and get some imagination through diverse perspectives, rather than just counting on someone in-house to think up a good case.
Interop is usually thought of as more of a networking event, but this year almost all the keynotes… were cloud-focused. The vendor exhibition area was probably 60 percent networking and 40 percent cloud.
Last night I was at an Interop gathering talking with a number of folks from Canada about cloud deployment. One of them used the “computing is turning into a utility, like the electric company” analogy that I now avoid. Some people believe organizations will stop having data centers and say, off the cuff, “we don’t generate our own power anymore – we just consume what the utilities provide.”
To this I say, “Utilities provide 60 Hz and 120 volts (or more, at least in the US), and if that doesn’t meet your needs, it is up to you to convert it into something else, e.g., DC.” That’s not exactly what we expect from computing environments -- at least once you look a bit deeper. There are numerous attributes with different costs, and there are licensing implications as well. Every project, business process, or organization has its own unique requirements. Sure, some of them can be aggregated into a virtualized private cloud, but it will be quite a while before all computing needs are met by a standard computing environment.
At the same time as computing is standardizing… electric power generation is actually decentralizing (to a small extent) with solar panels and windmills. So they may become more similar, just from two different directions: power utilities are coming from being highly regulated and centralized, and computing from a relatively unregulated and distributed direction.
In our discussions at Interop today, we asked the crowd, “How many people are looking at cloud as a way to cut costs?” Few hands went up. When we asked, “How many people are looking for it to increase flexibility?” significantly more went up. That’s good, since the whole life cycle of cloud environments isn’t necessarily focused on cutting costs.
Every 7-10 years, technology development and delivery undergoes a fundamental shift that opens up new business models and value generation opportunities. These shifts fundamentally change the way that technology is consumed and the value that it can bring. These shifts change what is possible and break down the barriers to innovation. Today, mobility, consumerization and cloud computing are signposts that mark the shift that is underway.
So what is the implication for IT?
- Opportunities – but at the same time, risk
- Agility – but at the same time, a need for control
- Flexibility – but also the possibility of lock-in
The HP cloud offerings announced today are a start down the road to changing the way infrastructure is built, applications are developed, services are defined, and information is delivered. I will not cover the details in this post (you can see the press release for the official view), but focus instead on some of the underlying philosophy.
Early adopters of cloud services have found these techniques can provide both improved “time-to-value” and cost flexibility. Today, many mainstream organizations see cloud services as a key delivery model that can increase their ability to address organizational objectives in a demanding and unpredictable world -- a world where a major constraint is the number of seconds in a day, and where cloud-enabled practices can be a cornerstone of their ability to gain access to the right IT services, from the right places, at the right time, at the right cost, and to create the means to speed innovation, enhance agility, and improve financial management.
HP believes organizations will need to implement a hybrid delivery strategy that leverages cloud services as part of their IT delivery and consumption strategy. To make this happen, HP’s focus is on enabling choice, not making choices for organizations. Hopefully everyone recognizes that if you have a well-understood set of computational requirements that are stable and consistent, it is better to own those capabilities – in those cases, “the cloud” will not be cheaper, for much the same reason it can be cheaper to own a home than to rent one. So our view is that new, more flexible solutions will be combined with traditional means to build and consume IT services. HP also intends to continue supporting a market where there are leaders and laggards in cloud adoption, so one size cannot fit all organizations.
In order to deliver on the promise of the cloud and hybrid delivery, where everything has to be sourced and assembled at will -- information in all forms must be harnessed and exploited from inside and outside the enterprise in a secure fashion. This is an area where HP is performing research and development, since there are still many unknowns about the best way to address this need. This flexibility demand creates an environment where the IT mix can rapidly shift as organizational requirements change. Naturally, this will require changes to how software is architected and written. Application development and operational infrastructure must be visible, accessible and manageable in a consistent manner. Standardization must be in place to allow portability of services across deployment models and reduce lock-in.
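One way to read that portability requirement is to keep application code written against a small, provider-neutral interface so a service can move between deployment models without rework. A minimal sketch of the idea (the class and method names are my own illustration, not an HP or OpenStack API):

```python
from abc import ABC, abstractmethod


class ComputeProvider(ABC):
    """Provider-neutral interface an application codes against."""

    @abstractmethod
    def launch(self, image: str, size: str) -> str:
        """Start an instance and return its identifier."""


class PrivateCloudProvider(ComputeProvider):
    def launch(self, image: str, size: str) -> str:
        # Call the internal private-cloud API here (illustrative stub).
        return f"private-{image}-{size}"


class PublicCloudProvider(ComputeProvider):
    def launch(self, image: str, size: str) -> str:
        # Call a public cloud provider's API here (illustrative stub).
        return f"public-{image}-{size}"


def deploy(provider: ComputeProvider) -> str:
    # The deployment logic is identical regardless of where it runs,
    # which is what keeps the service portable and reduces lock-in.
    return provider.launch(image="web-frontend", size="small")
```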
HP’s approach to addressing this area is called the Converged Cloud – providing the unconstrained access to IT resources that organizations require to fulfill their objectives. HP Converged Cloud provides access to “Infrastructure Anywhere,” “Applications Anywhere,” and “Information Anywhere.” Today’s announcements are just the start of a whole series of offerings and services we’ll be hearing more about over the coming weeks.
HP will deliver the HP Converged Cloud experience across four key customer scenarios – the typical journeys customers tackle on the way to fully embracing the cloud:
- Making it safe for corporate developers to unleash innovation for mobility and consumerization, leveraging public cloud infrastructure securely throughout the service lifecycle
- Cloud-enabling existing data centers beyond virtualization to include automation and full hybrid delivery
- Provisioning cloud services from infrastructure, application, network, information, and SaaS-centric standpoints
- Sourcing new virtual services from outside the enterprise that deliver information in the context of your enterprise, with that information consumed directly by the user, application, or business process
HP’s Converged Cloud will be underpinned by a single architecture built on proven, industry-leading Converged Infrastructure (Servers, Storage, Networking) and new Converged Management and Security software (Automation, Management, Security), combined with enterprise-class, hardened open source technology (OpenStack), to deliver an enterprise-class IT service delivery capability with the flexibility and choice the industry demands.
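Since OpenStack is named as the open-source foundation, here is a minimal sketch of what provisioning through an OpenStack-style API looks like. The endpoint, credentials, image, and flavor values below are placeholders I have assumed, and the token-then-boot flow shown is the generic OpenStack pattern rather than anything HP-specific:

```python
import requests

# Placeholder endpoint and credentials -- assumptions for illustration only.
KEYSTONE = "https://cloud.example.com:5000/v3"
USER, PASSWORD, PROJECT = "demo", "secret", "demo-project"


def get_token_and_compute_url():
    """Authenticate against Keystone (identity v3) and return a token plus
    the compute (Nova) endpoint from the service catalog."""
    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {"user": {"name": USER, "domain": {"id": "default"},
                                      "password": PASSWORD}},
            },
            "scope": {"project": {"name": PROJECT, "domain": {"id": "default"}}},
        }
    }
    resp = requests.post(f"{KEYSTONE}/auth/tokens", json=body, timeout=30)
    resp.raise_for_status()
    token = resp.headers["X-Subject-Token"]
    catalog = resp.json()["token"]["catalog"]
    compute = next(e for e in catalog if e["type"] == "compute")
    url = next(ep["url"] for ep in compute["endpoints"] if ep["interface"] == "public")
    return token, url


def boot_server(token, compute_url, name, image_ref, flavor_ref):
    """Boot a server through the Nova API -- the same call works against any
    OpenStack cloud, which is the portability point being made above."""
    body = {"server": {"name": name, "imageRef": image_ref, "flavorRef": flavor_ref}}
    resp = requests.post(f"{compute_url}/servers", json=body,
                         headers={"X-Auth-Token": token}, timeout=30)
    resp.raise_for_status()
    return resp.json()["server"]["id"]
```

Because the same identity and compute calls work against any OpenStack cloud, a workload provisioned this way is easier to move between private and public deployment models.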
If you are on Twitter, you might watch the tag #convcloud on Thursday starting at about 1 PM EDT; there is a Twitter chat scheduled. If I have 2¢ to add, I’ll chime in.