And maybe it’s the other way around too. HP continues to innovate with Virtual Connect: over 6.5 million ports have already been sold, and the rate of adoption is accelerating. Why? Because the benefits of implementing Virtual Connect in your infrastructure are evident. Customers in almost every line of business endorse Virtual Connect because of these benefits. To learn more about the simplicity and flexibility that Virtual Connect offers when connecting systems to LANs and SANs, visit Virtual Connect on HP.COM.
With Gartner just releasing its latest Magic Quadrant for blades, it seems appropriate to write a bit more about BladeSystem. We are both pleased and proud that Gartner placed HP in the Leaders Quadrant of the Magic Quadrant for blade servers, for both completeness of vision and ability to execute.
by Jim Jackson, About Converged Infrastructure
About This Post
Hi, I’m Jim Jackson. I’ve recently joined the HP Infrastructure Software and BladeSystem team as Vice President of Marketing, and I want to take this opportunity to introduce myself and briefly explain how our new strategy ‘converges’ virtualized compute, storage, and networks with facilities into a single shared-services environment to speed standardization, reduce operational costs, and accelerate business results.
About Converged Infrastructure
Last week HP unveiled its vision for the data center, called Converged Infrastructure. I’m sure you’re aware that HP BladeSystem is one of the key infrastructure components for building and optimizing your next-generation data center. Add Virtual Connect and Insight Software for networking and management, and you have the foundation our customers have been relying on for years to get better business results. Our approach works with your existing infrastructure investments and prepares you for what’s next. That’s what we mean when we say HP fits within your existing data center. What HP is delivering today is what our competitors and their bolted-on offerings only have on the drawing board.
What the Competition Doesn’t Want You to Know
I’ll be back later this week to explain more about how BladeSystem with Virtual Connect and Insight Software puts us ahead of the competition, hands-down. Let’s just say it’s what the newbies at the edge of the network don’t want you to know. In the meantime, we welcome your comments about how to deliver the converged infrastructure and what it means to you.
Last week at IDF, two Intel technologists spoke about different fixes to the problem of compute capacity outpacing the typical server's ability to handle it.
For the past five years, x86 CPU makers have boosted performance by adding more cores within the processor. That has enabled servers with ever-increasing CPU horsepower. RK Hiremane (speaking on "I/O Innovations for the Enterprise Cloud") says that I/O subsystems haven't kept pace with this processor capacity, moving the bottleneck for most applications from the CPU to the network and storage subsystems.
He gives the example of virtualized workloads. Quad-core processors can support the compute demands of a bunch of virtual machines. However, the typical server I/O subsystem (based on 1Gb Ethernet and SAS hard drives) gets overburdened by the I/O demand of all those virtual machines. He predicts an imminent evolution (or revolution) in server I/O to fix this problem.
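A quick back-of-the-envelope calculation shows why consolidation shifts the bottleneck to the network. The VM count and per-VM bandwidth figures below are illustrative assumptions, not numbers from the talk:

```python
def aggregate_io_mbps(num_vms, mbps_per_vm):
    """Total network demand if every VM is busy at once."""
    return num_vms * mbps_per_vm

GIGABIT_LINK_MBPS = 1000  # nominal ceiling of a 1Gb Ethernet NIC

# Assume a quad-core host packed with 12 VMs, each pushing ~150 Mb/s.
demand = aggregate_io_mbps(12, 150)
print(demand > GIGABIT_LINK_MBPS)  # True: the 1Gb link saturates before the CPU does
```

Even with conservative per-VM traffic, a dozen VMs can oversubscribe a single 1Gb link, which is exactly the gap that 10GbE and faster storage aim to close.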
Among other things, he suggests solid-state drives (SSDs) and 10 gigabit Ethernet will be elements of that (r)evolution. So will new virtualization techniques for network devices. (BTW, some of the changes he predicts are already being adopted on ProLiant server blades, like embedded 10GbE controllers with "carvable" Flex-10 NICs. Others, like solid-state drives, are now being widely adopted by many server makers.)
Hold on, said Anwar Ghuloum. The revolution that's needed is actually in programming, not hardware. There are still processor bottlenecks holding back performance; they stem from not making the shift in software to parallelism that x86 multi-core requires.
He cites five challenges to mastering parallel programming for x86 multi-core:
* Learning Curve (programmer skill sets)
* Readability (ability for one programmer to read & maintain another programmer's parallel code)
* Correctness (ability to prove a parallel algorithm generates the right results)
* Scalability (ability to scale beyond 2 and 4 cores to 16+)
* Portability (ability to run code on multiple processor families)
Anwar showed off one upcoming C++ library called Ct from RapidMind (now part of Intel) that's being built to help programmers solve these challenges. (Intel has a Beta program for this software, if you're interested.)
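To make the challenges above concrete, here is a minimal data-parallel sketch in Python (not Ct's actual API, which is a C++ library): the parallel version must produce exactly the serial result (correctness), and adding workers should not require rewriting the algorithm (scalability):

```python
from multiprocessing import Pool

def square(x):
    return x * x

def parallel_squares(values, workers=4):
    """Map `square` over `values` across `workers` processes.

    The serial equivalent is [square(v) for v in values]; the parallel
    version should give identical results (the correctness challenge)
    and speed up as cores are added (the scalability challenge).
    """
    with Pool(processes=workers) as pool:
        return pool.map(square, values)

if __name__ == "__main__":
    print(parallel_squares(list(range(5))))  # [0, 1, 4, 9, 16]
```

Libraries like Ct aim to raise the abstraction level so programmers express the *what* (a parallel map) and the runtime handles the *how* (scheduling across however many cores are present).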
To me, it's obvious that the "solution" is a mix of both. Server I/O subsystems must improve (and are improving), and ISVs are getting better at porting applications to scale with core count.
Today, we launched some new and updated technologies that I thought you might find useful. Here's the news in language as plain as it gets in IT.
Our Virtual Connect interconnect portfolio was updated in two ways.
- First, the Fibre Channel module was updated to 8Gb performance. That means you get more connections at higher speeds. The server-side NPIV feature also increased support to 255 WWNs (World Wide Names), so you can support more VMs per module. This also means lower network costs per virtual machine. We also published a new Virtual Connect Fibre Channel cookbook that most folks find very useful. Get all the technical tips, tricks, and best practices to help you set it up right and get the most out of Virtual Connect.
- Second, the Virtual Connect Flex-10 module now supports multi-enclosure stacking, which means you can connect up to 4 enclosures into one group and cut the number of Ethernet cables down to as few as two per rack. The number of total domains that can be managed together was also increased to 200 (or 800 with stacking). This makes it easier to manage more connections across a big environment in a single view.
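The cable savings from stacking are easy to see with a little arithmetic. The per-enclosure uplink count below is an illustrative assumption (two cables per uplink pair), not an HP-published figure:

```python
def uplink_cables(enclosures, cables_per_uplink=2, stacking=False):
    """Estimate Ethernet uplink cables for a rack of enclosures.

    Without stacking, each enclosure is assumed to need its own uplink
    pair; with stacking, groups of up to 4 enclosures share one pair.
    """
    if stacking:
        groups = -(-enclosures // 4)  # ceiling division: 4 enclosures per group
        return groups * cables_per_uplink
    return enclosures * cables_per_uplink

print(uplink_cables(4))                 # 8 cables, one pair per enclosure
print(uplink_cables(4, stacking=True))  # 2 cables for the whole rack
```

Under these assumptions, a fully stacked four-enclosure rack drops from eight uplink cables to two, matching the "as few as two per rack" claim above.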
The other news today was the update of our four-processor ProLiant BL685c blade server. Basically, the new G6 version doubles the supported memory per blade versus the older G5 version. That's 32 DIMM sockets and 256 GB of memory per blade. We made this move to remove a key bottleneck, memory performance, to take advantage of the new quad-core AMD processors, and to support more VMs or bigger applications per blade.
If you have more questions, leave them below and our experts will fill you in on all the technical details. In the future, visit this site to keep up to date with other blade news.