Around the Storage Block Blog
Find out about all things data storage from Around the Storage Block at HP Communities.

The original open cloud stack - HP CloudSystem

By Calvin Zito, @HPStorageGuy

EMC made an announcement earlier today around a product called VSPEX; I haven't had a lot of time to look at it, but basically it looks like they're working with the channel to offer reference architectures for a converged stack.  Of course it's all based on EMC storage, though they claim they have reference architectures that allow components other than VCE-based Cisco servers and networking. 

 

What got my attention was a tweet I saw from Dave Vellante.  Dave is the CEO and co-founder of Wikibon.org and co-host of The Cube.  I've got the conversation here - be sure to read from the bottom up.

 

I was responding to Dave to make sure that he knows that HP CloudSystem doesn't have to be based on HP servers, storage, and networking.  This is a huge part of our CloudSystem story so I thought if Dave didn't know about it, most of you probably don't either.

 

I reached out to Nick van der Zweep, Director of Business Strategy in our CloudSystem business to get the details.  I'll summarize a few things that Nick and I discussed:

 

  • HP CloudSystem DOES support multiple non-HP components.  Customers will get the most integration and automation by using a complete HP stack.  For example, if customers use non-HP storage, they lose the integration we have with Storage Provisioning Manager (SPM), which simplifies provisioning storage.  See my previous blog post on SPM, which includes a demo.  With non-HP storage, a storage admin would have to provision a LUN and hand it to the server admin, who can then use it within the CloudSystem environment.
  • You can use any standards-based networking gear that is supported by HP Virtual Connect.
  • The HP CloudSystem Matrix 7.0 Compatibility Chart is a very useful document to understand the requirements of all the components.  Chapter 4 specifically talks about the storage requirements.  Page 9 talks about support for Dell and IBM servers.
  • There's also a technical white paper, Understanding the HP CloudSystem Matrix Technology, that covers a lot of the details.  Nick tells me that this paper is based on v6.3 and an updated white paper for the current version, 7.0, is coming.
  • HP does test the non-HP components - our multi-vendor testing covers other vendors' server, storage, and networking hardware.  This is basic testing to ensure compatibility.

I don't know if HP was first with an open cloud computing approach - but I do know that other vendors' stacks (like vBlock and FlexPod) offer little if any flexibility.  I think the original vBlock only had 3 configurations, and no deviation from those configurations was allowed or you lost the "one throat to choke" support of VCE. 

 

The storage, server, and networking landscape is a very interesting place to be these days!

 

 

Labels: CloudSystem | EMC
Comments
Jeramiah Dooley(anon) | ‎04-13-2012 05:04 AM

[disclaimer, VCE employee]

 

Yes, in November of 2009 when the original Vblock platforms were announced, there were three options.  None of those have been sold since late 2010.  Thanks for bringing that up though.

 

The larger problem I have with your premise is the comparison of a product like Vblock to anything that HP is offering.  Saying to customers "Hey, if you use this server hardware along with block storage and some kind of networking, we'll be able to bundle up these dozen or so separate pieces of software that we acquired at different points and use them to provide some management and workflow orchestration" doesn't quite cut it.  In this way, the HP solution looks more like the recently released IBM offering: with enough software and professional services, I'm confident you'll both make anything the customer has work.

 

Customers want an infrastructure that is easy to acquire, that can be operational in 45 days, that doesn't ask them to compromise on the quality of the components, that is supported and managed as an object and that has the ability to meet the requirements of even the most critical mixed business workloads.  They want all this while asking their vendor to take as much of the risk as possible off the table, and today only the Vblock offers that. 

 

I think we all respect HP very much, and there are many people who work there (like yourself) that I've been happy to meet and share a meal with, but comparing anything that HP offers today to the Vblock is wishful thinking.  If you want to provide customers with an apples-to-apples comparison, I'd suggest using the VSPEX platform, the new IBM platform or even the Dell vStart platform.  I agree with you that the FlexPod is too rigid, even just as a reference architecture, to compete in the same places that you are selling today. 

 

The landscape is definitely interesting, and I wake up every day excited to see what's going to happen next!  I apologize for my mini-rant here, but I think it's important to position ALL of our competitive offerings correctly.  Thanks for reading.

Calvin Zito | 04-13-2012 06:57 AM

Jeramiah,

The premise of my blog post was a simple one: to clarify that HP CloudSystem has supported non-HP hardware from the beginning.  There's little to no comparison of what we're doing to vBlock, but if I were going to do that comparison, it would be with an HP VirtualSystem - and even Chuck has acknowledged that one.  I always appreciate all perspectives here, so thanks for dropping by.  BTW, since you left essentially the same comment twice, I only published the first one.

nate | ‎04-13-2012 07:14 AM

Was browsing through the compatibility chart - one thing that stood out to me is there's not much love for iSCSI or the P4000?  I mean iSCSI is supported but not integrated, and there's only a passing mention of the P4000 -- a product that HP has pimped very heavily in the past for VMware deployments (before 3PAR, of course).

 

Any idea when 3PAR will get round to supporting NPIV?  They didn't support it for the longest time; not sure if that has changed recently or not, and I don't see mention of it in the docs.

 

One solution I'd love to see is the ability to directly connect a Virtual Connect module in a blade chassis to a 3PAR system.  Eliminate the middle tier of switching altogether - look ma, no zoning!

 

I mean, think about it: up to 192 host ports on the V800?  Even if you go to 4 uplinks per chassis, that's 48 blade enclosures.  You can easily get 4TB of RAM in an enclosure these days, which comes to 192TB of memory.  With those 16-core Opterons that's 24,576 CPU cores, or 15,360 with 10-core Intel procs.

 

Say an average of 8GB per VM and that is nearly 25,000 virtual machines, or 50,000 if you average 4GB (some of my VMs are down as low as 256MB - I used to do 96MB on occasion when leveraging 32-bit Linux).  Meanwhile, a single VMware cluster only scales to 3,000 VMs or 32 servers... You see where I'm going here.  Just today I read an article about VMware Cloud Foundry in which VMware said Cloud Foundry was meant for "large scale clusters using hundreds of VMs".  And here I'm talking about tens of thousands of VMs...

 

With 24,576 cores and 50,000 VMs that's barely 2 VMs per CPU core on average; you're probably going to have tons of CPU left over.
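To sanity-check that arithmetic, here's a rough sketch in Python (the 32 CPU sockets per enclosure - e.g. 16 two-socket blades - is an assumption implied by those core counts, not something spelled out above):

```python
# Back-of-envelope sizing check for the V800 + blade enclosure scenario.
# Assumption: 32 CPU sockets per enclosure (e.g. 16 two-socket half-height blades).

V800_HOST_PORTS = 192          # max FC host ports quoted for the 3PAR V800
FC_UPLINKS_PER_ENCLOSURE = 4   # FC uplinks per blade enclosure
SOCKETS_PER_ENCLOSURE = 32
RAM_PER_ENCLOSURE_TB = 4

enclosures = V800_HOST_PORTS // FC_UPLINKS_PER_ENCLOSURE   # 48
total_ram_tb = enclosures * RAM_PER_ENCLOSURE_TB           # 192 TB
opteron_cores = enclosures * SOCKETS_PER_ENCLOSURE * 16    # 24,576 (16-core Opterons)
intel_cores = enclosures * SOCKETS_PER_ENCLOSURE * 10      # 15,360 (10-core Intel)

total_ram_gb = total_ram_tb * 1024                         # 196,608 GB
vms_8gb = total_ram_gb // 8                                # 24,576 VMs at 8 GB each
vms_4gb = total_ram_gb // 4                                # 49,152 VMs at 4 GB each

print(f"{enclosures} enclosures, {total_ram_tb} TB RAM, "
      f"{opteron_cores} Opteron cores / {intel_cores} Intel cores")
print(f"~{vms_8gb:,} VMs @ 8GB, ~{vms_4gb:,} VMs @ 4GB, "
      f"~{vms_4gb / opteron_cores:.1f} VMs per Opteron core")
```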

 

Hook your networking up to the *cough* world's largest (yet still very compact at 14U) cloud Ethernet switch (two of 'em, of course) - you could go nuts and run full-bore 16x10GbE uplinks per blade enclosure and still have room to play (768x10GbE per switch with 20Tbps of N+1 switching fabric).  With 16x10GbE per chassis you could probably turn on that mode on the Virtual Connect (forget what it's called) where it doesn't do any local "switching" and instead sends inter-server communication up to the upstream switch.  Toss in some M-LAG and you'd be able to maintain just an absolutely massive amount of bandwidth between the blades and switches.  Or you could use fewer ports and scale out to more enclosures, whatever.
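The network-side port budget lines up with the same 48-enclosure figure (a minimal sketch using the switch and uplink counts mentioned above):

```python
# Rough 10GbE port-budget check for the big-switch scenario.
SWITCH_10GBE_PORTS = 768        # 10GbE ports per "cloud Ethernet switch"
UPLINKS_PER_ENCLOSURE = 16      # 10GbE uplinks per blade enclosure

enclosures_per_switch = SWITCH_10GBE_PORTS // UPLINKS_PER_ENCLOSURE   # 48
uplink_gbps_per_enclosure = UPLINKS_PER_ENCLOSURE * 10                # 160 Gb/s

print(f"{enclosures_per_switch} enclosures per switch, "
      f"{uplink_gbps_per_enclosure} Gb/s of uplink bandwidth each")
```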

 

But since the **bleep** (bleep? really?) Virtual Connect modules rely on NPIV - and because (last I checked) 3PAR does not support NPIV - such connectivity is not possible.

 

That doesn't stop me from dreaming though.

 

I was shocked, SHOCKED mind you, when I was talking to Compellent recently and they said they do not support direct-connect Fibre Channel on their arrays and never have.  I about shed a tear on the spot.

 

Is it HP's intention to work with 3rd parties or standards organizations to help CloudSystem better support 3rd-party options?  While the doc clearly lists things like Dell and IBM servers - as well as protocols other than FC - it seems there are a lot of, shall we say, warnings around those things, saying the system isn't fully integrated with those technologies (even, say, iSCSI on a 3PAR platform - example only - I can't imagine anyone building a cloud who chose 3PAR would decide iSCSI was their protocol of choice!).

 

I wonder when the FlexFabric modules will be extended to support 40GbE uplinks - I mean, you guys have the InfiniBand modules already that do 40Gbit, and you have that massive 5Tbit backplane on the blade enclosure.  40GbE is here and strong and really made for uplinks.  And it's backwards compatible, so you can use breakout cables and get 4x10GbE out of each 40GbE port - a double win for customers.

 

jdooley_clt | ‎04-13-2012 03:10 PM

In that case, I agree. :-) I'm actually surprised at how much non-HP hardware is included explicitly in the CloudSystem documents.

 

Thanks for cleaning up the double-post, it was the forced registration that got me!

Calvin Zito | 04-13-2012 06:34 PM

@Nate - I don't have answers to most of your questions, but I've asked a couple of experts (like the 3PAR product manager) to take a look.  It's obvious you are very familiar with Virtual Connect and 3PAR - we have some very exciting items on the roadmap.  You should talk to your rep and have them share more under an NDA; I'm sure you understand that we don't want to spill the beans in a public blog.  When I get answers to the NPIV question and anything else I can respond to publicly, I'll let you know here.

@Jeramiah - And I'll tell you that every time someone on the team mentions that we support 3rd-party storage like EMC and NetApp, I jump in and add, "we also support 3rd-party servers with HP storage."  I hope the team that manages our blog backend makes the process of leaving a comment a lot easier (for example, by letting you use an existing ID like Twitter), but I also know you can always leave an anonymous comment without signing in to our platform.  Take it easy and have a good weekend! 

Ganar Dinero por Internet Encuestas(anon) | ‎05-17-2012 11:58 PM


 

Customers want an infrastructure that is easy to acquire, that can be operational in 45 days, that doesn't ask them to compromise on the quality of the components, that is supported and managed as an object and that has the ability to meet the requirements of even the most critical mixed business workloads.  They want all this while asking their vendor to take as much of the risk as possible off the table, and today only the Vblock offers that.

http//www.tecniinox.com

NickVanDerZweep | 05-18-2012 04:50 PM - edited 05-19-2012 06:08 PM

Ganar - it takes 45 days to get a vblock up and running?  Ouch.  You can get a CloudSystem up a lot faster.  In fact, not only can you get it up and running, but you get a full cloud solution - integrated into the customer's backup systems, with their service catalog populated, and more - all in 30 days or less.

 

With vblock, all you are getting is a few hardware and element managers - from multiple vendors - and installation by a 3rd party. AND you are not getting any cloud software which you still have to purchase and set up separately.  Sounds like you get less and it takes longer.

 

Vblock DOES ask you to compromise!  You are required to have EMC storage, Cisco blades, Cisco servers, and VMware – no deviation from that.  HP BladeSystem is the #1 blade offering in the world and vblock does not support it! Check out the Gartner magic quadrant if you need proof of that.  Many companies have standardized on what they know is best of breed including ProLiant, BladeSystem, 3PAR, etc.   However, if those same companies want to use EMC storage or Cisco networking we support that.  Thus HP CloudSystem supports best of breed as defined by the customer, not a vendor/integrator.  No compromise.

 

When you get an all-HP version of CloudSystem, it is deeply integrated.  When you get one with some non-HP servers, storage, or networking, you give up a few features, but the resulting solution still dwarfs the capabilities and integration of any other offering in the industry. 

 

HP is known the world over as having extremely strong alliances.  Let’s take VMware for example - more VMware lands on HP servers than any other platform in the industry.  Why is this?  Well because we have done more integration with our hardware and management software than anyone else in the industry. 

 

Another example of our deep integration and alliances is HP Networking and our IMC management solution.  It is a single tool to manage the entire network, whereas Cisco asks you to buy and install 31 separate tools to manage a network.  And IMC manages not just HP networking but also Juniper, Brocade, Cisco, etc. - close to 6,000 devices, including 1,400+ Cisco devices.  Your premise that HP buys companies and isn't integrated is wrong.
