Eye on Blades Blog: Trends in Infrastructure
Get HP BladeSystem news, upcoming event information, technology trends, and product information to stay up to date with what is happening in the world of blades.

HP BladeSystem Matrix and UCS - Apples and Oranges

Recently, a Cisco reseller blogged about the UCS platform, comparing it to HP. The post contained incorrect and misleading information, so I wanted to respond and set the record straight.


The blogger attempts to compare the Cisco UCS blade offering to HP BladeSystem Matrix, which is an apples-to-oranges comparison. The accurate comparison would be UCS blades against HP BladeSystem itself. HP BladeSystem on its own is the superior platform for data center virtualization. BladeSystem is more scalable in compute, memory and IO, and has advanced Insight Control management capabilities, including physical/virtual server management and advanced power management, not found in UCS. BladeSystem is also more compatible with existing datacenters than the UCS offering. BladeSystem is the go-to platform for virtualization, and thanks to our scalability, systems management, power management and wide range of partnerships, its sales dwarf those of UCS:

  • 56% Blade server revenue market share and growing
  • 2 million HP blade servers sold
  • 24% of all 10Gb ports sold in the industry are on BladeSystem with Virtual Connect
             - source: Dell'Oro Group
  • 3 Million Virtual Connect ports sold
             - Contrast this with Nexus, where Cisco says they have only sold a total of 1M 10Gb ports


And HP is extending its lead. With our latest announcements, HP BladeSystem can support up to 4x more virtual machines per blade than any competing offering in the industry, and with HP Virtual Connect FlexFabric we provide the world's simplest, most flexible way to connect servers to any network while reducing network sprawl by 95%. We announced 7 new G7 blades, ranging from our BL2x220c HPC blade with built-in InfiniBand and Virtual Connect Flex-10 to our BL680c terabyte blade with built-in 60Gb of Virtual Connect FlexFabric IO bandwidth and the ability to expand to 192Gb of IO. These new blades are driving new levels of virtualization efficiency. In contrast, Cisco's newest blade, the UCS B440 M1, can only support a maximum of 256GB of memory and 40Gb of IO, which significantly limits its ability to support virtualized environments.


With respect to HP BladeSystem Matrix, its value has no equivalent in the UCS offering. BladeSystem Matrix is a highly-automated and specialized management software environment that spans the server to the network fabric to the application. BladeSystem Matrix brings functionality to the table that is simply not included with UCS such as:

  • A self-service portal able to deploy entire application stacks in one automated operation, i.e. all the infrastructure (physical servers, virtual machines, networks, storage) and now even applications.
  • Capacity Planning and Consolidation tools
  • Disaster recovery capabilities allowing applications and storage to be failed over to another datacenter over any distance in the case of a site failure.


Due to its advanced capabilities, HP customers are leveraging BladeSystem Matrix as the backbone for their cloud implementations. ISVs are also investing in this technology and have provided best practice BladeSystem Matrix deployment templates to allow for simple pushbutton deployment of all the infrastructure for their applications. And, we now enable private cloud automation with the industry's first one-touch, self-service provisioning of infrastructure and applications through integration with HP Server Automation.


It is unfortunate that this misinformation has surfaced again. Some months ago HP offered this blogger access to HP experts so that he could better understand the competitive landscape and correct his inaccurate comparisons and misinformation, but he elected not to take us up on our offer.  If he would look closer, he would see that none of the UCS features he touts in his comparison table require the advanced Insight Dynamics management or services that come with BladeSystem Matrix. 


For more info, you can read The Real Story about the Cisco UCS.


I look forward to hearing your thoughts on this topic.

Joe Onisick | ‎07-22-2010 10:20 PM

Let me begin with full disclosure:  I work for an HP and Cisco partner and my role focuses on Unified Computing architectures in all shapes, forms and flavors including UCS, SMT, vBlock, HP C-class and Matrix.


The author of the blog you are referring to also works for both a Cisco and HP reseller, from what I understand. Additionally, he clearly states why he compared UCS directly against Matrix: because the industry continues to do so, and HP continues to do so when convenient for spreading FUD. I can point you to any number of HP documents that switch between comparing Matrix and c-Class against UCS and/or vBlock, whichever works better for the HP point. This post is nothing more than stepping onto a soapbox to tout the benefits you perceive in the HP Matrix system, the only one of which you back with fact is server market share. HP has more server share than Cisco, really? Congratulations, how long have you been selling servers? This type of unbacked writing-off of Cisco is exactly why Nortel is near death and Brocade can't find a buyer.


While I'd love to refute the technical statements you make without providing data or evidence, I don't feel the need; this article will serve those who've already chosen HP, and will have no effect on the rest.


By the way, how many Matrix systems have actually been sold (let's not count the ones sitting in partner demo labs like mine)?

NickVanDerZweep | ‎07-23-2010 04:21 PM

Thanks for the comments on my post Joe.


I am certainly not writing Cisco off; we welcome the competition in the market. I am just requesting that we have an apples-to-apples comparison. It is HP's position that an apples-to-apples comparison is BladeSystem vs. UCS. The only time HP should bring HP BladeSystem Matrix into a conversation such as this is when the situation or customer need requires 'oranges instead of apples'. An example would be when the situation requires a self-service portal, self-service infrastructure, capacity planning, and disaster recovery.


Let's delve further into the facts that I presented in my blog. Let's take the BL680c G7 blade. It allows for up to 1TB of memory and has built-in 60Gb of IO that can be used for FlexNICs and/or FlexHBAs. Plus, the IO can easily be expanded to 192Gb. In fact, all of our G7 server blades come with a minimum of 20Gb of IO bandwidth and are easily expandable to much more. Contrast this to UCS blades, which come with no integrated IO and at best 40Gb. With a blade like the BL680c we can host a huge number of virtual machines per blade, reducing hardware requirements by up to 73% and software licensing costs by up to 4x compared to other blade offerings in the marketplace.


Another way to look at this is the number of VMs we can support per rack in the datacenter. Leveraging recommendations from industry analysts and HP best practices, the basic guideline we use is that each VM should be allocated 300Mb of IO and 3GB of memory. Using these guidelines, you will see that HP BladeSystem with BL680c G7 blades can host over 4200 VMs per rack. Contrast this to the UCS B440 M1, which can only handle 1600 due to limited memory and IO.
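The guideline arithmetic above can be sketched per blade (a rough illustration only; the per-rack totals quoted depend on additional enclosure and rack density assumptions not spelled out in the post):

```python
# Per-VM guideline quoted in the post: 3 GB of memory and 300 Mb of IO per VM.
VM_MEM_GB = 3
VM_IO_MB = 300  # megabits

def vms_per_blade(mem_gb, io_gb):
    # A blade's VM count is capped by whichever resource runs out first.
    by_memory = mem_gb // VM_MEM_GB
    by_io = (io_gb * 1000) // VM_IO_MB
    return min(by_memory, by_io)

# HP BL680c G7: up to 1 TB memory, IO expandable to 192 Gb -> memory-bound.
print(vms_per_blade(1024, 192))  # 341 VMs per blade
# Cisco UCS B440 M1: 256 GB memory max, 40 Gb of IO -> memory-bound.
print(vms_per_blade(256, 40))    # 85 VMs per blade
```

Both blades end up memory-bound under this guideline, which is why the memory ceilings dominate the per-rack comparison.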


This shows the architectural thought that was put into HP BladeSystem.  Virtualization has driven up the memory and bandwidth needs of blades and blade enclosures and we architected HP BladeSystem to address this. It is built for now and the future.  We do not have to deal with limited memory and limited bandwidth constraints.


As I re-read my original post, I can see your point about some of the stats I mentioned. For example, I should have given some context behind quoting our Virtual Connect and 10Gb port statistics. Those stats show that BladeSystem is much more than a server offering and demonstrate its network acceptance. HP BladeSystem is a showcase of HP Converged Infrastructure. People think of HP BladeSystem as a server platform, but it is so much more: with it we bring together servers, networking, storage, power/cooling and management.


Let's continue the discussion.

Joe Onisick | ‎07-23-2010 08:45 PM

I 100% agree on fair comparison: UCS directly compares with HP BladeSystem, and Matrix compares with SMT or vBlock. The real issue is keeping the comparison there, and there are still plenty of people flip-flopping on that. I appreciate the response and greater technical detail; that makes a much more compelling argument with some meat behind it.


Your VM numbers tell a great story for Matrix, but they're also based on one set of rules, and VMware sizing varies with deployment. Additionally, we have to remember that the majority of customers aren't going to be out there maxing out 1TB of memory on a blade, any more than they're typically stacking 384GB on a UCS B250. I'm not even sure I could hand a customer a BOM with 1TB of memory per server without getting laughed out of the office. What are we looking at there, 300K in memory alone?


Would love to see your apples to apples take on Matrix vs. vBlock.



Arny | ‎07-25-2010 10:23 PM

While some think of HP Matrix versus UCS as apples vs. oranges, many feel the same way about HP blades vs. Cisco's blades: an old blade system trying to stay in competition vs. a brand new blade system... like old apples vs. apple pie. A Cisco employee recently blogged about the difference as a counter to the article you reference above, The Truth About Cisco UCS. Here's the link. I would like to know your thoughts on his perspective.


Steve Kaplan | ‎07-26-2010 01:42 PM

I am the blogger mentioned in your post. As Joe Onisick said in his response, I address the comparison issue right at the start, and continue to feel it is 100% valid. In fact, unbeknown to me, searchdatacenter.com ran an article the day before my publication leading off with a statement comparing the two products.


In terms of access to HP experts, my only point of contact at HP other than Gary Thome was Jason Treu.  While Gary Thome personally and graciously took the time to speak with me after my first post, I know how busy he is and did not feel it appropriate to reach out to him directly. I instead contacted Jason Treu again per the following email.  I received no response.


From: Steve Kaplan
Sent: Friday, June 11, 2010 6:41 AM
To: jason.treu@hp.com
Subject: Questions for HP


Hi Jason,


Gary asked that I bring any questions to HP.  I am planning to write an updated post on UCS vs Matrix, and have the following questions:


1) Ballpark # of Matrix customers

2) Reference list of 3 customers to call

3) Any information on the upcoming channel program for authorized channel implementation of Matrix

4) Any other relevant updates/capabilities about Blade Server Matrix.





[contact info. including cell & office phone #'s]


ken.henault | ‎07-26-2010 02:01 PM



While Sean's blog was very funny, I feel he picked the wrong Ford to use in his metaphor.  I think an Edsel might have been more appropriate.  Then he could have touted the UCS automobile's innovations like the push button transmission and self adjusting brakes.


UCS has some unique features that look cool in a marketing brochure, just like the Edsel did.  But at the end of the day, these features are a distinction without a difference.




NickVanDerZweep | ‎07-26-2010 05:52 PM



Thanks for your note.  Unfortunately Jason Treu has left HP so that would explain why you did not receive a reply.  


Gary made the offer to work with you so that you had the details as to why we feel the proper comparison is HP BladeSystem and UCS.   That offer still stands. All of the UCS capabilities you call out in your blog are easily addressed by HP BladeSystem alone. Plus, HP BladeSystem has many features that make it stand out ahead of UCS.  Joe and I have been having a good discussion on a few of these in this thread.




P.S. I notice that the questions sent to Jason were all about BladeSystem Matrix. I urge you to take us up on our offer. I am confident that when you compare UCS to HP BladeSystem you will be impressed by what we bring to the table.

NickVanDerZweep | ‎07-26-2010 05:53 PM



I agree. Apples to apples is best; otherwise things get very confusing.


Hey - I don’t think you would be laughed out of the office by your customer if you showed up with a  proposal for a 1TB blade configuration. Let me explain.


Let's assume that a customer needs to host 1000 VMs and that each VM needs 3GB of memory and 300Mb of IO. Yes, these numbers vary; for instance, some of our customers require configurations with IO redundancy, in which case each VM would need 3GB of memory and 600Mb of IO. If we do the math, the customer will need to purchase a total of 3TB of memory and 300Gb of IO to host this environment (600Gb for an IO-redundant configuration).

   - With a UCS configuration the customer would need to buy two UCS enclosures and 8 UCS B250 blades each with 384GB of memory and 40Gb of IO.

         - for a redundant config four enclosures and 15 blades are required due to UCS blades having 40Gb max IO

   - With HP BladeSystem the customer can purchase a single HP BladeSystem enclosure with 3 BL680c blades each with 1TB of memory and 100Gb of IO.

         - for a redundant IO config, they can simply add more IO to the three BL680c blades


As you can see, I don't think the customer would balk at 1TB of memory in a blade, because the total amount of memory is the same in each case: each configuration requires 3TB. However, the UCS configurations are a lot more complex than the HP BladeSystem configuration.
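Under the stated assumptions, the sizing example above reduces to simple ceiling arithmetic (a back-of-envelope sketch; real sizing would also account for CPU, redundancy policy, and enclosure slot limits):

```python
import math

VMS = 1000
MEM_PER_VM_GB = 3     # 3 GB of memory per VM
IO_PER_VM_MB = 300    # 300 Mb of IO per VM (non-redundant)

total_mem_gb = VMS * MEM_PER_VM_GB        # 3000 GB, the ~3 TB quoted above
total_io_gb = VMS * IO_PER_VM_MB / 1000   # 300 Gb

def blades_needed(mem_per_blade_gb, io_per_blade_gb, io_gb=total_io_gb):
    # Enough blades to satisfy BOTH the memory total and the IO total.
    return max(math.ceil(total_mem_gb / mem_per_blade_gb),
               math.ceil(io_gb / io_per_blade_gb))

print(blades_needed(384, 40))             # UCS B250 (384 GB, 40 Gb): 8 blades
print(blades_needed(384, 40, io_gb=600))  # with redundant IO: 15 blades
print(blades_needed(1024, 100))           # BL680c (1 TB, 100 Gb): 3 blades
```

The blade counts match the configurations quoted in the thread: the redundant UCS case is IO-bound at 40Gb per blade, while the BL680c case stays balanced at 3 blades.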


Where customers tell me they have a problem with high memory x86 servers and blades is that they are scared that they are putting too many eggs into one basket. They worry that a single memory error will take down a much larger number of workloads because they have crammed so many VMs onto a single server.


With HP, we are addressing this concern. We recently announced that our G7 servers can isolate memory errors so that only the VM or application using the problem memory fails and restarts. That way, if you have 100 VMs on a server and you get a memory error, only the one VM using that failed memory has to restart. Our customers like the idea of large-memory servers; they have just told us loudly and clearly that this is a critical item to address. In fact, we get so much demand for large x86 servers that we have gone even further than 1TB and created the ProLiant DL980, which can handle up to 2TB of memory per server.


It is hard to beat HP BladeSystem. We have huge experience across servers, networking, storage, power and management, and we have applied it to the BladeSystem architecture.


With respect to your suggestion to have a more detailed discussion around HP BladeSystem Matrix: that sounds like a good idea, but I think we should do it in another thread. Let's keep this thread focused on the apples-to-apples comparison of HP BladeSystem and UCS.


This is a good conversation. Let’s keep it going.


Nick van der Zweep


P.S. In your reply you said "Your VM numbers tell a great story for Matrix". I think you meant to say "Your VM numbers tell a great story for HP BladeSystem", right? The BL680c example you commented on is an HP BladeSystem example.

Joe Onisick | ‎07-26-2010 06:48 PM



My mistake on the Matrix mention, you're right I meant BladeSystem. 


Let's take your example of the UCS and BladeSystem configs, both with 3TB of memory, and leave all other assumptions. Let's also assume I want the advanced management provided by the additional hardware and licensing of Virtual Connect in order to gain some of the management flexibility provided by default on UCS systems. At this point I'm going to start hitting a different set of limitations on the HP BladeSystem, and these are network-based.


By choosing Virtual Connect hardware to gain management flexibility, I've greatly limited the number of VLANs I can use, which will be a serious consideration with 1000s of VMs. Additionally, I will have networking and bandwidth constraints moving traffic from the VC modules to my access or aggregation layers, due to the way in which VC handles loop prevention. My design becomes very complex to ensure proper north/south traffic. I will end up having several blocked 10G links throughout the system, as well as multi-hop north/south traffic as I scale past the first chassis.


Additionally, I'll be required to utilize separate LAN/SAN switches to connect to FC or FCoE disk today. Once FlexFabric is shipping, I'll be forced to split SAN/LAN at the per-chassis level, increasing my upstream port cost and switch density requirements. To add to this issue, the FlexFabric switch is an OEM'd QLogic switch, which immediately adds SAN interoperability concerns as well as management and feature disparity between my HP chassis SAN management and my existing SAN (most likely Brocade or Cisco MDS).


As I scale, I'll also be without the flexibility and risk mitigation provided by UCS firmware management, resource pools and templates. I'll additionally have multiple layers of network management (traditional LAN, VC, VMware), which UCS collapses onto a single platform for fluid compute resource deployment.


The dynamic flexibility and management will be a sweet spot with UCS for more customers overall than the max bandwidth and memory provided by the HP system.



ken.henault | ‎07-26-2010 08:21 PM



You make it sound like having options other than Virtual Connect is a bad thing. When creating a general-purpose architecture to fit a large number of use cases, it is impossible to create a one-size-fits-all design. With a choice of IO modules for the blade enclosure, it's easier to design a solution for a broader range of use cases. Virtual Connect is not "additional hardware": with any blade solution, some type of I/O option is required within the blade enclosure. Looking across the industry, HP offers the broadest range of options, and Cisco the narrowest.


As for uplink ports, I don't think UCS is going to save you any. The minimum number of uplinks required to connect a rack full of c-Class blades to a LAN and SAN is one Ethernet and four Fibre Channel, then double that for redundancy. That's 64Gb of SAN and 10Gb of LAN for 64 servers (128 servers if using double-dense blades). I'm not sure I'd want to run my servers with much less than that.


As for VLANs, my understanding is that UCS only supports 242 VLANs  (https://supportforums.cisco.com/docs/DOC-5977).  Virtual Connect can be configured to support up to 320 VLANs per module.


When it comes to flexibility and risk mitigation, a UCS domain maxes out at 320 servers (ignoring the fact that current firmware doesn't support 40 enclosures).  By adding Virtual Connect Enterprise manager you can manage up to 16,000 servers in a single Domain Group. 


All of these big differentiators for UCS sound great until you go to build a working environment. When you start bolting things together in the real world, they don't give you the advantage you'd expect. Add in the security of buying from a company with decades of server experience and the broadest portfolio in the blade business, and HP becomes an easy choice.






Jeff Allen | ‎07-26-2010 11:39 PM

(Disclaimer: I work for Cisco as an engineer focused on UCS. My thoughts and opinions here are mine and not Cisco’s)


Hey Ken - hope you're doing well. I wanted to point out that UCS supports 512 vlans today, not 242. For more details, see this page: http://www.cisco.com/en/US/prod/collateral/ps10265/ps10276/data_sheet_c78-524724.html


As far as expansion beyond 320 servers, you mentioned that VCEM offers this. That's an additional per-enclosure cost and an additional product. I think Steve's point was that he wants to compare UCS vs. BladeSystem without add-ons. If we wanted to compare add-ons, I would throw our extra-cost BladeLogic software stack on there and match your quoted BladeSystem numbers.

Also, to manage 16,000 servers in one domain group with VCEM, the customer would have to make certain that every enclosure had matching cable counts for both Ethernet and FC (the exact physical Ethernet uplinks, physical FC uplinks, server mezzanines in the same slots, same Virtual Connect modules in the same locations, etc.). So not only do the bandwidth requirements have to be identical for all 16,000 servers, the customer is also required to install Virtual Connect FC modules in every chassis, even if they don't need it on every chassis. The chances of having 112 servers that meet that requirement are slim, much less 16,000. So the customer would end up with multiple domain groups to get the job done and be unable to move VC profiles between those domain groups, meaning multiple "spare" blades would also be required in each domain group (which increases CAPEX costs).

Also, VCEM is only able to move the Server Profile and its ~12 contained attributes. It doesn't allow me to actually manage these 16,000 servers: for example, VCEM will not allow me to add VLANs or FC fabrics, upgrade firmware, configure port mirroring, view port stats, manage iLO IP addresses, configure uplink port speed, configure stacking links, or even power servers on/off.


You can think of UCS (without any additional software or hardware) as providing all the functions of HP's Bladesystem Onboard Administrator, HP's Virtual Connect (Ethernet and FC), HP's Virtual Connect Enterprise Manager, and all of the functions of SIM (that anyone cares about) – in one nice package without added hardware and software costs to the customer.


Lastly, UCS would certainly cut cable count. As you pointed out, today's UCS fw doesn't support the full number of blade chassis. So let's compare what works today - up to 112 servers in 14 chassis. In a minimal (redundant) config, this would require the following uplinks:

2 Ethernet uplinks for Data

2 Ethernet uplinks for management

2 FC uplinks


And I don't have to change this as more chassis are added in the future. Now, whether a customer would want to "limit their bandwidth" in order to "expand their scalability" in this manner is their choice. If they want more bandwidth, they simply add more cables. But the question was if UCS can save uplink ports - and the answer is yes – quite a few actually.


Because of the “mini-rack” architecture in Bladesystem (along with Dell and IBM), to do this same config with Bladesystem would require over 5 times more uplink ports. Assuming stacking enclosures with VC, this would mean 7 enclosures, so I'd need 2 "stacks" requiring:

4 Ethernet Uplinks for Data

14 Ethernet Uplinks for management (OA)

14 FC uplinks 


As you can see, UCS gives the customer the choice to cut the cable count down on uplink ports quite a bit.
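The cable-count claim reduces to per-enclosure arithmetic using the figures given in this comment (a sketch only; a real design would size the data uplinks for bandwidth, not just connectivity):

```python
# 112 servers: 14 UCS chassis behind one fabric interconnect pair, vs.
# 7 c-Class enclosures (16 half-height blades each) in 2 VC stacks.
ucs_uplinks = 2 + 2 + 2            # data Ethernet + mgmt Ethernet + FC
enclosures, vc_stacks = 7, 2
bladesystem_uplinks = (2 * vc_stacks     # data Ethernet uplinks per stack
                       + 2 * enclosures  # OA management links per enclosure
                       + 2 * enclosures) # FC uplinks per enclosure
print(ucs_uplinks, bladesystem_uplinks)  # 6 vs. 32, i.e. over 5x more
```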


Hope this clears up any confusion.


Joe Onisick | ‎07-27-2010 09:15 PM



In my last comment I was stating the use of Virtual Connect in order to receive some of the management capabilities that UCS has built in. While VC isn't 'additional' hardware, it does have hardware requirements, which means if I've chosen another option in the past and want the reduced management overhead of VC, I'll have to buy new modules.


Additionally, VC domains only use one active uplink to the aggregation/core per VLAN. This can be a port or a port aggregation group/port-channel, but only one is active; the other remains blocked to prevent loops. In the UCS system with default/recommended settings there are no blocked ports up or down from the interconnects. This means fewer cables and ports have to be used to obtain optimal bandwidth.


As far as blade chassis I/O options are concerned, yes, it's a good thing to have several options. The difference is that with an HP system I have to select different hardware depending on the options I want (Cisco, HP, VC, FlexFabric). Within UCS, the same I/O module in the back of the chassis will support me whether I want Ethernet, FCoE, iSCSI, NFS, etc. It also does this using industry standards to provide flexible, tunable bandwidth to each protocol/application: guarantees without capping.


Lastly, you are correct that the max scalability of a UCS system is 40 chassis under the current architecture. This is further limited by firmware, which currently supports 10. Beyond the firmware, real-world use cases will typically maintain 10-20 chassis max due to port count. Using a full port/bandwidth configuration, that leaves 80 servers under a single UCS management domain. In order to scale beyond that point without adding management, the customer has the option to use any best-of-breed management platform they choose: BladeLogic, CA, or others. It would even tie into HP OpenView or IBM Director through the open XML API.


With HP Virtual Connect, if I want to scale a single management domain beyond 4 chassis (64 servers), I require VCEM, which is a separate server, license and platform, no different from scaling up with BladeLogic or CA except that it is also HP branded.


The real reason UCS continues to get placed into an apples-to-oranges comparison with Matrix is that it adds functionality and management far beyond what HP BladeSystem can do on its own without specific VC hardware and licensing. For the majority of customers I work with, management is their major issue; keeping control of an enterprise data center is no small task, especially as virtualization density grows. This is one of the major benefits of UCS that resonates well in all environments: resources can be deployed using repeatable processes on the fly from a single point of management, and reporting, logging, RBAC, etc. are all contained in one place.

NickVanDerZweep ‎07-28-2010 01:55 AM - edited ‎07-28-2010 02:05 AM

Looks like we are getting some good interactions on this thread,


The more I think about it, the more I see UCS network configurations becoming very complex when you put them in the context of production application needs. This is because any serious UCS configuration will require the enclosure to use 8 cables connecting the FEX modules to the fabric interconnects. This means a lot of ports and FIs will need to be purchased and managed. It seems to me that the UCS design of only allowing 80Gb of concurrent IO per enclosure is problematic. I think customers that do go the route of using UCS are going to find themselves putting a pair of FIs at the top of every single UCS rack to handle the production IO needs (and cable count) of the enclosures in that rack, and no more. This certainly will drive up network complexity.


As this conversation has progressed, I also see that I messed up in my previous 3TB/300Gb example. I mistakenly assumed that the UCS configuration could handle the requirement of 300Gb of active IO with 2 enclosures. One would need to spread the UCS blades over four enclosures with 32 cables (or more) to achieve the required 300Gb of concurrent IO bandwidth. Things are getting very complex even with this example.
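The corrected figure follows from the 8-cable FEX-to-FI limit (a sketch, assuming the 80Gb-per-enclosure ceiling described above):

```python
import math

REQUIRED_IO_GB = 300               # concurrent IO from the 1000-VM example
CABLES_PER_ENCLOSURE = 8           # FEX-to-FI links, 10 Gb each
IO_PER_ENCLOSURE_GB = CABLES_PER_ENCLOSURE * 10  # 80 Gb max per enclosure

enclosures = math.ceil(REQUIRED_IO_GB / IO_PER_ENCLOSURE_GB)  # 4 enclosures
cables = enclosures * CABLES_PER_ENCLOSURE                    # 32 cables
print(enclosures, cables)
```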


By the way, if you do run into a customer who doesn't feel good about a 1TB blade configuration, that is OK. You could still propose a single HP BladeSystem enclosure with 12 BL465c G7 blades (each can handle 256GB of memory and 60Gb of IO). A nice, simple, elegant solution that easily handles the 3TB/300Gb requirements with room to spare, and without the cabling required by the UCS configuration above.


Manageability is also something that you called out as important. I agree. I wanted to point out that manageability is an area where we get rave reviews from our customers.


First, let's look at Virtual Connect. Our customers have been very happy with how we have drastically simplified adds, moves and changes in the datacenter via Virtual Connect. In the past, every change required multiple people to be involved to update servers, LANs, SANs, etc. Virtual Connect profiles simplify things dramatically by making server personalities movable. As a result, it is simple for one administrator to move a VC profile from one blade to another, thus logically moving that server from one place to another. For instance, let's say you have an Oracle database installed bare metal on a blade, and you want to move that database to another blade in a different enclosure and rack in your data center. All the administrator needs to do is move the profile to a different blade/enclosure, and voilà, it is done. Quick, easy and a one-person job, versus involving many people, and it scales to groups of up to 16,000 blades.


Secondly, the standard HP BladeSystem SKU comes with Insight Control management, which brings unparalleled integrated management value to customers. A few of the capabilities of Insight Control include:

   - deep integration with Microsoft System Center

   - deep integration with VMware vCenter

   - X2X migrations (Physical to Virtual, Virtual to Virtual, and Virtual to Physical)

   - High Availability

        - pre-failure evacuation via triggering of VMware vMotion (or Hyper-V Live Migration) before a server fails

        - auto-replacement of failed blades via automatic movement of the Virtual Connect profile

   - Operating system deployment, e.g. click on a server or a blade and automatically deploy an OS image.

   - Remote console, e.g. remote access to the graphic console, DVR-like recording of the console, remote media, etc.

   - Remote support, e.g. remote diagnosis of problems (i.e. at HP HQ), automatic dispatch of HP service personnel in the case of a failure, etc.

   - Power Capping

   - Discovery, health monitoring

   - and more


It is hard to beat the capabilities you get with HP BladeSystem.



ken.henault ‎07-28-2010 03:46 AM - edited ‎07-28-2010 04:09 AM



I hope you find time to have fun in Hawaii, and good luck with your presentation.


From my point of view, it's Cisco that requires extra hardware and licensing. UCS requires a pair of external UCS 61xx fabric interconnects, plus cables and licensing if you want to use all the ports. With c-Class and Virtual Connect, everything is in the enclosure, and Virtual Connect doesn't have any additional licensing requirements. I have a real hard time seeing how you could think Cisco has an advantage on these two points. Looking at the rack or the Bill of Materials, it will be readily apparent that it's UCS that has the extra components.


I didn't want to make the point, but I agree with your assessment that the way most customers are likely to configure UCS, 80 servers is the upper limit per management domain.  While that is slightly better than the 64 servers managed in a Virtual Connect domain, I don't think 64 vs 80 is a huge differentiator.


I come back to the point I made at the end of my last post, while UCS is different from c-Class, different does not equal better.



AKA Bladeguy


BTW, Virtual Connect can easily be configured with all uplinks active. Look at the Virtual Connect Cookbook, scenario 3:2, for just one example:


ken.henault | ‎07-28-2010 04:06 AM



How's your summer going down in Hotlanta? I hope you've had time to get out to the lake and keep cool.


You might like to think of UCS as equivalent to VCEM and SIM, but it doesn't even come close. Anyone with even the most basic knowledge of all these products would find that laughable.


When it comes to uplink count, you might be able to establish physical connectivity for servers with 4 uplinks, but we all know that won't provide enough bandwidth for 112 servers. When you add enough uplinks to do real work, you'll find the port counts to be a lot closer. Differences of a couple of ports here or there aren't going to make the difference for most IT shops. What's going to matter is what it takes to build a complete solution.


With Cisco servers, nearly every enterprise IT shop will be forced into a multivendor server environment, because Cisco doesn't have the breadth of offerings to meet all their needs. This means they'll have to have two sets of tools, two sets of processes, and two sets of spare parts. All this expense and complexity is just what most IT shops are trying to avoid.



user | ‎09-09-2011 07:06 AM

I know this is quite an old post, but regarding "physical/virtual server management and advanced power management not found in UCS": what was meant by advanced power management? As far as I know, UCS Manager has configurable power policies.

Besides, it's a different-scale product, and it is not a secret that UCS requires a BMC solution on top.

I might not understand this clearly; I am not deep into the subject yet.
