Eye on Blades Blog: Trends in Infrastructure
Get HP BladeSystem news, upcoming event information, technology trends, and product information to stay up to date with what is happening in the world of blades.

When deploying VMware’s vSphere on HP BladeSystem, always follow the recipe & HP’s best practices

Guest blog written by Malcolm Ferguson, Senior Solution Architect, HP Enterprise Group

There is no doubt that a large portion of the world’s VMware vSphere (ESXi) installations run on HP ProLiant, especially HP BladeSystem with Virtual Connect, the leading blade platform.  That does not mean every installation is being done the ideal way, following our best practices.  Here is a helpful guide to ensure you are successful with your deployment.

First, always use the HP ESXi image for an HP ProLiant server or blade.  Every vendor builds its own image for a reason: it ensures the right base drivers and agents are included.  Never use a generic image when a customized image is ready to go from HP.  Access the Software Delivery Repository site at http://vibsdepot.hp.com/ and select “HP customized VMware image downloads”.

[Screenshot: HP customized VMware image downloads on the Software Delivery Repository site]
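
Once a host is built, it is easy to confirm it really is running the HP customized image.  Here is a minimal sketch using recent PowerCLI (Get-EsxCli -V2); the vCenter and host names are placeholders, and the same checks can be run directly on the host with esxcli:

    # Confirm the installed image profile and list the HP-provided VIBs on a host
    Connect-VIServer -Server vcenter.example.com
    $esxcli = Get-EsxCli -VMHost (Get-VMHost 'esx01.example.com') -V2
    # An HP customized image carries "HP" in its image profile name
    $esxcli.software.profile.get.Invoke() | Select-Object -ExpandProperty Name
    # HP drivers and agents ship as VIBs with an HP vendor string
    $esxcli.software.vib.list.Invoke() |
        Where-Object { $_.Vendor -match 'HP|Hewlett' } |
        Select-Object Name, Version, Vendor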

Second, on the same site, always review the “HP ProLiant server and option firmware and driver support recipe (Current)”.  This confirms that the version of ESXi you are deploying has been thoroughly tested for hardware compatibility, helping you avoid potential problems.  Matching the right drivers and firmware to the specific version of ESXi is critical, no matter what platform you are using.  When you consider that each of these blades can host dozens of virtual servers, and a single enclosure potentially hundreds, the verification is well worth the time.
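
To compare what a host is actually running against the recipe, pull the driver and firmware versions from the adapter itself.  A hedged PowerCLI sketch, assuming the FlexFabric adapter shows up as vmnic0 (host name is a placeholder):

    # Report the NIC/CNA driver and firmware versions the recipe lists
    $esxcli = Get-EsxCli -VMHost (Get-VMHost 'esx01.example.com') -V2
    $nic = $esxcli.network.nic.get.Invoke(@{nicname = 'vmnic0'})
    # Compare these strings against the "support recipe (Current)" document
    $nic.DriverInfo | Select-Object Driver, Version, FirmwareVersion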

Third, once you have the right firmware and drivers identified, follow the HP FlexFabric and Fibre Channel Cookbooks to ensure your Virtual Connect setup follows our best practices.  The Cookbooks cover details such as the ideal way to set up Virtual Connect profiles and uplinks, the code level your SAN switches need to run, and their port settings:

     HP Virtual Connect FlexFabric Cookbook

          Note:  Scenario 5 is best for Active-Passive FlexFabric uplinks with vSphere and works well with HP OneView 1.0, which supports Active-Passive only.  A future version of HP OneView will support Active-Active uplinks, which is detailed in Scenario 9.  (A vSphere-side sketch of an Active-Passive NIC team follows the cookbook links below.)

     HP Virtual Connect Fibre Channel Networking Cookbook
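
On the vSphere side of an Active-Passive design like Scenario 5, the vSwitch NIC team typically mirrors it with one active and one standby adapter.  A minimal PowerCLI sketch, assuming a standard vSwitch named vSwitch0 and adapters vmnic0/vmnic1 (follow the cookbook scenario for the authoritative settings):

    # Set an active/standby NIC team on a standard vSwitch (names are hypothetical)
    Get-VMHost 'esx01.example.com' |
        Get-VirtualSwitch -Name 'vSwitch0' |
        Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -MakeNicActive vmnic0 -MakeNicStandby vmnic1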

Another great resource for network-related Virtual Connect best practices is http://hongjunma.wordpress.com/
          Note:  One of the most popular posts covers how best to wire Virtual Connect to Cisco Nexus switches.

Of course, we also cover how to wire to HP switches, which, if purchased, can help pay for your blades.  There is also another great Virtual Connect to Nexus integration guide, as well as a nice guide on how traffic flows through Virtual Connect.

These Cookbooks are an excellent way to ensure your enclosures and the upstream infrastructure they wire into are configured for success.

Fourth, when deploying vSphere clusters on HP BladeSystem, never stack enclosures; always stretch your clusters across enclosures, treating each enclosure like a big rack-mount server.

The image below is a good example of this: 4 enclosures, each with 6 active blades, providing a 24-node ESX cluster across them all.  This customer was advised to retain enough spare capacity for 3 of the 4 enclosures to handle the full load, in case a single enclosure has a problem, for example as the result of a bad upgrade.
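
The sizing rule behind that advice is simple N+1 arithmetic at the enclosure level; here is the example's math as a quick PowerShell illustration:

    # 4 enclosures x 6 blades = 24 hosts; any 3 enclosures must carry the load
    $enclosures = 4; $bladesPerEnclosure = 6
    $totalHosts     = $enclosures * $bladesPerEnclosure         # 24 nodes
    $survivingHosts = ($enclosures - 1) * $bladesPerEnclosure   # 18 nodes
    $maxUtilization = $survivingHosts / $totalHosts             # 0.75
    "Keep steady-state cluster utilization at or below {0:P0}" -f $maxUtilization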

Yes, blades are a great way to reduce space, power, cooling, and expensive LAN and SAN port costs (especially with Virtual Connect), and our enclosures have fully redundant components: Virtual Connect modules, Onboard Administrators, fans, and power supplies.  However, blade enclosures from all vendors can be harder to manage than rack-mount servers because of the complexity of firmware management across all the components within the enclosure.  So plan carefully before deploying your production VMware environment, hosting hundreds of virtual servers and production applications, across blades; testing in the lab is never the same as a production upgrade.  Here is the best way to deploy vSphere on HP blades so that future upgrades carry as little risk as possible:

  1. Again, always spread the cluster across 2 or more enclosures, leaving capacity for a full enclosure outage.  This extends the same respect you give all the other components in your data center (network switches, SAN switches, SAN controllers, load balancers, firewalls, power redundancy, etc.) to the platform that actually holds the running applications, so it makes sense.  Don’t get cheap here.  Also try to dedicate enclosures to certain workloads, such as virtualization, rather than commingling workloads, which can leave different departments holding up each other’s upgrades.
  2. Try to keep your network flat to improve East-West traffic performance.  Network vendors selling you multi-hop networks are only trying to sell more ports.  Flatter is faster and cheaper!
     [Image: flat network topology]
  3. Avoid any linking of enclosures’ data planes, such as stacking links.  Stacking just enlarges the potential fault domain and mixes data and control planes, which is a very bad thing for any platform in the data center.  With a fast, flat 10G LAN, stacking links are unnecessary.  This is why SDN architectures are so strict about keeping data and control planes separate, just as vCenter manages the ESX hosts: if vCenter dies, it does not bring down the cluster.  Each HP blade enclosure is self-sufficient, in its own domain.
  4. When applying any changes to your cluster (new firmware, drivers, etc.), a necessary evil for all blade vendors, designate the first enclosure to receive the change and place its blades into maintenance mode.  Always use HP tools such as our Smart Update Manager (which is integrated into HP OneView) to properly assess and upgrade enclosure firmware and drivers.  Once the production VMs have moved to the remaining enclosures, make your planned changes, then introduce a test workload on one of the blades in the upgraded chassis, verifying all production connectivity and functionality.  Then reintroduce production VMs to the nodes in the upgraded chassis.  Finally, repeat the process on the next enclosure, rolling through the upgrade.  You have now avoided any risk of a massive outage across all of your enclosures, just as you would when upgrading rack-mount servers in a cluster.  A scripted sketch of this roll follows the images below.
     [Image: rolling upgrade]

[Image: rolling upgrade, continued]
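
Steps 1 through 4 lend themselves to scripting.  A hedged PowerCLI sketch of the evacuate/upgrade/reintroduce roll for one enclosure, assuming hostnames encode the enclosure (enc1-*), the cluster name is a placeholder, and DRS is fully automated so entering maintenance mode vMotions the running VMs off each blade:

    # Roll a change through one enclosure at a time (names are hypothetical)
    Connect-VIServer -Server vcenter.example.com
    $enclosureHosts = Get-Cluster 'Prod-Cluster' | Get-VMHost -Name 'enc1-*'

    foreach ($esx in $enclosureHosts) {
        # DRS evacuates the running VMs, then the host enters maintenance mode
        Set-VMHost -VMHost $esx -State Maintenance | Out-Null
    }

    # ...apply the firmware/driver updates with HP SUM, then verify with a
    # test workload before bringing the blades back...

    foreach ($esx in $enclosureHosts) {
        Set-VMHost -VMHost $esx -State Connected | Out-Null
    }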

Believe it or not, some blade vendors require you to physically wire all of their enclosures’ data and management links to a pair of control-plane switches (in-band), where management and critical firmware upgrades occur.  HP enclosures have redundant managers in each enclosure, called Onboard Administrators, with their own out-of-band management links to an out-of-band enclosure manager such as HP Insight Control and now HP OneView.  Again, these out-of-band managers are just like vCenter for ESX.  Imagine if all your ESX hosts had their LAN, SAN, and management links physically wired to the vCenter server, and every vSphere upgrade was applied through that vCenter server, risking a full cluster outage.  That is how our competition does things.  You can now see how the HP architecture and upgrade approach carries much less risk, which our customers and their critical applications deserve.

And speaking of vCenter, if you are buying our Insight Control blade licenses (which include iLO Advanced), be sure to take advantage of our Insight Control for vCenter integration.  HP ProLiant has been a legendary platform for hardware pre-failure alerts; we can warn customers about all kinds of components well before they actually fail, such as the hard drives the ESXi operating system runs on.  Imagine 50 production virtual servers on a blade with a pair of drives in a hardware RAID mirror.  If one of the 2 drives were to fail, you would be skating on thin ice until the drive is replaced.  With our Insight Control for vCenter plug-in, vCenter can react to the drive pre-failure or failure alert and put the blade into Maintenance Mode while you are asleep at home.  And look at the visibility you get between your virtual switch, Virtual Connect, and your LAN and SAN switches:

[Screenshot: Insight Control for vCenter view of the virtual switch, Virtual Connect, and LAN/SAN switches]
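
The plug-in handles that reaction natively, but the moving parts are easy to picture: a vCenter alarm on the host fires a script action, and the script evacuates the affected host.  A rough, hypothetical PowerCLI sketch of that manual equivalent, not the Insight Control mechanism itself (vCenter populates the VMWARE_ALARM_* environment variables for alarm-action scripts):

    # Hypothetical vCenter alarm-action script: evacuate the host that alarmed
    Connect-VIServer -Server vcenter.example.com
    $affected = $env:VMWARE_ALARM_TARGET_NAME     # set by vCenter for alarm scripts
    Get-VMHost -Name $affected | Set-VMHost -State Maintenance   # DRS moves the VMs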

And finally, if you are using HP Intelligent PDUs, here is how to properly wire three fully loaded c7000s to the single-phase ports between the enclosures and the PDUs.  The PDUs are fed by 60-amp connections.
[Diagram: three c7000 enclosures wired to Intelligent PDU single-phase ports]

HP OneView Links:

HP OneView Product Page

Getting Started with HP OneView video

PowerShell driving HP OneView video

HP OneView vs UCS Manager video

HP SPP (Service Pack for ProLiant) Links:
HP SPP product page
(download link can be found on this page)

HP SUM (Smart Update Manager) Links:
HP SUM product page
(download link can be found on this page)

Intelligent Provisioning and Scripting Toolkit Links:
Intelligent Provisioning product page

HP Scripting Toolkit (STK)
HP iLO resources page

I hope you have found this post helpful.  More to come!

Malcolm
