Eye on Blades Blog: Trends in Infrastructure
Get HP BladeSystem news, upcoming event information, technology trends, and product information to stay up to date with what is happening in the world of blades.

Displaying articles for: April 2010

What are you managing and why?

There are many parameters that can be set and tweaked in the data center, but does it make sense to micro-manage our systems?  Think of the number of settings available in the Windows Registry, in Linux configuration files, or on storage arrays and switches.  Even something as simple as a NIC can have a dozen or more configuration parameters.  (Remember when we HAD to manage IRQs?)  It’s important to understand what’s being managed and why.


Sometimes it is valuable, even important, to manage some of these settings.  Thinking back to the days of 10/100 Ethernet, auto-negotiation of link speed and duplex was very unreliable and often caused network issues.  Many shops instituted a policy that all NICs and switch ports have hard-coded link speed and duplex to ensure reliable connections.  Then came gigabit Ethernet, and the auto-negotiation issues were resolved.  Some shops kept the hard-coded link policy in place, but now it causes issues when someone forgets to set a device.  So when automatic settings were unreliable, it was important to manage the settings for reliability.  Now that automatic settings are reliable, manual configuration creates more work, and creates more issues than automatic settings would.  http://tinyurl.com/y42sqcm
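On a Linux host you can see this policy shift concretely with ethtool (a sketch; the interface name eth0 is an assumption, and your distribution may persist these settings differently):

```shell
# Inspect what a NIC negotiated: look at the Speed, Duplex, and
# Auto-negotiation fields in the output.
ethtool eth0

# The old "hard-coded" policy looked like this, and had to match the
# switch port exactly or you got a duplex mismatch:
ethtool -s eth0 autoneg off speed 100 duplex full

# The modern default: let both ends negotiate.  (1000BASE-T requires
# auto-negotiation anyway, so hard coding gigabit links isn't an option.)
ethtool -s eth0 autoneg on
```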


Other times, manual settings are used to manage scarce resources; network QoS is a good example.  QoS aims to improve the efficiency of a network.  When a network has adequate bandwidth, QoS is not required or even useful for most applications.  When a network is saturated, QoS won’t be effective either, so it’s only useful when a network is heavily loaded but not approaching saturation.   It’s typically the high-cost network providers that advocate QoS to make use of scarce and expensive resources.  By deploying network designs with flatter topologies and bandwidth that is cost-effective and plentiful, you can eliminate the need for QoS for most applications.


There are many other examples of items that can be managed but shouldn’t.  As an industry we are moving away from micro-managing and towards a utility model. (You wouldn’t want to manage the volts and amps used by your toaster, or calibrate for the thickness of the bread would you?)  If we design systems to work efficiently with default settings wherever possible, we reduce opportunities for error, management workload and cost.


Here are a few things to keep in mind:


•  Keep track of what you’re managing and why

•  When something no longer has to be managed, stop

•  Design systems that minimize the requirement for custom settings

Client Virtualization: There's No Substitute for Experience

I love to grill.  More to the point, I love to grill steaks.  (Hey, I’m from Texas – it’s a requirement!)  Over time, I’ve developed my own special system, which includes a specific seasoning mix, just the right temperature on the grill, and the perfect cut of meat, among other things.  And I can’t describe the satisfaction I get from taking a bite of a steak I’ve grilled to my liking, with just the right flavor.  Mmmmmm….


(Hungry anyone?  Please take this moment to grab a snack, then come right back.)


Looking back, there was a journey I had to take to get this process right.  As with most things I get excited about, I started by consuming as much data as I could on the subject.  I watched television programs about grilling steaks.  I read articles online.  I conferred with “grillmasters” I trusted for their tips and tricks.  When I felt I was ready, I took a stab at it (no pun intended).  And wow, let me tell you – that first steak?


Meh. It wasn’t that great.


It was a bit too charred for my taste.  I later learned I’d left the grill too hot the entire time.  So the next time I grilled, I made an adjustment with the temperature.  But this time, it was too dry because I left it on the grill too long.  I fixed this on my subsequent attempt, but felt the seasoning could have been better.  I learned that certain cuts of meat delivered the right flavoring I was seeking.  Then I experimented with various levels of thickness and hot / cool spots on the grill.  Every chance I had, I honed my skill with research, time, and trial and error.


Eventually, I was able to grill a steak that was perfectly tuned to my liking.   And now, grilling that “perfect” steak – well, it’s almost second nature.  There’s something to be said for all the experiences and lessons that led me to this point.  In fact, my process now is more efficient than it used to be.  I can quickly spot a problem with the process because I’ve done it so many times.  And my family no longer feels the need to keep the pizza delivery number nearby when I grill.


What does all this have to do with client virtualization? 


There’s something to be said for good old-fashioned, roll-your-sleeves-up hard work to hone your skills.    Sure, HP offers the best end-to-end client virtualization product portfolio, but we also offer the best client virtualization services portfolio.   HP Client Virtualization Services provide you with the best means to meet all your client virtualization needs, born from experience and steeped in expertise. We simplify the client virtualization implementation process by designing the right solution for each customer.  And we can do the same for you. 


Learn more about HP Client Virtualization Services here, including news on an announcement made last week about our exciting Client Infrastructure Services Portfolio.  It's a comprehensive set of services and solutions, built to help organizations extend the benefits of converged infrastructure to the desktop. 


Whether it’s implementing a sophisticated VDI environment, or making a killer ribeye on an open flame, there’s just no substitute for experience.  OK, that's it - I need to fire up the grill.  I knew this would happen...


(Before I sign off, check out a great blog here from our partners at Citrix, highlighting the value that HP Thin Clients can add as you begin planning for Windows 7 migration.)


Until next time,


Joseph George
HP Client Virtualization Team
www.hp.com/go/clientvirtualization

Join us April 27th for the next era of mission-critical computing

HP is holding a global virtual event beginning April 27, 2010 that will introduce its vision and major announcements regarding its products and solutions for the next era of mission-critical computing.  The event will feature an announcement overview, demos, papers, info from the live event in Germany, and more.  Participants will also have the opportunity to share reactions to the announcement and ask questions through live expert chat sessions.


HP promises you can learn how you can:


       •  Scale service-level agreements dynamically to meet business needs
       •  Capture untapped resources and react faster to new opportunities
       •  Reduce complexity
       •  Lay the foundation for a converged infrastructure to accelerate business outcomes


Registration is available at: www.hp.com/go/witness


You can also follow updates from the event on twitter with @HPIntegrity   #HPIntegrity


 

Behind the Scenes at the InfoWorld Blade Shoot-Out

When the sun set over Waikiki, HP BladeSystem stood as the victor of the InfoWorld 2010 Hawaii Blade Shoot-Out.  Editor Paul Venezia blogged about HP's gear sliding off a truck, but other behind-the-scenes pitfalls meant the world's #1 blade architecture nearly missed the Shoot-Out entirely.
 
Misunderstandings about the test led us to initially decline the event, but by mid-January we'd signed on. Paul's rules were broad: bring a config geared toward "virtualization readiness" that included at least 4 blades and either Fibre Channel or iSCSI shared storage.  Paul also gave us a copy of the tests he would run, which let vendors select and tune their configurations.  Each vendor would get a 2- or 3-day timeslot on-site in Hawaii for Paul to run the tests himself, plus play around with the system's management features.  HP was scheduled for the first week of March.


In late January we got the OK to bring pre-release equipment. Luckily for Paul, Dell, IBM, and HP all brought similar 2-socket server blades with then-unannounced Intel Xeon X5670 processors ("Westmere").  We scrambled to come up with the CPUs themselves; at the time, HP's limited stocks were all in use to support Intel's March announcement.


HP BladeSystem at InfoWorld Blade Shoot-Out

HP's final config: one c7000 enclosure; four ProLiant BL460c G6 server blades running VMware ESX, with 6-core Xeon processors and 8GB LVDIMMs; two additional BL460c G6s with StorageWorks SB40c storage blades for shared storage; a Virtual Connect Flex-10 module; and a 4Gb Fibre Channel switch. (We also had a 1U KVM console and an external MSA2000 storage array just in case, but ended up not using them.)


To show off some power-reducing technology, we used solid state drives in the storage blades, and low-voltage memory in the server nodes.  HP recently added these Samsung-made "Green" DDR3 DIMMs that use 2Gb-based DRAMs built with 40nm technology. LV DIMMs can run at 1.35 volts (versus the normal 1.5 volts), so that they "ditch the unnecessary energy drain" (as Samsung's Sylvie Kadivar put it recently).
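The voltage drop matters more than it looks: switching power scales roughly with the square of the supply voltage. A back-of-envelope check of just that term (actual savings depend on workload and on the 40nm process and 2Gb densities as well):

```shell
# DRAM switching power scales roughly with V^2, so dropping the supply
# from 1.50 V to 1.35 V cuts that term to (1.35/1.5)^2 of its old value.
awk 'BEGIN { printf "relative power: %.2f\n", (1.35 / 1.5) ^ 2 }'
# prints "relative power: 0.81", i.e. roughly a 19% reduction
```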


Our pre-built system left Houston three days before I did, but it still wasn't there when I landed in Honolulu Sunday afternoon. We had inadvertently put the enclosure into an extra-large Keal case (a hard-walled shipping container) which was too tall to fit in some aircraft. It apparently didn't fit the first cargo flight.  Or the second one.  Or the third one...


Sunday evening, already stressed about our missing equipment, the four of us from HP met in the home of our Hawaiian host, Brian Chee of the University of Hawaii's Advanced Network Computing Laboratory.  Our dinnertime conversation generated additional stress: We realized that I'd mis-read the lab's specs, and we'd built our c7000 enclosure with 3-phase power inputs that didn't match the lab's PDUs.  Crud.  


We nevertheless headed to the lab on Monday, where we spotted the rat's nest of cables intended to connect power meters to the equipment.  Since our servers still hadn't arrived, two of the HP guys fetched parts from a nearby Home Depot, then built new junction boxes that would both handle the "plug conversion" to the power whips and provide permanent (and much safer) test points for power measurements. 


Meanwhile, we let Paul get a true remote management experience on BladeSystem.   I VPN'd into HP's corporate network and pointed a browser at the Onboard Administrator of an enclosure back in a Houston lab.   Even in Firefox (Paul's choice of browser), controlling an enclosure 3,000 miles away is simple.


Moments before disaster...

Mid-morning on day #2, Paul got a cell call from the lost delivery truck driver.  After chasing him down on foot, we hauled the shipping case onto the truck's hydraulic lift...which suddenly lurched under the heavy weight, spilling the wheels off the side and nearly sending the whole thing crashing to the ground.  It still took a nasty jolt.


 


Some pushing and shoving got the gear to the Geophysics building's piston-driven, hydraulic elevator, then up to the 5th floor.  (I suppose I wouldn't want to be on that elevator when the "Low Oil" light turns on!)


 


We unpacked and powered up the chassis, but immediately noticed a health warning light on one blade.  We quickly spotted the problem: a DIMM had popped partway out.  Perhaps not coincidentally, it was the blade that took the greatest shock when the shipping container slipped from the lift.


With everything running (whew), Paul left the lab for his "control station", an Ubuntu-powered notebook in an adjoining room.  Just as he sat down to start deploying CentOS images to some of the blades...wham, internet access for the whole campus blinked out.  It didn't affect the testing itself, but it caused other network problems in the lab. 


An hour later, those problems were solved, and performance tests were underway.  They went quickly.  Next came some network bandwidth tests.   Paul even found time to run timed tests with some OpenSSL tools to evaluate Intel's new AES-NI instructions.
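I don't know exactly which tests Paul ran, but the usual quick way to see what AES-NI buys you is OpenSSL's built-in benchmark (a sketch; the CPUID-mask trick works on OpenSSL builds that honor the OPENSSL_ia32cap environment variable):

```shell
# Benchmark AES-128-CBC through the EVP interface, which uses the
# AES-NI code path when the CPU and OpenSSL build support it:
openssl speed -evp aes-128-cbc

# For comparison, mask off the AES-NI capability bit (bit 57) so
# OpenSSL falls back to its software AES implementation:
OPENSSL_ia32cap="~0x200000000000000" openssl speed -evp aes-128-cbc
```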


Day #3 brought us a new problem.  HP's Onboard Administrator records actual power use, but Paul wanted independent confirmation of the numbers.  (Hence, the power meters and junction box test-points.) But the lab's meters couldn't handle redundant three-phase connections.  An hour of reconfiguration and recalculation later, we found a way to corroborate measurements.  (In the end, I don't think Paul published power numbers, though he may have factored them into his ratings.)


We rapidly re-packed the equipment at midday on day #3 so that IBM could move into the lab. Paul was already drafting  his article as we said "Aloha", and headed for the beach -- err, I mean, back to the office.


 Paul Venezia (left) and Brian Chee with the HP BladeSystem c7000.


 View toward the mountains from the rooftop "balcony" behind Brian's lab



 

Planning for Client Virtualization: Know Thyself

Have you noticed how difficult it is to get information on client virtualization?


Exactly – it’s not difficult at all.


Vendors are talking about it.  Analysts are devoting research to it.  IT / Virtualization events and conferences dedicate a good portion of their agenda to it. Do a quick web search on client virtualization – it brings up a number of web sites, articles, webinars, community forums, technical sessions, blogs, and tweets on the topic. 


(Anyone had a chatroulette about client virtualization yet?  Probably not.)


As with many things that are shiny and seemingly new (client virtualization has actually been around for a while), it’s important to understand what goes into planning, implementing, and managing it in the real world.


Here are a few starting points.


First and foremost, you’ll need to learn about your user community.  There are varying needs when it comes to applications, customization, and horsepower, and many times you may not be as familiar with your desktops as you’d like. Be sure to start with an assessment to understand what your user landscape looks like.  In fact, you may discover that some users are better off staying on a traditional desktop!


Educate yourself on the client virtualization technologies available to you. There are a number of options, including server-based computing, VDI, and dedicated desktops like workstation blades. Each form of client virtualization brings unique value to the table, and in many circumstances their strengths are complementary. Like chocolate and peanut butter, many customers discover their best bet is a mix of technologies. 


Help your users prepare for the transition by setting the proper expectations with them.  The better you prepare your user community for this next stage in desktop evolution, the easier the migration path to client virtualization will be.  The overall experience can be identical to a traditional desktop depending on the technology selected, but some things will just be different.


And this is just a start.  HP can help you go through all the details with the expertise you need to make client virtualization a reality, leveraging the best end-to-end technology and services portfolio around.  Drop a note to your HP rep today – you can learn more about the different client virtualization technologies available, work together on your first user assessment, or even have your very first client virtualization discussion.


Where are you on your client virtualization journey?  What other considerations do you find yourself accounting for as you plan for client virtualization?  Let us know - feel free to leave a comment below. 


Until next time,


Joseph George
HP Client Virtualization Team
www.hp.com/go/clientvirtualization

Customizing BladeSystem Matrix Allocation Rules Engine for Multi-tenancy Solutions

Early this week I was in a couple of Halo meeting sessions with folks in our Bangalore, India location, talking about "the next big thing". It reminded me that the last thing we worked on, exposing an extensible rules engine in allocation and placement, was part of the BladeSystem Matrix 6.0 release. I wanted to talk a little about that capability today and give an example of how it can be used in deployments involving multi-tenancy.


BladeSystem Matrix Allocation and Placement Rules

Allocation and placement has always been a key function of BladeSystem Matrix.


When multi-tier service designs (represented by templates) are submitted for instantiation, it is the allocation and placement function that looks at the requirements of the service in terms of individual element specifications, desired service topology, and lease period, and then binds these to the available resources in the environment based on their characteristics and capacity, availability calendar, and physical topology.


In BladeSystem Matrix 6.0, this allocation process can be customized through an extensible rules engine. Overall there are 18 different allocation rule sets that can be extended, as shown in Figure 1. The policy.xml file specifies which of the rule sets should be used. These are further explained in the Insight Orchestration User Guide on page 48.


 



 
Figure 1. Extensible rule sets




 


Multi-tenancy Example

A very common use case I hear from customers is the desire to have a common design for a service, but to let some aspects of the resource binding be determined by the identity of the service owner.


In this scenario, we consider a provider who services two competitors, say Marriott and Hilton hotels, but wants to offer a common service template in the catalog. The desire is that when Marriott deploys a new instance of the service, that service instance connects to the Marriott-Corporate network segment; when Hilton deploys the service, their service instance connects to the Hilton-Corporate network segment.




Figure 2. Pre-configured networks for the two competing  corporations




Setting up your Service Template

Here we show a portion of a simple single-server template as an illustrative example. This is a multi-homed server with:

  • a connection to the corporate network, named "@corporate"; later on, in the rules engine, we will look for the "@" sign in the name to trigger special rules processing
  • a connection to "net1", an internal network private to the service



 


Figure 3. Sample multi-tenancy configuration




Adding the Processing Rule


The rules engine is based on Drools. Rules are written in Java with a Drools rule-semantic wrapper; I'll give you a boilerplate wrapper below to get you started. The rule and its Java function are appended to the SubnetCheck.drl file. I'm going to show a very simple example, but I can imagine that the creative community will quickly come up with more sophisticated implementations. Figure 4 shows a simple rule. Rules processing is invoked to refine the candidate networks for allocation to the new service instance. The rule runs for each network (LogicalNetwork) specified in the template, and for each candidate network in the environment. The purpose of the rule processing is to discard candidates that "don't fit".


This snippet extracts the subnet specification in the template (the $logicalSubnet) and the candidate list of networks ($subnet) from the context ($pVO). It invokes the function customerSpecificSubnetCriteriaCheck to perform the actual processing. 


rule "CustomerSpecificSubnetCriteria"
       when
               $pVO : PolicyExecutionVO( );
               $resLst : List();
               $logicalSubnet : LogicalSubnet();
               $subnet : Subnet() from $resLst;
              eval(customerSpecificSubnetCriteriaCheck($logicalSubnet, $subnet, $pVO)); 
       then
             
              // match processing is embedded in customerSpecificSubnetCriteriaCheck
              // $pVO.match($subnet, HPIOMessage.get(HPIOBundleKey.ALLOCATION_CRITERIA_CUSTOM, "CustomerSpecificSubnetCriteriaCheck succeeded"));
end


Figure 4. Boilerplate rule example


The function code is placed in the drl file after the rule statement. Here is the snippet


function boolean customerSpecificSubnetCriteriaCheck(
                                         LogicalSubnet logicalSubnet,
                                         Subnet subnet,
                                         PolicyExecutionVO pVO) {

       AllocationEntry ae = pVO.getAllocationEntry();

       InfrastructureService service = ae.getInfrastructureService();

       String serviceName = service.getName();
       String owner = service.getOwner();
       owner = owner.substring(owner.lastIndexOf("\\") + 1); // strip Windows domain

       String lsName = logicalSubnet.getName();
       String psName = subnet.getName();

       System.out.println("Service: " + serviceName + " Owner: " + owner);
       System.out.println("LogicalSubnet: " + lsName + " Physical Net: " + psName);

       boolean match;

       if (lsName.startsWith("@")) {
              String key = lsName.substring(1); // strip off @
              // Match @key to networks with id "owner-key"
              match = psName.equalsIgnoreCase(owner + "-" + key);
       } else {
              // regular network. Could include additional security checks here.
              match = true;
       }
       if (match) {
              pVO.match(subnet, HPIOMessage.get(HPIOBundleKey.ALLOCATION_CRITERIA_CUSTOM,
                                                "CustomerSpecificSubnetCriteriaCheck succeeded"));
       } else {
              pVO.doesNotMatch(subnet, HPIOMessage.get(HPIOBundleKey.ALLOCATION_CRITERIA_CUSTOM,
                                                       "Could not find customer specific subnet"));
       }
       System.out.println("MATCH=" + match);
       return match;
}


Figure 5. Rule processing example


The function starts by getting information on the InfrastructureService being provisioned.  This contains details of the entire template being provisioned and can be used for additional context-aware processing. From this object we extract the service owner name (stripping off the Windows domain), as well as the name of the service. It is also possible to extract information such as the "notes" specified for the service, where additional information may be encoded by the requestor.  From the LogicalNetwork object we extract the name (i.e., "@Corporate" or "net1") into lsName. Similarly we extract the physical network name into psName.


I've included some debug lines using System.out.println . These show up in C:\Program Files\HP\Insight Orchestration\logs\hpio.log.


The purpose of this code is to return false if the physical network is not a match candidate for the LogicalNetwork specified in the template, and true otherwise. The rules processing logic requires that if the rule allows an element to be a selection candidate, the function pVO.match must be invoked for that element. If the element is to be eliminated from consideration, then pVO.doesNotMatch() must be invoked with a reason for the exclusion. As a matter of coding style, you can either include the calls to both these routines in your custom function, or include only the pVO.doesNotMatch() call in the function and put the pVO.match() invocation in the body of the rule.


For logical networks not beginning with "@" we just return true and let the normal selection rules apply. For networks beginning with "@" we are more selective, excluding candidates unless they match a specific pattern: a logical network specified in the template with a name of the form "@key" should match only physical networks named "owner-key", where owner is the id of the requesting user. The logic looks for an lsName beginning with "@", strips off the "@" to create the key, and then tests the physical network name to see if it matches the owner-key pattern.
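Since the naming convention is the whole trick, it may help to see the match in isolation, outside the Drools machinery. A toy re-creation in shell, with hypothetical values standing in for what Insight Orchestration would pass to the rule:

```shell
owner="Marriott"                # requesting user, domain already stripped
lsName="@Corporate"             # logical network name from the template
psName="Marriott-Corporate"     # candidate physical network

key="${lsName#@}"               # strip the leading @ to get the key
want="${owner}-${key}"          # candidates must be named "owner-key"

# case-insensitive comparison, mirroring equalsIgnoreCase in the rule
if [ "$(printf %s "$psName" | tr '[:upper:]' '[:lower:]')" = \
     "$(printf %s "$want" | tr '[:upper:]' '[:lower:]')" ]; then
    echo "match"
else
    echo "no match"
fi
# prints "match"
```

A Hilton-owned request against the same template would produce want="Hilton-Corporate" and so bind to Hilton's network instead.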


Configuring the Code


To enable the rules processing, edit C:\Program Files\HP\Insight Orchestration\conf\policy\policy.xml as shown in Figure 6. Once you have updated the policy.xml file, you will need to restart the Insight Orchestration service.


<policy enabled="true" name="SubnetPolicyCheck.applyFitting">
    <policy-rule-file>SubnetCheck.drl</policy-rule-file>
    <policy-class-name>policy-class-name</policy-class-name>
</policy>


 Figure 6. Configuring rules processing


Provisioning the Service

Now we are ready to deploy the service. Logging on as user Marriott, I create the service using the template shown earlier in Figure 3. Once provisioning completes, I can look at the service details page for more information about the service. Selecting the network named "@Corporate" and clicking on the resource details tab, I see that the network has indeed been mapped to the Marriott-Corporate network by the custom allocation rules processing.



 


Figure 7. Provisioned service details




Conclusion


The rules-based processing capability in BladeSystem Matrix enables simple realization of customized resource allocation that can be used to simplify and extend Matrix template deployment. I hope this example helps others quickly understand the capabilities enabled through this powerful engine and gives a "quick start" to writing your own custom rules. If you have cool examples of rule extensions you have implemented, I'd be interested in hearing about them.


Thanks to Manjunatha Chinnaswamynaika for helping me to create this example.


Happy coding!


 
