Eye on Blades Blog: Trends in Infrastructure
Get HP BladeSystem news, upcoming event information, technology trends, and product information to stay up to date with what is happening in the world of blades.

Customizing BladeSystem Matrix Allocation Rules Engine for Multi-tenancy Solutions

Early this week I was in a couple of Halo meeting sessions with folks in our Bangalore, India location, talking about "the next big thing". It reminded me that the last thing we worked on - exposing an extensible rules engine for allocation and placement - was part of the BladeSystem Matrix 6.0 release. I wanted to talk a little about that capability today and give an example of how it can be used in deployments involving multi-tenancy.

BladeSystem Matrix Allocation and Placement Rules

Allocation and placement has always been a key function of BladeSystem Matrix.

When multi-tier service designs (represented by templates) are submitted for instantiation, it is the allocation and placement function that looks at the requirements for the service - in terms of individual element specifications, desired service topology, and lease period - and then binds these to the available resources in the environment based on their characteristics and capacity, availability calendar, and physical topology.

In BladeSystem Matrix 6.0, this allocation process can be customized through an extensible rules engine. Overall there are 18 different allocation rule sets that can be extended, as shown in Figure 1. The policy.xml file specifies which of the rule sets should be used. These are further explained in the Insight Orchestration User Guide on page 48.


Figure 1. Extensible rule sets


Multi-tenancy Example

A very common use case I hear from customers is the desire to have a common design for a service but to have some aspects of the resource binding to be determined by the identity of the service owner.

In this scenario, we consider a provider who is servicing two competitors - say, Marriott and Hilton hotels - but wants to offer a common service template in the catalog. The desire is that when Marriott deploys a new instance of the service, that service instance connects to the Marriott-Corporate network segment. However, if Hilton deploys the service, then their service instance connects to the Hilton-Corporate network segment.

Figure 2. Pre-configured networks for the two competing corporations

Setting up your Service Template

Here we show a portion of a simple single-server template as an illustrative example. This is a multi-homed server with

  1. a connection to the corporate network, named "@corporate". Later on, the rules engine will look for the "@" sign in the name to trigger special rules processing.

  2. a connection to "net1", an internal network private to the service.


Figure 3. Sample multi-tenancy configuration

Adding the Processing Rule

The rules engine is based on Drools. The rules are expressed in Java with a Drools rule semantic wrapper. I'll give you a boilerplate wrapper below to get you started. The rule and the Java function are appended to the SubnetCheck.drl file. I'm going to show a very simple example, but I can imagine that the creative community will quickly come up with more sophisticated implementations. In Figure 4, I show a simple rule. The rules processing is invoked to refine the candidate networks for allocation to the new service instance. The rule runs for each network (LogicalNetwork) specified in the template, and for each candidate network in the environment. The purpose of the rule processing is to discard candidates that "don't fit".

This snippet extracts the information about the subnet specification in the template (the $logicalSubnet) and each candidate network ($subnet, drawn from $resLst), along with the context ($pVO). It invokes a function, customerSpecificSubnetCriteriaCheck, to perform the actual processing.

rule "CustomerSpecificSubnetCriteria"
when
               $pVO : PolicyExecutionVO( )
               $resLst : List()
               $logicalSubnet : LogicalSubnet()
               $subnet : Subnet() from $resLst
               eval(customerSpecificSubnetCriteriaCheck($logicalSubnet, $subnet, $pVO))
then
              // match processing is embedded in customerSpecificSubnetCriteriaCheck
              // $pVO.match($subnet, HPIOMessage.get(HPIOBundleKey.ALLOCATION_CRITERIA_CUSTOM, "CustomerSpecificSubnetCriteriaCheck succeeded"));
end

Figure 4. Boilerplate rule example

The function code is placed in the .drl file after the rule statement. Here is the snippet:

function boolean customerSpecificSubnetCriteriaCheck(
                                         LogicalSubnet logicalSubnet,
                                         Subnet subnet,
                                         PolicyExecutionVO pVO) {

       AllocationEntry ae = pVO.getAllocationEntry();
       InfrastructureService service = ae.getInfrastructureService();

       String serviceName = service.getName();
       String owner = service.getOwner();
       owner = owner.substring(owner.lastIndexOf("\\") + 1); // strip domain
       String lsName = logicalSubnet.getName();
       String psName = subnet.getName();

       System.out.println("Service: " + serviceName + " Owner: " + owner);
       System.out.println("LogicalSubnet: " + lsName + " Physical Net: " + psName);

       boolean match;
       if (lsName.startsWith("@")) {
              String key = lsName.substring(1); // strip off @
              // Match @key to networks with id "owner-key"
              match = psName.equalsIgnoreCase(owner + "-" + key);
       } else {
              // regular network. Could include additional security checks here.
              match = true;
       }

       if (match) {
              pVO.match(subnet, HPIOMessage.get(HPIOBundleKey.ALLOCATION_CRITERIA_CUSTOM,
                            "CustomerSpecificSubnetCriteriaCheck succeeded"));
       } else {
              pVO.doesNotMatch(subnet, HPIOMessage.get(HPIOBundleKey.ALLOCATION_CRITERIA_CUSTOM,
                            "Could not find customer specific subnet"));
       }
       return match;
}

Figure 5. Rule processing example

The function starts by getting the information on the InfrastructureService being provisioned. This contains details of the entire template being provisioned and can be used for additional context-aware processing. From this object we extract the service owner name (stripping off the Windows domain), as well as the name of the service. It is also possible to extract information such as the "notes" specified for the service, where additional information may be encoded by the requestor. From the LogicalNetwork object we extract the name (i.e., "@Corporate" or "net1") into lsName. Similarly, we extract the physical network name into psName.
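If you do choose to encode extra context in the service notes, a simple convention such as semicolon-separated key=value pairs is easy to parse from rule code. The sketch below is only illustrative: the notes format is an assumption of mine, not a documented Matrix convention, and you would feed the parser whatever string the service object's notes accessor returns.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative helper: parse "key=value" pairs out of a service's notes
// string (e.g. "tier=gold;backup=nightly") for context-aware rule decisions.
// The semicolon-separated format is an assumed convention, not a Matrix standard.
public class NotesParser {
    public static Map<String, String> parseNotes(String notes) {
        Map<String, String> result = new HashMap<>();
        if (notes == null || notes.trim().isEmpty()) {
            return result;
        }
        for (String pair : notes.split(";")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                result.put(pair.substring(0, eq).trim(),
                           pair.substring(eq + 1).trim());
            }
        }
        return result;
    }
}
```

A rule function could then branch on, say, a "tier" entry when refining candidates.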

I've included some debug lines using System.out.println. These show up in C:\Program Files\HP\Insight Orchestration\logs\hpio.log.

The purpose of this code is to return false if the physical network is not a match candidate for the LogicalNetwork specified in the template, and true otherwise. The rules processing logic requires that if the rule allows an element to be a selection candidate, then the function pVO.match() must be invoked for that element. If the element is to be eliminated from consideration, then pVO.doesNotMatch() must be invoked, listing a reason for the exclusion. As a matter of coding style, you can either include the calls to both these routines in your custom function, or you can include only the pVO.doesNotMatch() call in the function and put the pVO.match() invocation in the body of the rule.

For logical networks not beginning with "@" we just want to return true and let the normal selection rules apply. For networks beginning with "@" we will be more selective, excluding candidates unless they match a specific pattern. For a logical network specified in the template with a name of the form "@key", we want it to match against physical networks named "owner-key", where owner is the id of the requesting user. The logic looks for an lsName beginning with "@" and strips off the "@" to create the key. We then test the physical network name to see if it matches the owner-key pattern.
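The matching logic is small enough to isolate as a pure Java helper, which makes it easy to test outside the Drools engine. This sketch mirrors the check in Figure 5; the class and method names are mine, not part of the product:

```java
// Minimal sketch of the "@key" matching logic, isolated as a pure function.
// owner is the requesting user id with any Windows domain already stripped.
public class SubnetMatcher {
    public static boolean matches(String lsName, String owner, String psName) {
        if (lsName.startsWith("@")) {
            String key = lsName.substring(1);            // strip off '@'
            // "@key" matches only physical networks named "owner-key"
            return psName.equalsIgnoreCase(owner + "-" + key);
        }
        // Regular networks: let the normal selection rules apply.
        return true;
    }
}
```

With this in hand, SubnetMatcher.matches("@Corporate", "Marriott", "Marriott-Corporate") holds, while a Hilton-Corporate candidate is rejected for the same request.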

Configuring the Code

To enable the rules processing, edit C:\Program Files\HP\Insight Orchestration\conf\policy\policy.xml as shown in Figure 6. Once you have updated the policy.xml file, you will need to restart the Insight Orchestration service.

<policy enabled="true" name="SubnetPolicyCheck.applyFitting">

 Figure 6. Configuring rules processing

Provisioning the Service

Now we are ready to deploy the service. Logging on as user Marriott, I create the service using the template shown earlier in Figure 3. Once the provisioning completes, I can look at the service details page for more information about the service. Select the network named "@Corporate" and then click on the resource details tab. From there I see that the network has indeed been mapped to the Marriott-Corporate network by the custom allocation rules processing.


Figure 7. Provisioned service details


The rules-based processing capabilities in BladeSystem Matrix enable simple realization of customized resource allocation processing that can be used to simplify and extend Matrix template deployment. I hope this example helps others quickly understand the capabilities enabled through this powerful engine and gives a "quick start" to writing your own custom rules. If you have cool examples of rule extensions you have implemented, I'd be interested in hearing about them.

Thanks to Manjunatha Chinnaswamynaika for helping me to create this example.

Happy coding!


Customizing the Matrix Self Service Portal

I was playing around with BladeSystem Matrix, creating some new demo videos, and I thought: why not dig into the portal skinning features to create a custom look for my system?

The skinning feature is intended to let companies personalize the portal with their logos and product names, replacing the standard HP skin that ships with the product. In this example, my customer is the fictitious Acme Corporation.

Here are the steps I went through:

  1. Copy C:\Program Files\HP\Insight Orchestration\webapps\hpio-portal.war to my desktop

  2. Rename to hpio-portal.zip

  3. Copy hpio-portal.zip to hpio-portal.orig.zip

  4. Unzip hpio-portal.zip

  5. Browse to the hpio-portal/skins directory, and create a new folder "acme" alongside the existing skin folders.

  6. Copy your new images into the acme directory. The image names, format, and recommended sizes are shown in the table below. I found there is some flexibility in the sizes of the images you create, but in general things look a little nicer if you stick to the sizes shown in the table.



    Recommended image sizes in pixels (w x h): 90 x 72, 408 x 287, 42 x 26, and 300 x 40.

  7. Edit the skinConfig.xml file in the skins directory. Here's my updated content:

         <property><key>personalizeMode</key><value>insight</value></property>
         <property><key>skinName</key><value>acme</value></property>
         <property><key>brandName</key><value>Acme Corporation</value></property>
         <property><key>portalName</key><value>Developer Self Service</value></property>
         <property><key>bsaURL</key><value></value></property>

  8. Re-zip the hpio-portal directory.
    Once you have re-zipped hpio-portal, open it and check that the top level of the zip file contains the contents of the hpio-portal folder, not a folder called "hpio-portal". The Windows XP zip default behavior is to create that top-level folder in the archive. Compare with the contents of hpio-portal.orig.zip to make sure you get this correct; otherwise your portal won't restart correctly in the later steps.

  9. Rename hpio-portal.zip to myportal.war

  10. Stop the Insight Orchestration service

  11. Rename C:\Program Files\HP\Insight Orchestration\webapps\hpio-portal.war to hpio-portal.war.orig

  12. Copy myportal.war to C:\Program Files\HP\Insight Orchestration\webapps\hpio-portal.war

  13. Delete the folder in the C:\Program Files\HP\Insight Orchestration\work directory with a name starting with Jetty and containing the name hpio-portal.war

  14. Start the Insight Orchestration service
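Step 8's zip-structure pitfall can also be checked programmatically. The sketch below is a hypothetical helper of mine, not part of the product: feed it the entry names you get from java.util.zip.ZipFile's entries() method and it flags an archive that Windows wrapped in a stray top-level folder.

```java
import java.util.List;

// Sanity check for the re-zipped portal archive: if any entry sits under a
// top-level "hpio-portal/" folder, the zip was created around the folder
// itself (the Windows XP default) and must be repacked from its contents.
public class WarCheck {
    public static boolean hasStrayTopFolder(List<String> entryNames) {
        for (String name : entryNames) {
            if (name.startsWith("hpio-portal/")) {
                return true; // repack needed before renaming to .war
            }
        }
        return false;
    }
}
```

If this returns true for your myportal.war, re-zip from inside the hpio-portal folder and check again.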

Updated Portal

Here's my updated login screen to the self service portal, and a zoom in on the updated window bar after login is complete.


ioconfig Command

The ioconfig command in the C:\Program Files\HP\Insight Orchestration\bin folder is a useful utility that lets you switch between skins; for example, ioconfig --skin insight will change back to the original skin. See ioconfig --help for more information on this command and its options.

Send your examples

I hope this quick overview is helpful to getting you started. Send me examples of your self-service portal customizations!

HP's Technology Contribution to the DMTF Cloud Incubator

I attended the DMTF Cloud Incubator last week, and had an opportunity to present to the assembled group about the HP technology submission to the DMTF I had made a few days before. I'd like to use this blog as an opportunity to talk about this publicly.

DMTF Use Cases

The use cases being considered by the DMTF Cloud Incubator are shown in Figure 1 below, on which I have overlaid the scope of the submitted API. The focus of the technology submission and my presentation covered APIs in the use case areas of templates and instances, with a brief mention of chargeback. In the HP products I work on, the capabilities associated with administration and user management (some of which are listed in the other use case areas in Figure 1) are typically exposed via the product administrator portal, and the underlying APIs weren't exposed as part of the submission.

Infrastructure Needs More than just VMs

Many of the published cloud APIs to date have been, at their heart, focused on offering virtual machines as a service, which some have commented is overly restrictive. Moreover, many cloud providers are based on a single hypervisor technology. What this means is that if you are using the cloud for overflow capacity, the VMs you create in house may need some transformation to run in the external cloud. Or even worse, if you have two cloud providers, you may need a different flavor of VM for each. The DMTF OVF standard isn't a panacea here, given that the virtual machine payload is vendor specific; if there are n vendors in the marketplace, someone has to write n(n-1) translators. To date, not all hypervisor vendors even support OVF. The offering I described to the DMTF, and the APIs we provided, describe infrastructure services that may contain VMs from different vendors (currently both VMware ESX and Microsoft Hyper-V are supported), as well as servers not running hypervisors at all.

Three different types of Cloud offering are often described:

  1. Infrastructure as a Service (IaaS)
    This is the focus of the DMTF Cloud incubator.

  2. Platform as a Service (PaaS) – think Google Apps

  3. Software as a Service (SaaS) – salesforce.com or Gmail

Supporting hypervisor-less configurations is important if you are going to be able to provision a wide range of PaaS or SaaS offerings on top of your IaaS infrastructure. For example, perhaps in your cloud you want to install a database service that will be used by multiple business applications, or maybe an email service with a heavily referenced database component. In these cases, being able to specify and consume one or more 8-core servers with high storage I/O bandwidth would be appropriate. The API I discussed goes beyond just virtual and bare-metal servers. It lets you define a service that specifies requirements for servers (physical or virtual), storage (whether, for example, it's a vmdk or a dual-path SAN LUN to a RAID5 volume), network, or software; publish that definition in a catalog; and then perform lifecycle operations on those catalog definitions.

Enabling a self-scaling service


Another key element is the ability to define a homogeneous group of servers. If you have a service that uses (for example) four web servers, you can define them as a group: defining the pattern for host naming, the designed range of servers in the group (e.g., 1-6), the initial number to deploy (e.g., 4), and the common characteristics (e.g., 2 cores, 8 GB of memory, physical or virtual). The API includes calls that allow you to expand the number of servers in the group, and also to set the number of active servers. So, for example, at night you may want to run with only 2 servers, but during heavy periods have 6 active. One deployment benefit of this server group concept is that it lets you easily optimize storage through the use of thin provisioning capabilities, or with virtual machine features like "linked clones". Last year at an HP TSG Industry Analyst conference, during the session "HP Adaptive Infrastructure: Delivering a Business Ready Infrastructure", I demonstrated the use of the HP product SiteScope to monitor the response time of a web site and then, based on that response time, call the server group API to scale up or down the number of active servers, providing an "autonomic response" to demand. You could also imagine a simpler scheme that allocates capacity to a service using a calendar-based profile to drive this same API.
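A calendar-driven profile like the one suggested above could be as simple as a lookup from hour of day to desired active-server count. This is a sketch under stated assumptions: the thresholds are made up, and whatever group API call you would wire the result into (to set the active-server count) is a hypothetical stand-in, not a documented interface.

```java
// Hypothetical calendar-based scaling profile for a server group designed
// for 1-6 servers, per the example in the text. Only the schedule logic is
// shown; the group API call it would drive is a made-up stand-in.
public class CalendarScaler {
    // Return the desired number of active servers for a given hour (0-23):
    // run lean overnight, full strength during the busy daytime window.
    public static int desiredActiveServers(int hour) {
        if (hour >= 22 || hour < 6) {
            return 2;   // nights: minimum footprint
        } else if (hour >= 9 && hour < 18) {
            return 6;   // peak business hours: everything active
        }
        return 4;       // shoulder periods: the initial deployment size
    }
}
```

A scheduler would simply evaluate this each hour and invoke the set-active-servers call whenever the desired count changes.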

Normative Definitions

The API I presented isn't REST based, although one could easily map the web service definition I described to REST if that was important. The reason I focused on web services was simple: looking around the industry, many of the cloud API offerings are accompanied by downloadable libraries for Perl, Python, Java, Ruby, C#, etc., that implement the particular REST interface. Good luck on the dependencies interacting with your application! The benefit of a standards-based web service implementation is that the interface is normatively described in WSDL, and there are many widely used open source and for-fee libraries that will happily consume this WSDL and produce the desired language binding for you. The security is WS-Security based, which has been well analyzed and is a known quantity to the security groups in many organizations. I know I will attract comments, but I think REST needs the equivalent of WSDL (RDL?) to stop the hand coding of client and server language bindings. We need some normative way to publish an industry standard interface that doesn't depend on a hand-coded client library written in a particular language.

Room for more work

We still have some way to go towards creating a truly interoperable interface to the cloud. I think the APIs I discussed address some important areas that we need in that final standard. The interfaces described are functional but by no means perfect, and I look forward to comments.


Designing infrastructure the way IT wants to work

We've been on the Adaptive Infrastructure journey at HP for several years now.  This week we are announcing an important milestone: BladeSystem Matrix.  We've been really thinking a lot about how customers use IT and ways we can optimize IT infrastructure to make it work better for them.  We recognize that infrastructure exists for applications, which exist for the business.  So we've taken a business and application perspective on how an infrastructure ought to operate.

Deploying an application typically requires an IT architect or team of architects to carefully design the entire infrastructure - servers, storage, network, virtual machines - and then hand off the design to a team of people to deploy, which typically takes several weeks.  This length of time is mostly an artifact of the way IT infrastructure is designed.  So we decided to change this with BladeSystem Matrix.  Now an architectural design is saved out as a template - servers, storage, virtual machines, network, server software image.  Then when it is time to provision an application, it's as easy as saying "make it so" - and in a matter of minutes, the Matrix's converged virtualized infrastructure is automatically configured and the application is ready to run.  In other words, the way it ought to be.

BladeSystem Matrix is the culmination of several years work at HP - creating an Adaptive Infrastructure that is simpler to buy, deploy and keep running optimally.  Applications are easier to provision, maintain, and migrate.  We've spent years proving out this architecture, not just in our labs but in real-world environments, with BladeSystem, Virtual Connect, and Insight Software - so we could learn how IT really operates - and more importantly - how it ought to operate.

Some people tell me Matrix's virtualization sounds sort of like a mainframe.  Others say that the portal interface reminds them of cloud IT.  I guess in a way they are all correct.  But unlike those environments, Matrix will run off-the-shelf x86 applications.  So I've decided that Matrix is its own thing.

How would you build a cloud starter kit?

Sometimes our blade ambassadors pass questions and ideas around our virtual watercooler.  Today's most interesting one was:

"If you were asked to give a basic spec with a BladeSystem for a large cloud-type, general-purpose computing platform using VMware (up to 100 chassis) for a large corporation, how would you build it and why?"

The question was pretty broad, and for a real cloud implementation we'd defer to the really cool work going on in HP Labs on cloud computing.  But here were some of the initial thoughts that came back.

  1. Multiple workloads such as VMs are often host-memory bound.  Therefore, the bigger the memory the better.  The ProLiant BL495c is especially well suited to this task with multi-core procs and huge memory capacity.

  2. What about the network connections?  When used in combination with Virtual Connect Flex-10, throughput is extremely high (10Gb per LOM), very configurable, and flexible without a bunch of coordination, and you'll realize 4-to-1 consolidation of the network equipment.  Add Virtual Connect-FC and some shared HP SAN storage and you'd add more flexibility and cut about half of the Fibre Channel equipment costs.

  3. Bring it all together with Insight Dynamics and you get a smart capacity planner and orchestrator layer for both the VMs and the apps on physical servers.

These three alone would be a great cloud-starter kit that any business would envy.


Tell us how you'd do it.

