Eye on Blades Blog: Trends in Infrastructure
Get HP BladeSystem news, upcoming event information, technology trends, and product information to stay up to date with what is happening in the world of blades.

Customizing BladeSystem Matrix Allocation Rules Engine for Multi-tenancy Solutions

Early this week I was in a couple of Halo meeting sessions with folks in our Bangalore, India location, talking about "the next big thing". It reminded me that the last thing we worked on - exposing an extensible rules engine for allocation and placement - was part of the BladeSystem Matrix 6.0 release. I wanted to talk a little about that capability today and give an example of how it can be used in deployments involving multi-tenancy.


BladeSystem Matrix Allocation and Placement Rules

Allocation and placement has always been a key function of BladeSystem Matrix.


When multi-tier service designs (represented by templates) are submitted for instantiation, it is the allocation and placement function that looks at the requirements for the service - the individual element specifications, the desired service topology, and the lease period - and then binds these to the available resources in the environment based on their characteristics and capacity, availability calendar, and physical topology.


In BladeSystem Matrix 6.0, this allocation process can be customized through an extensible rules engine. Overall there are 18 different allocation rule sets that can be extended, as shown in Figure 1. The policy.xml file specifies which of the rule sets should be used. These are explained further in the Insight Orchestration User Guide on page 48.


Figure 1. Extensible rule sets




 


Multi-tenancy Example

A very common use case I hear from customers is the desire to have a common design for a service, but with some aspects of the resource binding determined by the identity of the service owner.


In this scenario, we consider a provider who is servicing two competitors - say, Marriott and Hilton hotels - but wants to offer a common service template in the catalog. The desire is that when Marriott deploys a new instance of the service, that service instance connects to the Marriott-Corporate network segment. However, if Hilton deploys the service, then their service instance connects to the Hilton-Corporate network segment.




Figure 2. Pre-configured networks for the two competing corporations




Setting up your Service Template

Here we show a portion of a simple single-server template as an illustrative example. This is a multi-homed server with:



  • a connection to the corporate network, named "@corporate". Later on, in the rules engine, we will look for the "@" sign in the name to trigger special rules processing.

  • a connection to an internal network private to the service, named "net1".



Figure 3. Sample multi-tenancy configuration




Adding the Processing Rule


The rules engine is based on Drools. The rules are expressed in Java with a Drools rule semantic wrapper; I'll give you a boilerplate wrapper to get you started below. The rule and the Java function are appended to the SubnetCheck.drl file. I'm going to show a very simple example, but I can imagine that the creative community will quickly come up with more sophisticated implementations. In Figure 4, I show a simple rule. The rules processing is invoked to refine the candidate networks for allocation to the new service instance. The rule runs for each network (LogicalNetwork) specified in the template, and for each candidate network in the environment. The purpose of the rule processing is to discard candidates that "don't fit".


This snippet extracts the subnet specification from the template (the $logicalSubnet) and each candidate network ($subnet) from the candidate list ($resLst), along with the execution context ($pVO). It invokes the function customerSpecificSubnetCriteriaCheck to perform the actual processing.


rule "CustomerSpecificSubnetCriteria"
    when
        $pVO : PolicyExecutionVO();
        $resLst : List();
        $logicalSubnet : LogicalSubnet();
        $subnet : Subnet() from $resLst;
        eval(customerSpecificSubnetCriteriaCheck($logicalSubnet, $subnet, $pVO));
    then
        // match processing is embedded in customerSpecificSubnetCriteriaCheck
        // $pVO.match($subnet, HPIOMessage.get(HPIOBundleKey.ALLOCATION_CRITERIA_CUSTOM, "CustomerSpecificSubnetCriteriaCheck succeeded"));
end


Figure 4. Boilerplate rule example


The function code is placed in the .drl file after the rule statement. Here is the snippet:


function boolean customerSpecificSubnetCriteriaCheck(
                                         LogicalSubnet logicalSubnet,
                                         Subnet subnet,
                                         PolicyExecutionVO pVO) {

       AllocationEntry ae = pVO.getAllocationEntry();

       InfrastructureService service = ae.getInfrastructureService();

       String serviceName = service.getName();
       String owner = service.getOwner();
       owner = owner.substring(owner.lastIndexOf("\\") + 1); // strip Windows domain
       String lsName = logicalSubnet.getName();
       String psName = subnet.getName();

       System.out.println("Service: " + serviceName + " Owner: " + owner);
       System.out.println("LogicalSubnet: " + lsName + " Physical Net: " + psName);

       boolean match;

       if (lsName.startsWith("@")) {
              String key = lsName.substring(1); // strip off @
              // Match @key to networks with Id "owner-key"
              match = psName.equalsIgnoreCase(owner + "-" + key);
       } else {
              // regular network. Could include additional security checks here.
              match = true;
       }
       if (match) {
              pVO.match(subnet, HPIOMessage.get(HPIOBundleKey.ALLOCATION_CRITERIA_CUSTOM,
                            "CustomerSpecificSubnetCriteriaCheck succeeded"));
       } else {
              pVO.doesNotMatch(subnet, HPIOMessage.get(HPIOBundleKey.ALLOCATION_CRITERIA_CUSTOM,
                            "Could not find customer specific subnet"));
       }
       System.out.println("MATCH=" + match);
       return match;
}


Figure 5. Rule processing example


The function starts by getting information on the InfrastructureService being provisioned. This contains details of the entire template being provisioned and can be used for additional context-aware processing. From this object we extract the service owner name (stripping off the Windows domain), as well as the name of the service. It is also possible to extract information such as the "notes" specified for the service, where additional information may be encoded by the requestor. From the LogicalNetwork object we extract the name (i.e. "@Corporate" or "net1") into lsName. Similarly we extract the physical network name into psName.


I've included some debug lines using System.out.println. These show up in C:\Program Files\HP\Insight Orchestration\logs\hpio.log.


The purpose of this code is to return false if the physical network is not a match candidate for the LogicalNetwork specified in the template, and true otherwise. The rules processing logic requires that if the rule allows an element to be a selection candidate, then the function pVO.match() must be invoked for that element. If the element is to be eliminated from consideration, then pVO.doesNotMatch() must be invoked, listing a reason for the exclusion. As a matter of coding style, you can either include the calls to both these routines in your custom function, or you can include only the pVO.doesNotMatch() call in the function and put the pVO.match() invocation in the body of the rule.


For logical networks not beginning with "@" we just want to return true and let the normal selection rules apply. For networks beginning with "@" we will be more selective, excluding candidates unless they match a specific pattern. For a logical network specified in the template with a name of the form "@key", we want it to match against physical networks named "owner-key", where owner is the id of the requesting user. The logic looks for an lsName beginning with "@" and then strips off the "@" to create the key. We then test the physical network name to see if it matches the owner-key pattern.
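To make the matching intent concrete, here is the same name-matching logic as a standalone Python sketch. The function name and the example owner/network names are mine for illustration; they are not part of the Matrix API:

```python
def subnet_matches(logical_name, physical_name, owner):
    """Mirror of the rule logic: '@key' logical networks only match
    physical networks named 'owner-key'; all others match anything."""
    # Strip a Windows domain prefix such as "CORP\\Marriott", if present
    owner = owner.rsplit("\\", 1)[-1]
    if logical_name.startswith("@"):
        key = logical_name[1:]  # strip off the leading "@"
        # Case-insensitive compare, like equalsIgnoreCase in the Java version
        return physical_name.lower() == f"{owner}-{key}".lower()
    # Regular network: let the normal selection rules apply
    return True

# Illustrative examples
subnet_matches("@Corporate", "Marriott-Corporate", "CORP\\Marriott")  # True
subnet_matches("@Corporate", "Hilton-Corporate", "Marriott")          # False
subnet_matches("net1", "AnyNetwork", "Marriott")                      # True
```

The Drools function in Figure 5 does exactly this test, plus the required pVO.match()/pVO.doesNotMatch() bookkeeping.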


Configuring the Code


To configure the use of the rules processing, edit C:\Program Files\HP\Insight Orchestration\conf\policy\policy.xml as shown in Figure 6. Once you have updated the policy.xml file, you will need to restart the Insight Orchestration service.


<policy enabled="true" name="SubnetPolicyCheck.applyFitting">
    <policy-rule-file>SubnetCheck.drl</policy-rule-file>
    <policy-class-name>policy-class-name</policy-class-name>
</policy>


 Figure 6. Configuring rules processing


Provisioning the Service



Now we are ready to deploy the service. Logging on as user Marriott, I create the service using the template shown earlier in Figure 3. Once the provisioning completes, I can look at the service details page for more information about the service. Select the network named "@Corporate" and then click on the resource details tab. From there I see that the network has indeed been mapped to the Marriott-Corporate network by the custom allocation rules processing.



 


Figure 7. Provisioned service details




Conclusion


The rules-based processing capabilities in BladeSystem Matrix enable simple realization of customized resource allocation that can be used to simplify and extend Matrix template deployment. I hope this example helps others to quickly understand the capabilities enabled through this powerful engine and gives a "quick start" to writing your own custom rules. If you have cool examples of rule extensions you have implemented, I'd be interested in hearing about them.


Thanks to Manjunatha Chinnaswamynaika for helping me to create this example.


Happy coding!


 

Integrating BladeSystem Matrix into a Chargeback or Billing system

I got a call last week enquiring how the IaaS APIs of BladeSystem Matrix (Matrix) could be used to integrate with a chargeback or billing system at a customer site. It's a snowy day in Boulder, and being a fair-weather skier I thought I would spend a few moments putting together some examples of how you could do this.


How Matrix calculates a service cost


Matrix service templates are listed in the template catalog, which shows their name, a description of the service template, and an associated service cost. This cost is calculated by adding together the individual costs of each of the service elements in the template. For example, in a service template the service designer specifies costs for each class of server, for each GB of a class of storage, and for each IP address consumed on a class of subnet. The cost of the service is calculated by combining these unit costs with the amount of each type of resource consumed to create a total.

The template catalog shows the cost to deploy the template. However, once the service is deployed, the user can choose to add additional storage, or perhaps choose to temporarily release (suspend) a server. When the user adds additional storage, their service cost will increase based on the template unit cost per GB of storage. Similarly, when the user chooses to temporarily suspend a server, their service cost reduces, reflecting that they have reduced their resource consumption. I'm showing an example of the cost breakout chart in the Matrix template designer tool.
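To make the arithmetic concrete, here is a sketch of that cost roll-up. The unit costs, quantities, and function are invented for illustration; the real calculation happens inside Matrix:

```python
# Hypothetical unit costs a service designer might assign in a template
UNIT_COSTS = {
    "server": 100.0,  # per server of a given class
    "storage": 2.0,   # per GB of a given storage class
    "ip": 1.0,        # per IP address on a given subnet class
}

def service_cost(servers, storage_gb, ips):
    """Total service cost = sum of (unit cost x quantity) over all elements."""
    return (servers * UNIT_COSTS["server"]
            + storage_gb * UNIT_COSTS["storage"]
            + ips * UNIT_COSTS["ip"])

base = service_cost(servers=2, storage_gb=50, ips=4)       # 304.0
grown = service_cost(servers=2, storage_gb=80, ips=4)      # adding 30 GB: 364.0
suspended = service_cost(servers=1, storage_gb=80, ips=4)  # one server suspended: 264.0
```

Adding storage raises the running cost by the per-GB rate, and suspending a server subtracts that server's unit cost, mirroring the behavior described above.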



Linking to a charge back or billing system


The ListServices web service call can be used by an administrative user to return summary information about the services deployed in Matrix. The web service response includes the current resource consumption cost of each service. Let's assume the IaaS provider wants to charge back to their customers in 15-minute usage increments. They could use a single cron job on their billing system to fetch usage information every 15 minutes, as shown in Figure 2 below.



The content of the cron job is shown in Figure 3. Matrix 6.0 includes a handy CLI wrapper which I am going to use in this example. The wrapper is written in Java, so I can run it on any machine and use the underlying web services to connect to the Matrix environment. In my example I copied the ioexec.jar file from the Program Files/HP/Insight Orchestration/cli directory to my Linux machine. You could also use your favorite wsdl2{java,perl,python,c,.net} tool or the WSDL import feature in Operations Orchestration to create something similar.


Here is my outline of the bash script:


# sample chargeback cron job
# Cron runs script every 15 minutes
#
###################################################################
# charge_owner: used to apply incremental charge to owner's account
# Inputs: service_name owner cost cost_units
# Returns: !=0 if owner has no more credit
function charge_owner
{
echo service $1 owner "$2" cost $3 $4
# insert commands to charge customer here!
return 0
}
###################################################################
# credit_denied: called when owner has no more credit on service
# Inputs: service_name owner
function credit_denied
{
echo suspend service $1 of owner $2
# Insert commands to handle over drawn customers here
# ioexec deactivate service -s "$1" -c chargeback.conf
return 0
}


####################################################################
# process_chargeback
# Inputs: processes listServices output invoking charge_owner &
#         credit_denied to perform chargeback processing
function process_chargeback
{
while read -r LINE
do
    FIELD=${LINE#*services*.}
    FIELD=${FIELD%%=*}
    ARG="${LINE#*=}"
  
    case "$FIELD"
    in
         name)        service="$ARG";;
         cost.value)  cost="$ARG";;
         cost.units)  units="$ARG";;
         ownerName)   owner="$ARG"
                      # non-zero return from charge_owner means no credit left
                      if ! charge_owner "$service" "$owner" "$cost" "$units"
                      then
                          credit_denied "$service" "$owner"
                      fi;;
    esac
done


}


ioexec list services -o raw -c chargeback.conf | process_chargeback


The script uses the ioexec wrapper to invoke the listServices web service call. I then pipe the results to process_chargeback to parse them, extracting the service name, current charge rate, charge units, and service owner. The information is passed to the chargeback system via two functions, charge_owner and credit_denied. The sample code has a stubbed version of charge_owner, which takes the service name, charge rate, charge units and owner arguments and simply echoes them. This routine could be extended to insert the usage information into a database or pass it directly to a chargeback system. If the routine returns a non-zero result (indicating an error), then the credit_denied routine is called. This is another stub which, for now, just echoes the name of the owner and the associated service. It could be extended, as shown, to do other operations - such as invoking the deactivateService web service call to shut down the service when the user has no more credit.
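For integrations that prefer not to shell out, the same parsing could be done in a few lines of Python. The `services.N.field=value` line format below is inferred from the bash parsing above, and the field names and sample values are invented; verify both against the raw output of your own `ioexec list services` run:

```python
def parse_services(raw_lines):
    """Group 'services.N.field=value' lines into one dict per service index."""
    services = {}
    for line in raw_lines:
        if "=" not in line or not line.startswith("services."):
            continue
        key, _, value = line.partition("=")
        _, idx, field = key.split(".", 2)  # e.g. "services", "0", "cost.value"
        services.setdefault(idx, {})[field] = value.strip()
    return list(services.values())

# Invented sample of what the raw output might look like
raw = [
    "services.0.name=PayrollApp",
    "services.0.ownerName=Marriott",
    "services.0.cost.value=304.00",
    "services.0.cost.units=USD/month",
]
for svc in parse_services(raw):
    print(svc["name"], svc["ownerName"], svc["cost.value"], svc["cost.units"])
```

From here, each parsed record could be handed to the same charge_owner/credit_denied logic as the bash version.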


More Complex Scenarios


The example I've given is very simple, but hopefully is enough to get people started on their own integrations. Matrix has additional integration points that can trigger workflows to perform additional actions. An example of one of these triggers is the "Approval" workflow that is used to gate requests for new resource allocation in Matrix. This trigger point could be used to do a credit check on a customer prior to proceeding with a resource deployment operation.


I'd love feedback about the charge back or billing tools people use today, and what kind of out-of-the-box integrations would be most useful.

HP's Technology Contribution to the DMTF Cloud Incubator

I attended the DMTF Cloud Incubator last week, and had an opportunity to present to the assembled group about the HP technology submission to the DMTF I had made a few days before. I'd like to use this blog as an opportunity to talk about it publicly.


DMTF Use Cases


The use cases being considered by the DMTF Cloud Incubator are shown in Figure 1 below, on which I have overlaid the scope of the submitted API. The focus of the technology submission, and of my presentation, covered APIs in the use case areas of templates and instances, with a brief mention of chargeback. In the HP products I work on, the capabilities associated with administration and user management (some of which are listed in the other use case areas of Figure 1) are typically exposed via the product administrator portal, and the underlying APIs weren't exposed as part of the submission.



Infrastructure Needs More than just VMs


Many of the published cloud APIs to date have been at heart focused on offering virtual machines as a service, which some have commented is overly restrictive. Moreover, many cloud providers are based on a single hypervisor technology. What this means is that if you are using the cloud for overflow capacity, the VMs you create in house may need some transformation to run in the external cloud. Or even worse, if you have two cloud providers, you may need a different flavor of VM for each. The DMTF OVF standard isn't a panacea here: given that the virtual machine payload is vendor specific, if there are n vendors in the marketplace, someone has to write n(n-1) translators (with five hypervisor vendors, that's 5 × 4 = 20 of them). To date, not all hypervisor vendors even support OVF. The offering I described to the DMTF, and the APIs we provided, describe infrastructure services that may contain VMs from different vendors (currently both VMware ESX and Microsoft Hyper-V are supported), as well as servers not running hypervisors at all.


Three different types of Cloud offering are often described:



  1. Infrastructure as a Service (IaaS)
    This is the focus of the DMTF Cloud incubator.

  2. Platform as a Service (PaaS) – think Google Apps

  3. Software as a Service (SaaS) – salesforce.com or Gmail


Supporting hypervisor-less configurations is important if you are going to be able to provision a wide range of PaaS or SaaS offerings on top of your IaaS infrastructure. For example, perhaps in your cloud you want to install a database service that will be used by multiple business applications, or maybe an email service with a heavily used database component. In these cases, being able to specify and consume one or more 8-core servers with high storage I/O bandwidth would be appropriate. The API I discussed goes beyond just virtual and bare-metal servers. It lets you define a service that specifies requirements for servers (physical or virtual), storage (whether, for example, it's a vmdk or a dual-path SAN LUN to a RAID5 volume), network, or software; publish that definition in a catalog; and then perform lifecycle operations on those catalog definitions.


Enabling a self-scaling service


 


Another key element is the ability to define a homogeneous group of servers. If you have a service that uses, for example, 4 web servers, you can define them as a group: defining the pattern for host naming, the design range of servers in the group (e.g. 1-6), the initial number to deploy (e.g. 4), and the common characteristics (e.g. 2 cores, 8 GB of memory, physical or virtual). The API includes calls that allow you to expand the number of servers in the group and to set the number of active servers. So, for example, at night you may want to run with only 2 servers, but during heavy periods have 6 active.

One deployment benefit of this server group concept is that it lets you easily optimize storage through the use of thin provisioning capabilities or, with virtual machines, features like "linked clones". Last year at an HP TSG Industry Analyst conference, during the session "HP Adaptive Infrastructure: Delivering a Business Ready Infrastructure", I demonstrated the use of the HP product SiteScope to monitor the response time of a web site and then, based on that response time, call the server group API to scale the number of active servers up or down, providing an "autonomic response" to demand. You could also imagine a simpler scheme that allocates capacity to a service using a calendar-based profile to drive this same API.
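The calendar-driven variant mentioned at the end could be as simple as the sketch below. Here `set_active_server_count` is a hypothetical stand-in for the server-group web service call, and the hours and counts are invented for the example:

```python
import datetime

def desired_active_servers(hour, low=2, high=6):
    """Calendar profile: run 'high' servers during business hours (08:00-20:00),
    drop to 'low' overnight."""
    return high if 8 <= hour < 20 else low

def set_active_server_count(group, count):
    # Hypothetical wrapper around the server-group API call,
    # not an actual Matrix function name
    print(f"setting group {group!r} to {count} active servers")

# Run from cron; each invocation sets the group to the profile's current value
now = datetime.datetime.now()
set_active_server_count("web-tier", desired_active_servers(now.hour))
```

A response-time-driven scaler, like the SiteScope demonstration described above, would simply replace the calendar function with a check against a monitored metric.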


Normative Definitions


The API I presented isn't REST based, although one could easily map the web service definition I described to REST if that were important. The reason I focused on web services was simple: looking around the industry, many of the cloud API offerings are accompanied by downloadable libraries for Perl, Python, Java, Ruby, C#, etc. that implement the particular REST interface. Good luck with the dependencies interacting with your application! The benefit of a standards-based web service implementation is that the interface is normatively described in WSDL, and there are many widely used open source and for-fee libraries that will happily consume this WSDL and produce the desired language binding for you. The security is WS-Security based, which has been well analyzed and is a known quantity to the security groups in many organizations. I know I will attract comments, but I think REST needs the equivalent of WSDL (RDL?) to stop the hand coding of client and server language bindings. We need some normative way to publish an industry standard interface that doesn't depend on a hand-coded client library written in a particular language.


Room for more work


We still have some way to go towards creating a truly interoperable interface to the cloud. I think the APIs I discussed address some important areas that we need in that final standard. The interfaces described are functional but by no means perfect, and I look forward to comments.


 

First post this year!

Well it is February already and I am just now fulfilling one of my New Year’s resolutions – to start blogging more often.  So here I go.


 


Last week, I had the opportunity to spend a few minutes chatting with Steve Kaplan, a vice president at INX, a Cisco reseller.  Steve is also the author of the blog “By the Bell” where late last year he compared Cisco UCS to HP BladeSystem Matrix.  He and I had the chance to compare our points of view on the applicability of blades.  Needless to say, our point of view here at HP is quite different from Steve’s support of UCS.


 


Here is a summary of a couple of areas where perhaps Steve and I do not yet see eye to eye.


1. We at HP do not see UCS as comparable in functionality to BladeSystem Matrix, which we believe is in a category by itself.  Why is this?  Unlike other offerings that manage servers or VMs one at a time, Matrix uniquely allows customers to provision and manage the infrastructure of an entire application all at once – all the servers, VMs, storage, networks, and server images – through a service catalog based provisioning portal.  Further, Matrix also has built-in capacity planning tools and disaster recovery tools that are not found in UCS.


2. We believe that data center power and cooling are substantial costs and challenges for customers and warrant significant attention.  It appears to me that Cisco has largely ignored this in their UCS design.  Not mentioned in Steve’s analysis is the ability for BladeSystem to throttle the power consumption of most chassis components that consume power, including CPUs, memory, fans and power supplies, to keep infrastructure running efficiently all the time.  Also not mentioned is that UCS requires up to double the amount of data center power allocated per server compared to BladeSystem.


 


While Steve’s analysis is very detailed, he omits general descriptions of the very capabilities of BladeSystem and BladeSystem Matrix that have made BladeSystem the most popular blades platform on the planet – with over 1.6 million blades sold.  (These can be found at www.hp.com/go/bladesystem and www.hp.com/go/matrix).  Anyone interested in hearing more of what I have to say about converged infrastructure and BladeSystem can check out this Information Week article.


 


I appreciate Steve taking the time to write on blades, one of my favorite topics!  I hope the dialogue over what customers find important for their IT infrastructure continues, as this is an important topic for our industry.  Our many years in the blades business has taught us a lot, and we always look forward to the opportunity to share with customers the technologies we can bring to help them save time, reduce power and cut costs associated with managing IT infrastructure, all while becoming more efficient.

Reduce application deployment times with HP BladeSystem Matrix ISV Solutions

Joe Sullivan


Microsoft Technical Solutions Engineer - APS


 With the HP BladeSystem Matrix, you can radically change the way applications are deployed in your datacenter.  Matrix provides an integrated platform of infrastructure (server, storage, and network) resource pools which can be dynamically allocated as application services are provisioned.  This allows you to quickly adapt to the changing demands of your business and addresses the efficiency and agility challenges facing many datacenters today.



At the center of this new datacenter paradigm are the Matrix application service templates. These templates define the minimum resource requirements for an application deployment.  Matrix templates can also include embedded workflow components which allow you to automate post-OS installation tasks such as the installation and configuration of the application.  This allows you to rapidly deploy and automate  the end-to-end process of provisioning your line of business applications. 



With Matrix templates, you can significantly reduce application deployment times, from weeks and months down to hours and days.  Application templates also allow you to free up administrator time from the tedious, repetitive tasks of installing and configuring software to more innovative, revenue generating projects.  For example, deploying an Exchange 2010 template from the self-service portal requires 5 minutes of an administrator's time as compared to the 8-16 hours that deploying and configuring Exchange manually would take an administrator.  And not only do Matrix templates allow you to reduce the time to roll-out application services, they also provide a consistent, reliable, and repeatable process for application deployment. 

To assist you in designing and developing templates for your applications, HP has developed and published a number of Matrix templates supporting key Microsoft applications like Exchange, SharePoint, and SQL.  These templates provide a baseline for designing your own Matrix templates and capture recommended best practices for Microsoft application deployments.  Several of these templates also include examples of embedded workflows for automating the installation and configuration of Exchange.  These workflows provide a framework that can significantly cut down on the design and development time when building your own customized workflows.  
 


Beyond these reference templates, integration into the free HP sizing tools can help you quickly generate Matrix templates based on your specific resource requirements.  This can reduce the time to design and develop a service template for your Microsoft applications (like Exchange 2007 and Exchange 2010) to less than 1 hour.   


Check out the Microsoft application template and workflow packages available today by going to the HP BladeSystem Matrix Template Community site at  www.hp.com/go/matrixtemplates.  And keep checking back in the future as new Matrix templates are always being added.


 
