Eye on Blades Blog: Trends in Infrastructure
Get HP BladeSystem news, upcoming event information, technology trends, and product information to stay up to date with what is happening in the world of blades.

Integrating BladeSystem Matrix into a Chargeback or Billing system

I got a call last week enquiring how the IaaS APIs of BladeSystem Matrix (Matrix) could be used to integrate with a chargeback or billing system at a customer site. It's a snowy day in Boulder, and being a fair-weather skier I thought I would spend a few moments putting together some examples of how you could do this.

How Matrix calculates a service cost

Matrix service templates are listed in the template catalog, which shows their name, a description of the service template, and an associated service cost. This cost is calculated by adding together the individual costs of each of the service elements in the template. For example, in a service template the service designer specifies costs for each class of server, for each GB of a class of storage, and for each IP address consumed on a class of subnet. The cost of the service is calculated by combining these unit costs with the amount of each type of resource consumed to create a total. The template catalog shows the cost to deploy the template. However, once the service is deployed, the user can choose to add additional storage, or perhaps choose to temporarily release (suspend) a server. When the user adds additional storage, their service cost increases based on the template unit cost per GB of storage. Similarly, when the user chooses to temporarily suspend a server, their service cost reduces, reflecting that they have reduced their resource consumption. I'm showing an example of the cost breakout chart in the Matrix template designer tool.
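To make the arithmetic concrete, here is a tiny shell sketch of that calculation. The unit costs and quantities are invented for illustration; real values come from the service template.

```shell
#!/bin/bash
# Hypothetical unit costs, as a service designer might specify in a template
server_cost=100     # per server of this class
storage_cost=2      # per GB of this storage class
ip_cost=1           # per IP address on this subnet class

# Resources consumed by the deployed service (example quantities)
servers=4
storage_gb=500
ips=6

# Total service cost = sum of (unit cost * amount of resource consumed)
total=$(( servers * server_cost + storage_gb * storage_cost + ips * ip_cost ))
echo "service cost: $total"
```

Suspending a server simply drops the server count by one, and the total falls accordingly; adding storage raises it by the per-GB unit cost.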

Linking to a charge back or billing system

The ListServices web service call can be used by an administrative user to return summary information about the services deployed in Matrix. The web service return includes information on the current resource consumption cost of each service. Let's assume the IaaS provider wants to charge their customers back in 15-minute usage increments. They could use a single CRON job on their billing system to fetch usage information every 15 minutes, as shown in figure 2 below.
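The 15-minute schedule itself is just a crontab entry; something like the following, where the script path is hypothetical:

```shell
# crontab entry: run the chargeback script every 15 minutes
# (minute hour day-of-month month day-of-week command)
*/15 * * * * /opt/billing/chargeback.sh
```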

The content of the CRON job is shown in figure 3. Matrix 6.0 includes a handy CLI wrapper which I am going to use in this example. The wrapper is written in Java, so I can run it on any machine and use the underlying web services to connect to the Matrix environment. In my example I copied the ioexec.jar file from the Program Files/HP/Insight Orchestration/cli directory to my Linux machine. You could also use your favorite wsdl2{java,perl,python,c,.net} tool or the wsdl import feature in Operations Orchestration to create something similar.

Here is my outline of the bash script:

#!/bin/bash
# sample chargeback cron job
# Cron runs this script every 15 minutes

# charge_owner: used to apply an incremental charge to the owner's account
# Inputs: service_name owner cost cost_units
# Returns: !=0 if owner has no more credit
function charge_owner {
    echo service "$1" owner "$2" cost "$3" "$4"
    # insert commands to charge the customer here!
    return 0
}

# credit_denied: called when the owner has no more credit on a service
# Inputs: service_name owner
function credit_denied {
    echo suspend service "$1" of owner "$2"
    # Insert commands to handle overdrawn customers here, e.g.:
    # ioexec deactivate service -s "$1" -c chargeback.conf
    return 0
}

# process_chargeback: parses listServices output, invoking charge_owner and
# credit_denied to perform the chargeback processing
function process_chargeback {
    while read -r LINE; do
        # assumes raw output lines of the form field=value
        FIELD="${LINE%%=*}"
        ARG="${LINE#*=}"
        case "$FIELD" in
            name)       service="$ARG";;
            cost.value) cost="$ARG";;
            cost.units) units="$ARG";;
            ownerName)  owner="$ARG"
                        charge_owner "$service" "$owner" "$cost" "$units" ||
                            credit_denied "$service" "$owner";;
        esac
    done
}

ioexec list services -o raw -c chargeback.conf | process_chargeback

The script uses the ioexec wrapper to invoke the listServices web service call. I then pipe the results to process_chargeback to parse them, extracting the service name, current charge rate and charge units, and service owner. The information is passed to the chargeback system via two functions, charge_owner and credit_denied. The sample code has a stubbed version of charge_owner, which takes the service name, charge rate, charge units and owner arguments and simply echoes them. This routine could be extended to insert the usage information into a database or pass it directly to a chargeback system. If the routine returns a non-zero result (indicating an error), then the credit_denied routine is called. This is another stub which, for now, just echoes the name of the owner and the associated service. This could be extended, as shown, to do other operations - such as invoke the deactivateService web service call to shut down the service when the user has no more credit.
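As one illustration of extending the charge_owner stub, here is a minimal sketch that appends each 15-minute sample to a CSV usage log and denies credit once a per-owner spending cap is exceeded. The log file name and credit cap are hypothetical; a real integration would talk to the billing database instead.

```shell
#!/bin/bash
# Hypothetical extension of the charge_owner stub: log each usage sample
# to a CSV file and enforce a simple per-owner credit cap.
USAGE_LOG="${USAGE_LOG:-usage.csv}"   # hypothetical usage log
CREDIT_CAP="${CREDIT_CAP:-1000}"      # hypothetical cap, in cost units

# charge_owner: record one usage sample for an owner's service
# Inputs: service_name owner cost cost_units
# Returns: !=0 once the owner's accumulated cost exceeds the cap
function charge_owner {
    local service="$1" owner="$2" cost="$3" units="$4"
    echo "$(date +%s),$service,$owner,$cost,$units" >> "$USAGE_LOG"
    # sum this owner's accumulated cost from the log (owner is column 3)
    local total
    total=$(awk -F, -v o="$owner" '$3==o {s+=$4} END {print s+0}' "$USAGE_LOG")
    # non-zero return signals "no more credit" to the calling script
    awk -v t="$total" -v c="$CREDIT_CAP" 'BEGIN {exit !(t > c)}' && return 1
    return 0
}
```

Because it keeps the same name and argument order as the stub, it drops straight into the sample script above; credit_denied then fires on the first sample that pushes the owner over the cap.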

More Complex Scenarios

The example I've given is very simple, but hopefully it is enough to get people started on their own integrations. Matrix has additional integration points that can trigger workflows to perform additional actions. An example of one of these triggers is the "Approval" workflow that is used to gate requests for new resource allocation in Matrix. This trigger point could be used to do a credit check on a customer prior to proceeding with a resource deployment operation.

I'd love feedback about the charge back or billing tools people use today, and what kind of out-of-the-box integrations would be most useful.

HP's Technology Contribution to the DMTF Cloud Incubator

I attended the DMTF Cloud incubator last week, and had an opportunity to present to the assembled group about the HP technology submission to the DMTF I had made a few days before. I'd like to use this blog as an opportunity to talk about this publicly.

DMTF Use Cases

The use cases being considered by the DMTF cloud incubator are shown in Figure 1 below, on which I have overlaid the scope of the submitted API. The technology submission and my presentation focused on APIs in the use case areas of templates and instances, with a brief mention of chargeback. In the HP products I work on, the capabilities associated with administration and user management (some of which are listed in the other use case areas in Figure 1) are typically exposed via the product administrator portal, and the underlying APIs weren't exposed as part of the submission.

Infrastructure Needs More than just VMs

Many of the published cloud APIs to date have been, at their heart, focused on offering virtual machines as a service, which some have criticized as overly restrictive. Moreover, many cloud providers are based on a single hypervisor technology. This means that if you are using the cloud for overflow capacity, the VMs you create in house may need some transformation to run in the external cloud. Or even worse, if you have two cloud providers, you may need a different flavor of VM for each. The DMTF OVF standard isn't a panacea here, given that the virtual machine payload is vendor specific: if there are n vendors in the marketplace, someone has to write n(n-1) translators. To date, not all hypervisor vendors even support OVF. The offering I described to the DMTF, and the APIs we provided, describe infrastructure services that may contain VMs from different vendors (currently both VMware ESX and Microsoft Hyper-V are supported), as well as servers not running hypervisors at all.

Three different types of Cloud offering are often described:

  1. Infrastructure as a Service (IaaS)
    This is the focus of the DMTF Cloud incubator.

  2. Platform as a Service (PaaS) – think Google Apps

  3. Software as a Service (SaaS) – think salesforce.com or Gmail

Supporting hypervisor-less configurations is important if you are going to be able to provision a wide range of PaaS or SaaS offerings on top of your IaaS infrastructure. For example, perhaps in your cloud you want to install a database service that will be used by multiple business applications, or maybe an email service with a heavily used database component. In these cases, being able to specify and consume one or more 8-core servers with high storage I/O bandwidth would be appropriate. The API I discussed goes beyond just virtual and bare-metal servers. It lets you define a service that specifies requirements for servers (physical or virtual), storage (whether, for example, it's a vmdk or a dual-path SAN LUN on a RAID5 volume), network, and software; publish that definition in a catalog; and then perform lifecycle operations on those catalog definitions.

Enabling a self-scaling service


Another key element is the ability to define a homogeneous group of servers. If you have a service that uses (for example) 4 web servers, you can define them as a group: specifying the pattern for host naming, the design range of servers in the group (e.g. 1-6), the initial number to deploy (e.g. 4), and the common characteristics (e.g. 2 cores, 8 GB of memory, physical or virtual). The API includes calls that allow you to expand the number of servers in the group, and also to set the number of active servers. So, for example, at night you may want to run with only 2 servers, but during heavy periods have 6 active. One deployment benefit of this server group concept is that it lets you easily optimize storage through the use of thin provisioning capabilities or, with virtual machines, features like "linked clones". Last year at an HP TSG Industry Analyst conference, during the session "HP Adaptive Infrastructure: Delivering a Business Ready Infrastructure", I demonstrated using the HP SiteScope product to monitor the response time of a web site and then, based on that response time, call the server group API to scale the number of active servers up or down, providing an "autonomic response" to demand. You could also imagine a simpler scheme that allocates capacity to a service using a calendar-based profile to drive this same API.
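The calendar-based idea can be sketched in a few lines of shell. The set_active_servers function here is a hypothetical stand-in for whatever server group API binding you generate from the WSDL, not a real CLI call; the hours and counts are invented for the example.

```shell
#!/bin/bash
# Calendar-driven scaling policy sketch: run 2 active servers overnight,
# 6 during business hours. set_active_servers is a hypothetical stand-in
# for a real server group API call.
function set_active_servers {
    echo "set group $1 to $2 active servers"
}

# desired_count: map the hour of day (00-23) to an active-server count
function desired_count {
    local hour=${1#0}   # strip a leading zero so 08 reads as 8
    if [ "$hour" -ge 8 ] && [ "$hour" -lt 20 ]; then
        echo 6   # heavy daytime load
    else
        echo 2   # overnight
    fi
}

# run from cron each hour: pick the count for the current hour and apply it
set_active_servers webgroup "$(desired_count "$(date +%H)")"
```

The demand-driven variant I demonstrated replaces desired_count with a monitored response-time threshold, but drives exactly the same API.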

Normative Definitions

The API I presented isn't REST based, although one could easily map the web service definition I described to REST if that were important. The reason I focused on web services was simple: looking around the industry, many of the cloud API offerings are accompanied by downloadable libraries for Perl, Python, Java, Ruby, C# and so on that implement the particular REST interface. Good luck with the dependencies interacting with your application! The benefit of a standards-based web service implementation is that the interface is normatively described in WSDL, and there are many widely used open source and for-fee libraries that will happily consume this WSDL and produce the desired language binding for you. The security is WS-Security based, which has been well analyzed and is a known quantity to the security groups in many organizations. I know I will attract comments, but I think REST needs the equivalent of WSDL (RDL?) to stop the hand coding of client and server language bindings. We need some normative way to publish an industry standard interface that doesn't depend on a hand-coded client library written in a particular language.

Room for more work

We still have some way to go towards creating a truly interoperable interface to the cloud. I think the APIs I discussed address some important areas that I believe we need in that final standard. The interfaces described are functional but by no means perfect, and I look forward to your comments.


PODs and Hovercraft

Outside of my window, they're pouring the cement on the bays where future HP PODs will be tested.  Here are some pics.

A POD (Performance-Optimized Data center) is a 40-foot container filled with servers, storage, and other IT gear.   HP can build & ship one of these in just 6 weeks.

The large black rectangle in the middle of the first two pics is where PODs will get tested, once they're filled with gear.  Each yellow U-shaped bar will be the end of a 'bay' that supports a container.  So up to 5 will fit into this structure at a time.

Near one edge of the floor, you can see a series of metal pipes sticking up in the ground.  These provide connections for power and chilled water from the two buildings you can see bordering the bays. 

I'm still waiting for my turn to drive the little hovercraft machine they're using to smooth the concrete:

Here's a video of Steve Cumings giving a tour inside a POD.

Applications Matter - What Affects Server Power Consumption: Part 2

How does the application you are using, and what it is doing, affect the power consumption of a system?

The first thing that everyone looks at when talking about power consumption is CPU utilization. Unfortunately, CPU utilization is not a good proxy for power consumption, and the reason why goes right down to the instruction level. Modern CPUs like the Intel Nehalem and AMD Istanbul processors have hundreds of millions of transistors on the die. What really drives power consumption is how many of those transistors are actually active. At the most basic level, an instruction activates a number of transistors on the CPU; depending on what the instruction is actually doing, a different number of transistors will be activated. A simple register add, for example, might add the integer values in two registers and place the result in a third register. A relatively small number of transistors will be active during this sequence. The opposite would be a complex instruction that streams data from memory to the cache and feeds it to the floating-point unit, activating millions of transistors simultaneously.

Further to this, modern CPU architectures allow some instruction-level parallelism, so you can, if the code sequence supports it, run multiple operations simultaneously. Then on top of that we have multiple threads and multiple cores. So depending on how the code is written, you can have a single linear sequence of instructions running, or multiple parallel streams running on multiple ALUs and FPUs in the processor simultaneously.

Add to that the fact that in modern CPUs the power load drops dramatically when the CPU is not actively working: idle circuitry in the CPU is placed in sleep modes, put on standby, or switched off to reduce power consumption. So if you're not running any floating-point code, for example, huge numbers of transistors are inactive and not consuming much power.

This means that application power utilization varies depending on what the application is actually doing and how it is written. Therefore, depending on the application you run, you will see massively different power consumption even if they all report 100% CPU utilization. You can even see differences running the same benchmark depending on which compiler is used, whether the benchmark was optimized for a specific platform, and the exact instruction sequence that is run.

The data in the graph below shows the relative power consumption of an HP BladeSystem c7000 enclosure with 32 BL2x220c servers. We ran a bunch of applications and also had a couple of customers with the same configuration who were able to give us power measurements off their enclosures. One key thing to note is that the CPU was pegged at 100% for all of these tests (except the idle measurement, obviously).

As you can see there is a significant difference between idle and the highest power application, Linpack running across 8 cores in each blade.  Another point to look at is that two customer applications, Rendering and Monte Carlo, don't get anywhere close to the Prime95 and Linpack benchmarks in terms of power consumption.

It is therefore impossible to state the power consumption of server X and compare it to server Y unless they are both running the same application under the same conditions. This is why both SPEC and the TPC have been developing power consumption benchmarks that look at both the workload and the power consumption to give a comparable value between different systems.

SPEC in fact just added power consumption metrics to the new SPECweb2009, and interestingly enough the two results that are up there have the same performance-per-watt number, but they have wildly different configurations, absolute performance numbers, and absolute wattage numbers. So there's more to performance per watt than meets the eye.
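To see how that can happen, consider two hypothetical systems (numbers invented, not the actual SPECweb2009 submissions): performance per watt is a ratio, so very different machines can land on the same value.

```shell
#!/bin/bash
# Two hypothetical results with identical performance per watt
# but very different absolute performance and wattage.
perf_a=1000;  watts_a=250     # small, low-power configuration
perf_b=4000;  watts_b=1000    # large, power-hungry configuration

ppw_a=$(( perf_a / watts_a ))
ppw_b=$(( perf_b / watts_b ))
echo "A: $ppw_a per watt, B: $ppw_b per watt"
```

Both come out at the same performance per watt, even though system B does four times the work and burns four times the power.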

The first part of this series was Configuration Matters.

