Eye on Blades Blog: Trends in Infrastructure
Get HP BladeSystem news, upcoming event information, technology trends, and product information to stay up to date with what is happening in the world of blades.

Customizing the Matrix Self Service Portal

I was playing around with BladeSystem Matrix, creating some new demo videos, and I thought: why not dig into the portal skinning features to create a custom look for my system?


The skinning feature is intended to let companies personalize the portal with their logos and product names, replacing the standard HP skin that ships with the product. In this example, my customer is the fictitious Acme Corporation.


Here are the steps I went through:



  1. Copy C:\Program Files\HP\Insight Orchestration\webapps\hpio-portal.war to my desktop

  2. Rename to hpio-portal.zip

  3. Copy hpio-portal.zip to hpio-portal.orig.zip

  4. Unzip hpio-portal.zip

  5. Browse to the hpio-portal/skins directory, and create a new folder "acme". You should end up with something like the listing below.
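
    Roughly, the skins directory should now look like this (the "insight" entry is an assumption, inferred from the default skin name used in skinConfig.xml and the ioconfig section later in this post):

    hpio-portal/skins/
        acme/              <- your new skin folder
        insight/           <- default HP skin (assumed name)
        skinConfig.xml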


  6. Copy your new images into the acme directory. The image names, formats, and recommended sizes are shown in the table below. I found there is some flexibility in the sizes of the images you create, but in general things look a little nicer if you stick to the sizes shown.


    Filename               Format   Recommended size (w x h, in pixels)
    LoginScreenLogo.png    PNG      90 x 72
    LoginScreenImage.jpg   JPEG     408 x 287
    TitlebarLogo.gif       GIF      42 x 26
    TitlebarImage.png      PNG      300 x 40


  7. Edit the skinConfig.xml file in the skins directory. Here's my updated content:


    <properties>
         <property><key>personalizeMode</key><value>insight</value></property>
         <property><key>skinName</key><value>acme</value></property>
         <property><key>brandName</key><value>Acme Corporation</value></property>
         <property><key>portalName</key><value>Developer Self Service</value></property>
         <property><key>bsaURL</key><value></value></property>
    </properties>




  8. Re-zip the hpio-portal directory.
    Once you have re-zipped it, you might want to open the archive and check that the top level of the zip file is the contents of the hpio-portal folder, not a folder called "hpio-portal". Windows XP's default zip behavior is to create that top-level folder in the archive. Compare with the contents of hpio-portal.orig.zip to make sure you get this right; otherwise your portal won't restart correctly in the later steps.
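
    If you have a command-line zip tool handy, one way to avoid the extra top-level folder is to zip from inside the directory. A minimal sketch, assuming an Info-ZIP style zip command:

    # Zip the *contents* of hpio-portal, not the folder itself,
    # so the archive root matches hpio-portal.orig.zip
    cd hpio-portal
    zip -r ../hpio-portal.zip .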




  9. Rename hpio-portal.zip to myportal.war




  10. Stop the Insight Orchestration service




  11. Rename C:\Program Files\HP\Insight Orchestration\webapps\hpio-portal.war to hpio-portal.war.orig




  12. Copy myportal.war to C:\Program Files\HP\Insight Orchestration\webapps\hpio-portal.war




  13. Delete the folder in the C:\Program Files\HP\Insight Orchestration\work directory with a name starting with Jetty and containing the name hpio.portal.war




  14. Start the Insight Orchestration service




Updated Portal


Here's my updated login screen for the self-service portal, and a zoom-in on the updated window bar after login is complete.



 


ioconfig Command


The ioconfig command in the C:\Program Files\HP\Insight Orchestration\bin folder is a useful utility that lets you switch between skins; for example, ioconfig --skin insight will change back to the original skin. See ioconfig --help for more information on this command and its options.
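
For example, assuming the skin name matches the folder created earlier:

ioconfig --skin acme       # switch to the custom Acme skin
ioconfig --skin insight    # switch back to the standard HP skin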


Send your examples


I hope this quick overview is helpful in getting you started. Send me examples of your self-service portal customizations!

Integrating BladeSystem Matrix into a Chargeback or Billing system

I got a call last week asking how the IaaS APIs of BladeSystem Matrix (Matrix) could be used to integrate with a chargeback or billing system at a customer site. It's a snowy day in Boulder, and being a fair-weather skier, I thought I'd spend a few moments putting together some examples of how you could do this.


How Matrix calculates a service cost


Matrix service templates are listed in the template catalog, which shows each template's name, a description, and an associated service cost. This cost is calculated by adding together the individual costs of each of the service elements in the template. For example, in a service template the service designer specifies costs for each class of server, for each GB of a class of storage, and for each IP address consumed on a class of subnet. The cost of the service is calculated by combining these unit costs with the amount of each type of resource consumed to produce a total. The template catalog shows the cost to deploy the template. However, once the service is deployed, the user can choose to add additional storage, or perhaps temporarily release (suspend) a server. When the user adds storage, the service cost increases based on the template's unit cost per GB of storage; similarly, when the user suspends a server, the service cost decreases, reflecting the reduced resource consumption. The Matrix template designer tool includes a cost breakout chart showing this calculation for a template.
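
To make the arithmetic concrete, here's a tiny sketch with made-up unit costs (these numbers are illustrative only, not product defaults): $2.00/hr per server, $0.01/GB/hr for storage, and $0.05/hr per IP address.

# Hypothetical unit costs, in cents/hr, for illustration only
servers=4; storage_gb=500; ips=2
total=$(( servers * 200 + storage_gb * 1 + ips * 5 ))   # 800 + 500 + 10
echo "service cost: ${total} cents/hr"                  # 1310 cents/hr ($13.10/hr)

Suspending one server would drop the rate by 200 cents/hr, matching the behavior described above.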



Linking to a charge back or billing system


The ListServices web service call can be used by an administrative user to return summary information about the services deployed in Matrix. The response includes the current resource consumption cost of each service. Let's assume the IaaS provider wants to charge their customers in 15-minute usage increments. They could use a single cron job on their billing system to fetch usage information every 15 minutes.
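
The crontab entry itself is trivial; the script path here is hypothetical:

# Run the chargeback script every 15 minutes
*/15 * * * * /opt/billing/chargeback.sh >> /var/log/chargeback.log 2>&1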



The content of the cron job is shown below. Matrix 6.0 includes a handy CLI wrapper which I am going to use in this example. The wrapper is written in Java, so I can run it on any machine and use the underlying web services to connect to the Matrix environment. In my example I copied the ioexec.jar file from the Program Files/HP/Insight Orchestration/cli directory to my Linux machine. You could also use your favorite wsdl2{java,perl,python,c,.net} tool or the WSDL import feature in Operations Orchestration to create something similar.


Here is my outline of the bash script:


#!/bin/bash
# Sample chargeback cron job
# Cron runs this script every 15 minutes
#
###################################################################
# charge_owner: used to apply incremental charge to owner's account
# Inputs: service_name owner cost cost_units
# Returns: !=0 if owner has no more credit
function charge_owner
{
echo service $1 owner "$2" cost $3 $4
# insert commands to charge customer here!
return 0
}
###################################################################
# credit_denied: called when owner has no more credit on service
# Inputs: service_name owner
function credit_denied
{
echo suspend service $1 of owner $2
# Insert commands to handle over drawn customers here
# ioexec deactivate service -s "$1" -c chargeback.conf
return 0
}


####################################################################
# process_chargeback
# Inputs: processes listServices output invoking charge_owner &
#         credit_denied to perform chargeback processing
function process_chargeback
{
while read -r LINE
do
    FIELD=${LINE#*services*.}
    FIELD=${FIELD%%=*}
    ARG="${LINE#*=}"
  
    case "$FIELD"
    in
         name)  service="$ARG";;
         cost.value)    cost="$ARG";;
         cost.units)    units="$ARG";;
         ownerName)     owner="$ARG";
                        charge_owner "$service" "$owner" "$cost" "$units"
                        if
                        then
                            credit_denied "$service" "$owner"  
                        fi;;
    esac
    :
done


}


ioexec list services -o raw -c chargeback.conf | process_chargeback
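
For reference, the parsing above assumes the raw output contains lines of roughly this shape; the field names come from the case statement, but the values and exact indexing here are made up:

services[0].name=web-service-01
services[0].cost.value=12.50
services[0].cost.units=USD per hour
services[0].ownerName=jdoe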


The script uses the ioexec wrapper to invoke the listServices web service call. I then pipe the results to process_chargeback to parse out the service name, current charge rate, charge units, and service owner. The information is passed to the chargeback system via two functions, charge_owner and credit_denied. The sample code has a stubbed version of charge_owner, which takes the service name, charge rate, charge units, and owner arguments and simply echoes them. This routine could be extended to insert the usage information into a database or pass it directly to a chargeback system. If the routine returns a non-zero result (indicating the owner has no more credit), then the credit_denied routine is called. This is another stub which, for now, just echoes the name of the owner and the associated service. It could be extended, as shown in the commented-out line, to perform other operations, such as invoking the deactivateService web service call to shut down the service when the user has no more credit.


More Complex Scenarios


The example I've given is very simple, but hopefully it's enough to get people started on their own integrations. Matrix has additional integration points that can trigger workflows to perform additional actions. An example of one of these triggers is the "Approval" workflow that is used to gate requests for new resource allocation in Matrix. This trigger point could be used to do a credit check on a customer before proceeding with a resource deployment operation.


I'd love feedback about the chargeback or billing tools people use today, and what kind of out-of-the-box integrations would be most useful.

HP's Technology Contribution to the DMTF Cloud Incubator

I attended the DMTF Cloud Incubator last week, and had an opportunity to present to the assembled group about the HP technology submission I had made to the DMTF a few days before. I'd like to use this blog as an opportunity to talk about it publicly.


DMTF Use Cases


The use cases being considered by the DMTF Cloud Incubator are shown in Figure 1, on which I have overlaid the scope of the submitted API. The technology submission and my presentation focused on APIs in the use case areas of templates and instances, with a brief mention of chargeback. In the HP products I work on, the capabilities associated with administration and user management (some of which are listed in the other use case areas of Figure 1) are typically exposed via the product administrator portal, and those underlying APIs weren't part of the submission.



Infrastructure Needs More than just VMs


Many of the published cloud APIs to date have been, at their heart, focused on offering virtual machines as a service, which some have criticized as overly restrictive. Moreover, many cloud providers are based on a single hypervisor technology. This means that if you are using the cloud for overflow capacity, the VMs you create in house may need some transformation to run in the external cloud. Or even worse, if you have two cloud providers, you may need a different flavor of VM for each. The DMTF OVF standard isn't a panacea here, given that the virtual machine payload is vendor specific: if there are n vendors in the marketplace, someone has to write n(n-1) translators. To date, not all hypervisor vendors even support OVF. The offering I described to the DMTF, and the APIs we provided, describe infrastructure services that may contain VMs from different vendors (currently both VMware ESX and Microsoft Hyper-V are supported), as well as servers not running hypervisors at all.


Three different types of Cloud offering are often described:



  1. Infrastructure as a Service (IaaS)
    This is the focus of the DMTF Cloud incubator.

  2. Platform as a Service (PaaS) – think Google App Engine

  3. Software as a Service (SaaS) – salesforce.com or Gmail


Supporting hypervisor-less configurations is important if you are going to be able to provision a wide range of PaaS or SaaS offerings on top of your IaaS infrastructure. For example, perhaps in your cloud you want to install a database service that will be used by multiple business applications, or maybe an email service with a heavily used database component. In these cases, being able to specify and consume one or more 8-core servers with high storage I/O bandwidth would be appropriate. The API I discussed goes beyond just virtual and bare-metal servers. It lets you define a service that specifies requirements for servers (physical or virtual), storage (whether, for example, it's a VMDK or a dual-path SAN LUN on a RAID 5 volume), network, or software; publish that definition in a catalog; and then perform lifecycle operations on those catalog definitions.


Enabling a self-scaling service


 


Another key element is the ability to define a homogeneous group of servers. If you have a service that uses (for example) 4 web servers, you can define them as a group: defining the pattern for host naming, the design range of servers in the group (e.g. 1-6), the initial number to deploy (e.g. 4), and the common characteristics (e.g. 2 cores, 8 GB of memory, physical or virtual). The API includes calls that let you expand the number of servers in the group, and also set the number of active servers. So, for example, at night you may want to run with only 2 servers, but during heavy periods have 6 active. One deployment benefit of this server group concept is that it lets you easily optimize storage through thin provisioning capabilities or, with virtual machines, features like "linked clones".

Last year at an HP TSG Industry Analyst conference, during the session "HP Adaptive Infrastructure: Delivering a Business Ready Infrastructure", I demonstrated using the HP SiteScope product to monitor the response time of a web site and then, based on that response time, call the server group API to scale the number of active servers up or down, providing an "autonomic response" to demand. You could also imagine a simpler scheme that allocates capacity to a service using a calendar-based profile to drive this same API, as sketched below.
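
A minimal sketch of that calendar-based idea; set_active_servers is a hypothetical stand-in for whatever invokes the server group API at your site, not a real CLI:

#!/bin/bash
# Calendar-driven scaling sketch: more active servers during business hours.
# set_active_servers is a hypothetical wrapper around the server group API.
hour=$(date +%H)
if [ "$hour" -ge 8 ] && [ "$hour" -lt 20 ]; then
    set_active_servers "web-group" 6    # daytime peak
else
    set_active_servers "web-group" 2    # overnight
fi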


Normative Definitions


The API I presented isn't REST based, although one could easily map the web service definition I described to REST if that were important. The reason I focused on web services is simple: looking around the industry, many of the cloud API offerings are accompanied by downloadable libraries for Perl, Python, Java, Ruby, C#, etc. that implement the particular REST interface. Good luck managing those dependencies alongside your application! The benefit of a standards-based web service implementation is that the interface is normatively described in WSDL, and there are many widely used open source and for-fee libraries that will happily consume that WSDL and produce the desired language binding for you. The security is WS-Security based, which has been well analyzed and is a known quantity to the security groups in many organizations. I know I will attract comments, but I think REST needs the equivalent of WSDL (RDL?) to stop the hand coding of client and server language bindings. We need some normative way to publish an industry-standard interface that doesn't depend on a hand-coded client library written in a particular language.
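
For example, with the JDK's wsimport tool (the endpoint URL and package name here are hypothetical), producing Java client bindings from a WSDL is a one-liner:

# Generate Java client bindings from the service's normative WSDL
wsimport -keep -p com.example.matrix "http://matrix-host:8282/ws/service?wsdl"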


Room for more work


We still have some way to go towards creating a truly interoperable interface to the cloud. I think the APIs I discussed address some important areas that we need in that final standard. The interfaces described are functional but by no means perfect, and I look forward to your comments.


 

Mike Klayko, CEO of Brocade, nailed it

I'm a sucker for unscripted honesty.  Kudos to Mr. K for taking a deep breath after Cisco's UCS and Project California grandstanding to put some reality in front of the rhetoric. In less than 3 minutes, he really boiled all the hype down and simply articulated what a lot of our customers and partners are thinking too. 


My takeaway: ROI on CapEx in 12 months or less means it's not about our grandiose vision of tomorrow, it's about helping you implement your vision of your data center today. Keep the forklift in the warehouse.




Did we miss something?

Every time a competitor introduces a new product, we can't help but notice they suddenly get very interested in what HP is blogging in the weeks prior to their announcement. Then, when the competitor announces, the story is very self-congratulatory: "we've figured out what the problem is with existing server and blade architectures". The implication is that blade volume adoption is somehow being constrained by the very thing they now have, and that everyone else is really stupid.


HP BladeSystem growth has hardly been constrained, with quarterly growth rates of 60% to 80% and over a million BladeSystem servers sold. So I have to wonder if maybe we already have figured out what many customers want: to save time, power, and money with an integrated infrastructure that is easy to use, simple to change, and can run nearly any workload.


Someone asked me today, "will your strategy change?" I guess, given the success we've had, we'll keep focusing on customers' big problems: time, cost, change, and energy. It sounds boring and doesn't get a lot of buzz or Twitter traffic, but it's why customers are moving to blade architectures.


Our platform was built and proven in a step-by-step approach: BladeSystem c-Class, Thermal Logic, Virtual Connect, Insight Dynamics, etc. Rather than proclaim at each step that we've solved all the industry's problems or sparked a social movement in computing, we'll continue to focus on doing our job: providing solutions that simply work for customers and tackle their biggest business and data center issues.
