Around the Storage Block Blog
Find out about all things data storage from Around the Storage Block at HP Communities.

A partner's story: building a private cloud

By Calvin Zito, @HPStorageGuy

A few weeks ago on Twitter I saw a partner talking about some testing they had done on the P4000.  I invited Nuno Fernandes to write a guest post here on ATSB about what they had done.  Nuno works in Malta for Systec, an HP partner, and was putting together a solution similar to our HP VirtualSystem, but at a somewhat smaller scale for the market his company serves in Malta. I thought this was interesting, so here's the post from Nuno.




Systec Limited is an HP Value Added Distributor based in Malta, with a strong focus on storage, servers, networking, and technical services. We are also an authorized VMware consultant, a Microsoft distributor, and a Silver Virtualization Partner.


We decided to build our own HP-based private cloud, modeled after the HP VirtualSystem, to give our reseller network a flexible test lab and to run proofs of concept (PoCs) for our corporate accounts, helping them move faster toward their private vClouds. Most of the case studies you can find on the internet cover very large deployments, which offer little value for smaller-scale environments in a small market economy like Malta.


Setup:

- 2x HP DL380 G7 running VMware vSphere 5 Enterprise Plus, each with 32 GB RAM, 1 CPU, and 6 NICs (2 for iSCSI, 2 for management tasks, 2 for VM data)

- 2x HP P4500 nodes

- 2x HPN 6600 switches for iSCSI traffic

- HP D2D iSCSI backup appliance

- HPN 5406zl for end-user/VM access

- HP Data Protector


Basically, our decision for the storage was based on the following considerations:

- easily scalable

- easy to maintain

- standard components

- extremely high availability

- virtualization friendly

- iSCSI protocol, to make use of existing network skills and infrastructure where possible

- storage software included in the product (snapshots, replication, capacity-expansion licensing, etc.) to reduce capex and opex


This clearly pointed us to the HP LeftHand as a unique solution offering all of the above in the form of an appliance, using the same components as HP ProLiant servers and thus giving additional comfort to our potential customers!


As for the network, we selected the HP E6600 switches for their large buffer memory, but we also tested the HP 3500yl and the new HPN 3800. All three delivered the same result in terms of availability, and we achieved our goal of no packet/port drops. All of the units mentioned offer L3 capabilities, so if needed they can provide L3 switching and VRRP for high availability at a fraction of the price of a Cisco alternative.


We booted VMware from SD cards, and we set up software iSCSI access to the P4000 using ALB (Adaptive Load Balancing) and the native multipathing of VMware; we chose round robin (RR) as our preferred path selection policy for the volumes.
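
On the ESXi side, the software iSCSI setup described above can be sketched with esxcli (vSphere 5 syntax; the adapter, vmkernel, and address names below are examples, not from the original post):

```shell
# Enable the software iSCSI initiator
esxcli iscsi software set --enabled=true

# Bind the two iSCSI vmkernel ports to the software adapter (names are examples)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Point send-target discovery at the P4000 cluster virtual IP (example address)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.0.10

# Make round robin (RR) the default path selection policy for newly claimed devices
esxcli storage nmp satp set --satp=VMW_SATP_DEFAULT_AA --default-psp=VMW_PSP_RR
```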


The VMs ran Windows Server 2008 R2 and each had 4 disks (1 for the OS, 1 for applications, 1 for database files, and 1 for logs). We ran several workloads:

- Exchange Jetstress

- SQLIO

- Iometer
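
For illustration, a SQLIO run against one of the data disks might look like the following (the parameter values and test-file path are ours for illustration, not from the original tests):

```shell
:: 4 threads, 8 outstanding I/Os, 8 KB random reads for 120 seconds,
:: with latency histograms (-LS) and buffering disabled (-BN)
sqlio -kR -t4 -o8 -b8 -frandom -s120 -LS -BN D:\testfile.dat
```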


The only advantage we found for hardware iSCSI over the native software iSCSI of Windows/VMware is in multi-site clusters: concurrent multi-site access is supported with the HW iSCSI initiator, while the SW initiator also works but logs errors. In the case of a SAN failure on a multi-site cluster, with SW iSCSI you must rescan the datastores to regain access to the LUNs, whereas with HW iSCSI initiators the failover is transparent and fully automated. So we recommend HW iSCSI if you are planning multi-site clusters. Otherwise, software iSCSI was fast enough and flexible enough to work with any NIC; with vSphere 5 we barely noticed any CPU overhead, and with CPU speed, core counts, and cache sizes increasing so fast, we don't see any other compelling reason to purchase HW iSCSI.
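
For reference, the manual rescan required with the software initiator after such a failure can be done from the ESXi shell (a sketch, assuming vSphere 5):

```shell
# Rescan all storage adapters so ESXi rediscovers the paths and LUNs
esxcli storage core adapter rescan --all

# Then refresh VMFS volumes so the datastores become visible again
vmkfstools -V
```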


When we expanded our LeftHand cluster from 1 node to 2 nodes, we noticed a considerable performance increase and a drop in the latency being recorded; both SQLIO and Exchange Jetstress completed their runs with a green pass in each configuration.


It made a noticeable difference when we dedicated the HPN E6600 to iSCSI and migrated all other traffic (vMotion, Fault Tolerance, console management, and VM access) to the HPN 5400zl, completely segregating the iSCSI traffic.


We then connected the HP D2D2502i appliance to present its VTL directly to our VMs. For this we increased the LAN ports from 6 to 8 and created another vSwitch with two NICs to give the VMs access to the VTL iSCSI network, once again segregating the iSCSI SAN, the iSCSI VTL, and the remaining networks.
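
The extra vSwitch for VTL access can also be sketched with esxcli (the vSwitch, NIC, and port-group names below are examples):

```shell
# Dedicated vSwitch for the VTL iSCSI network, with its two uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch3
esxcli network vswitch standard uplink add --vswitch-name=vSwitch3 --uplink-name=vmnic6
esxcli network vswitch standard uplink add --vswitch-name=vSwitch3 --uplink-name=vmnic7

# VM port group so guests can reach the VTL over iSCSI
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch3 --portgroup-name=VTL-iSCSI
```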


We used Data Protector as our backup tool of choice, with the VMware, Exchange, and SQL integrations, to get complete online backups from a single tool.


All the management VMs (one per task) ran on a DL360 server:

- AD/DNS Server

- Console Management Server for HP LeftHand

- Failover Manager (FOM) VM

- vCenter with Insight Control Plug-In

- Cell Manager

- MS System Center VMM (a Hyper-V VS1 is currently being tested with the same workloads)

- HP Systems Insight Manager and Insight Remote Support (IRS)

- Paessler for network throughput, latency, and link monitoring


What we found, and what can really help customers adopt integrated stacks like VirtualSystem, is that HP offers a flexible yet high-performing stack, and that a single vendor can deliver the whole lot: servers, networking, support, the virtualization stack, backup appliance and software, management software, and the most flexible iSCSI SAN on the market!


Customers can and should look at HP VirtualSystems, but with the help of partners like us they can get a similar solution, smaller or better matched to their business needs, with the peace of mind that even a 1GbE setup can be upgraded to 10GbE without complications.


PS – it would be great to have the opportunity to reuse existing storage (HP or any competitor's) by presenting block-level FC storage to the HP LeftHand nodes and letting SAN/iQ re-provision it as iSCSI from the LeftHand cluster. We see this as an opportunity to start migrating competitor storage into an HP LeftHand environment at a lower price point and, in the long run, replace the old storage with HP's own.


You can follow us at @nmiguelrf or email us for more details.

infonet | ‎05-04-2012 06:56 PM


This is a very good post. We are doing something similar, but in a slightly different way. We tried an all-HP approach, but the cost of some parts is too high for SMBs in our region (the Balkans). What we use is:


- 2x DL385 or DL380; we plan to add a DL585 as well, since its cost is low but it brings 64 cores to the table

- 2x A5120 or A5500 in a stack for distributed trunking (the cost of CTR services for these switches is very high, so we may also test the ERS4800 from Avaya)

- 1x FAS2040/FAS2240 (we tried the P4000 VSA but it didn't work very well), with 8x 1GbE + 4x 10GbE or 4x FC

- 1x Unitrends VM for D2D backup and deduplication (being tested as I write this). Asigra is also something we like; we like Data Protector, but D2D is just too expensive. Asigra can receive NetApp replication, which is very good for cloud backup if the customer is looking in that direction.

- VMware (we will also test Red Hat, since VMware is a bit pricey)

- Fortinet with Sophos security

- VDI from Kaviza (now Citrix) or Virtual Bridges (an additional DL required)

- t5xxx thin clients from HP with Liscon thin-client software (we have to replace ThinPro since it sucks); we will also move toward the t610 thin client

- few other components


and we call this a "company-in-the-box". The box is a 22U rack for companies of up to 75 seats. We are also building 50-, 100-, 150-, and 200-seat versions of the full infrastructure. We now have a 300-seat project where Ubuntu VDI will be in a PoC for the customer.


I know this is not all-HP, but HP should pay attention to these customers below 500 seats, requiring 10-20 servers and running between 50 and 300 seats. There are a lot of companies like this, and it would be worth having a "VS 0.5" or "VS 0.2" P4000 solution for small business :-))))


We are now looking at RHEV as well as Parallels to lower the cost of the virtualization stack (server and desktop). We'll see how this turns out.


Great work Nuno!


Best regards,



‎05-07-2012 06:25 AM - edited ‎05-11-2012 06:44 PM

@Damir - thanks for sharing your experience.  It's not clear why you'd start with a software-based P4000 VSA solution and switch to a hardware-based one; the VSA has its use cases, and I'm guessing you were pushing its limits. I'm not sure I'd ever use it as the main storage in a VDI environment.  Did you look at the P4300 or P4500?  With all the work HP is doing to integrate VMware, ProLiant, and our storage (like the P4000), I wouldn't "endorse" the solution you built - but hey, if it works for your customer, great.  However, I'm sure there was a better answer using HP storage.  If you have more details you'd like to share with our client virtualization team, feel free to email me by clicking this link.  Thanks again!

Nuno | ‎05-10-2012 05:46 PM

Hi Damir,


Thank you for sharing your experiences; however, I fully agree with Calvin's point of view. The great thing about HP VirtualSystems is their reliability and the assurance that they are made to work together and tested down to the smallest component. When you consider all the software/hardware integration and the "one throat to choke", HP is in a unique position. Also, if you are looking for reliability, HP can tolerate half of the whole rack failing, and provides great scalability.


We are now building with Hyper-V and RDS, and we use the VSA just for demos to show Peer Motion/live volume migrations, but we haven't pushed it into real-life critical solutions.





About the Author
25+ years of experience around HP Storage. The go-to guy for news and views on all things storage.

The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation.