Eye on Blades Blog: Trends in Infrastructure
Get HP BladeSystem news, upcoming event information, technology trends, and product information to stay up to date with what is happening in the world of blades.

Displaying articles for: April 2010

What are you managing and why?

There are many parameters that can be set and tweaked in the data center, but does it make sense to micro-manage our systems?  Think of the number of settings available in the Windows Registry, in Linux configuration files, or on storage arrays and switches.  Even something as simple as a NIC can have a dozen or more configuration parameters.  (Remember when we HAD to manage IRQs?)  It’s important to understand what’s being managed and why.


Sometimes it is valuable, even important, to manage some of these settings.  Thinking back to the days of 10/100 Ethernet, auto-negotiation of link speed and duplex was very unreliable and often caused network issues.  Many shops instituted a policy that all NICs and switch ports have hard-coded link speed and duplex to ensure reliable connections.  Then came Gigabit Ethernet, and the auto-negotiation issues were resolved.  Some shops kept the hard-coded link policy in place, but now it causes issues when someone forgets to set a device.  So when automatic settings were unreliable, it was important to manage the settings for reliability.  Now that automatic settings are reliable, manual configuration creates more work and more issues than automatic settings would.  (See http://tinyurl.com/y42sqcm.)
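
If you're wondering what your own hosts have negotiated, it's easy to script a quick audit.  Here's a minimal sketch, assuming a Linux host (the kernel exposes the current values under /sys/class/net; telling a hard-coded port from an auto-negotiated one still takes ethtool):

```python
# Minimal sketch: report current NIC link settings on a Linux host.
# Assumes /sys/class/net/<nic>/speed and .../duplex exist (physical
# NICs on modern kernels); virtual or downed interfaces report "n/a".
import os

SYS_NET = "/sys/class/net"

def read_attr(nic, attr):
    try:
        with open(os.path.join(SYS_NET, nic, attr)) as f:
            return f.read().strip()
    except (IOError, OSError):
        return "n/a"  # down links and virtual devices raise errors here

for nic in sorted(os.listdir(SYS_NET)):
    print("%-10s speed=%-6s duplex=%s"
          % (nic, read_attr(nic, "speed"), read_attr(nic, "duplex")))
```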


Other times, manual settings are used to manage scarce resources; network QoS is a good example.  QoS aims to improve the efficiency of a network by prioritizing some traffic when bandwidth is tight.  When a network has adequate bandwidth, QoS is not required or even useful for most applications.  When a network is saturated, QoS won’t be effective either, so it’s only useful when a network is heavily loaded but not approaching saturation.   It’s typically the high-cost network providers that advocate for QoS to make use of scarce and expensive resources.  By deploying network designs with flatter topologies and bandwidth that is cost-effective and plentiful, you can eliminate the need for QoS for most applications.
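
Here's a back-of-the-envelope way to see how narrow that useful window is.  The textbook M/M/1 queueing model puts average delay at T = 1/(μ − λ): negligible at light load, exploding as the link nears saturation.  (The link speed and packet size below are illustrative assumptions, not measurements.)

```python
# Illustration of the QoS "window": average queueing delay on a link,
# using the textbook M/M/1 approximation T = 1 / (mu - lambda).
# Link speed and packet size are illustrative assumptions.
LINK_BPS = 1e9            # 1 Gb/s link
PKT_BITS = 1500 * 8       # average packet size in bits
mu = LINK_BPS / PKT_BITS  # service rate, packets per second

for load in (0.2, 0.5, 0.8, 0.95, 0.99):
    lam = load * mu                  # arrival rate at this utilization
    delay_ms = 1000.0 / (mu - lam)   # average time in system, ms
    print("load %3.0f%% -> avg delay %7.3f ms" % (load * 100, delay_ms))
```

At 20% load there's nothing worth reordering; past saturation the queue grows without bound and no priority scheme saves you.  QoS only changes outcomes on the steep part of that curve.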


There are many other examples of items that can be managed but shouldn’t be.  As an industry we are moving away from micro-managing and toward a utility model. (You wouldn’t want to manage the volts and amps used by your toaster, or calibrate for the thickness of the bread, would you?)  If we design systems to work efficiently with default settings wherever possible, we reduce opportunities for error, management workload, and cost.


Here are a few things to keep in mind:


       •  Keep track of what you’re managing and why
       •  When something no longer has to be managed, stop
       •  Design systems that minimize the requirement for custom settings

Client Virtualization: There's No Substitute for Experience

I love to grill.  More to the point, I love to grill steaks.  (Hey, I’m from Texas – it’s a requirement!)  Over time, I’ve developed my own special system, which includes a specific seasoning mix, just the right temperature on the grill, and the perfect cut of meat, among other things.  And I can’t describe the satisfaction I get from taking a bite of a steak I’ve grilled to my liking, with just the right flavor.  Mmmmmm….


(Hungry anyone?  Please take this moment to grab a snack, then come right back.)


Looking back, there was a journey I had to take to get this process right.  As with most things I get excited about, I started by consuming as much data as I could find on the subject.  I watched television programs about grilling steaks.  I read articles online.  I conferred with “grillmasters” I trusted for their tips and tricks.  When I felt I was ready, I took a stab at it (no pun intended).  And, wow – let me tell you – that first steak?


Meh. It wasn’t that great.


It was a bit too charred for my taste.  I later learned I’d left the grill too hot the entire time.  So the next time I grilled, I adjusted the temperature.  But this time, the steak was too dry because I’d left it on the grill too long.  I fixed this on my subsequent attempt, but felt the seasoning could have been better.  I learned that certain cuts of meat delivered the flavor I was seeking.  Then I experimented with various levels of thickness and with hot and cool spots on the grill.  Every chance I had, I honed my skill with research, time, and trial and error.


Eventually, I was able to grill a steak that was perfectly tuned to my liking.   And now, grilling that “perfect” steak – well, it’s almost second nature.  There’s something to be said for all the experiences and lessons that led me to this point.  In fact, my process now is more efficient than it used to be.  I can quickly spot a problem with the process because I’ve done it so many times.  And my family no longer feels the need to keep the pizza delivery number nearby when I grill.


What does all this have to do with client virtualization? 


There’s something to be said for good old-fashioned, roll-your-sleeves-up hard work to hone your skills.    Sure, HP offers the best end-to-end client virtualization product portfolio, but we offer the best client virtualization services portfolio as well.   HP Client Virtualization Services provide you with the best means to meet all your client virtualization needs, born from experience and steeped in expertise. We simplify the client virtualization implementation process by designing the right solution for each customer.  And we can do the same for you. 


Learn more about HP Client Virtualization Services here, including news on an announcement made last week about our exciting Client Infrastructure Services portfolio.  It's a comprehensive set of services and solutions built to help organizations extend the benefits of converged infrastructure to the desktop. 


Whether it’s implementing a sophisticated VDI environment, or making a killer ribeye on an open flame, there’s just no substitute for experience.  OK, that's it - I need to fire up the grill.  I knew this would happen...


(Before I sign off, check out a great blog here from our partners at Citrix, highlighting the value that HP Thin Clients can add as you begin planning for Windows 7 migration.)


Until next time,


Joseph George
HP Client Virtualization Team
www.hp.com/go/clientvirtualization

Join us April 27th for the next era of mission-critical computing

HP is holding a global virtual event beginning April 27, 2010 that will introduce its vision and major announcements regarding products and solutions for the next era of mission-critical computing.  The event will feature an announcement overview, demos, papers, information from the live event in Germany, and more.  Participants will also have the opportunity to share reactions to the announcement and ask questions through live expert chat sessions.


HP promises you’ll learn how to:


       •  Scale service-level agreements dynamically to meet business needs
       •  Capture untapped resources and react faster to new opportunities
       •  Reduce complexity
       •  Lay the foundation for a converged infrastructure to accelerate business outcomes


Registration is available at: www.hp.com/go/witness


You can also follow updates from the event on Twitter via @HPIntegrity and the #HPIntegrity hashtag.



Behind the Scenes at the InfoWorld Blade Shoot-Out

When the sun set over Waikiki, HP BladeSystem stood as the victor of InfoWorld's 2010 Hawaii Blade Shoot-Out.  Editor Paul Venezia blogged about HP's gear sliding off a truck, but other behind-the-scenes pitfalls meant the world's #1 blade architecture nearly missed the Shoot-Out entirely.
 
Misunderstandings about the test led us to initially decline the event, but by mid-January we'd signed on. Paul's rules were broad: bring a config geared toward "virtualization readiness" that included at least four blades and either Fibre Channel or iSCSI shared storage.  Paul also gave us a copy of the tests he would run, which let vendors select and tune their configurations.  Each vendor would get a two- or three-day timeslot on-site in Hawaii for Paul to run the tests himself, plus play around with the system's management features.  HP was scheduled for the first week of March.


In late January we got the OK to bring pre-release equipment. Luckily for Paul, the Dell, IBM, and HP teams all brought similar two-socket server blades with then-unannounced Intel Xeon 5670 ("Westmere") processors.  We scrambled to come up with the CPUs themselves; at the time, HP's limited stocks were all in use to support Intel's March announcement.


HP's final config: one c7000 enclosure and four ProLiant BL460c G6 server blades running VMware ESX, with 6-core Xeon processors and 8GB LVDIMMs; two additional BL460c G6s with StorageWorks SB40c storage blades for shared storage; a Virtual Connect Flex-10 module; and a 4Gb Fibre Channel switch. (We also had a 1U KVM console and an external MSA2000 storage array just in case, but ended up not using them.)


To show off some power-reducing technology, we used solid-state drives in the storage blades and low-voltage memory in the server nodes.  HP recently added these Samsung-made "Green" DDR3 DIMMs, which use 2Gb-based DRAMs built with 40nm technology. LV DIMMs can run at 1.35 volts (versus the normal 1.5 volts), so they "ditch the unnecessary energy drain" (as Samsung's Sylvie Kadivar put it recently).
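
How much does that voltage drop buy?  Some back-of-the-envelope figuring of my own (not a Samsung or HP spec): dynamic power in CMOS scales roughly with the square of supply voltage, so going from 1.5 V to 1.35 V trims the voltage-dependent portion by nearly a fifth:

```python
# Back-of-the-envelope, not a vendor spec: CMOS dynamic power scales
# roughly as P ~ C * V^2 * f, so the supply-voltage drop alone yields:
standard_v, low_v = 1.5, 1.35
savings = 1 - (low_v / standard_v) ** 2
print("~%.0f%% less voltage-dependent power" % (savings * 100))  # ~19%
# An upper bound for the DIMM overall: I/O and leakage terms
# don't all scale with V^2.
```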


Our pre-built system left Houston three days before I did, but it still wasn't there when I landed in Honolulu Sunday afternoon. We had inadvertently put the enclosure into an extra-large Keal case (a hard-walled shipping container) which was too tall to fit in some aircraft. It apparently didn't fit the first cargo flight.  Or the second one.  Or the third one...


Sunday evening, already stressed about our missing equipment, the four of us from HP met at the home of our Hawaiian host, Brian Chee of the University of Hawaii's Advanced Network Computing Laboratory.  Our dinnertime conversation generated additional stress: we realized that I'd misread the lab's specs, and we'd built our c7000 enclosure with 3-phase power inputs that didn't match the lab's PDUs.  Crud.  


We nevertheless headed to the lab on Monday, where we spotted the rat's nest of cables intended to connect power meters to the equipment.  Since our servers still hadn't arrived, two of the HP guys fetched parts from a nearby Home Depot, then built new junction boxes that would both handle the "plug conversion" to the power whips and provide permanent (and much safer) test points for power measurements. 


Meanwhile, we let Paul get a true remote-management experience on BladeSystem.   I VPN'd into HP's corporate network and pointed a browser at the Onboard Administrator of an enclosure back in a Houston lab.   Even in Firefox (Paul's browser of choice), controlling an enclosure 3,000 miles away is simple.


Moments before disaster...

Mid-morning on day #2, Paul got a cell call from the driver of the lost delivery truck.  After chasing him down on foot, we hauled the shipping case onto the truck's hydraulic lift... which suddenly lurched under the heavy weight, spilling the wheels off the side and nearly sending the whole thing crashing onto the ground.  It still took a nasty jolt.




Some pushing and shoving got the gear to the Geophysics building's piston-driven hydraulic elevator, then up to the 5th floor.  (I suppose I wouldn't want to be on that elevator when the "Low Oil" light turns on!)




We unpacked and powered up the chassis, but immediately noticed a health warning light on one blade.  We quickly spotted the problem: a DIMM had popped partway out.  Perhaps not coincidentally, it was the blade that took the greatest shock when the shipping container slipped from the lift.


With everything running (whew), Paul left the lab for his "control station," an Ubuntu-powered notebook in an adjoining room.  Just as he sat down to start deploying CentOS images to some of the blades... wham, Internet access for the whole campus blinked out.  It didn't affect the testing itself, but it caused other network problems in the lab. 


An hour later, those problems were solved, and the performance tests were underway.  They went quickly.  Next came some network bandwidth tests.   Paul even found time to evaluate Intel's new AES-NI instructions, running some timed tests with OpenSSL tools.
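
I don't know exactly which commands Paul ran, but the classic way to see AES-NI at work is OpenSSL's built-in benchmark: "openssl speed aes-128-cbc" uses the plain software implementation, while adding "-evp" routes through the EVP layer, which picks up hardware AES when the CPU supports it.  A quick sketch, scripted from Python:

```python
# Compare OpenSSL's software AES path against the EVP path (which can
# use AES-NI on Westmere-class CPUs). Both commands are standard
# "openssl speed" invocations; this script just runs them in sequence.
import subprocess

for args in (["openssl", "speed", "aes-128-cbc"],           # software AES
             ["openssl", "speed", "-evp", "aes-128-cbc"]):  # EVP / AES-NI
    print("$ " + " ".join(args))
    subprocess.call(args)  # prints throughput at several block sizes
```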


Day #3 brought us a new problem.  HP's Onboard Administrator records actual power use, but Paul wanted independent confirmation of the numbers.  (Hence the power meters and junction-box test points.)  But the lab's meters couldn't handle redundant three-phase connections.  An hour of reconfiguration and recalculation later, we found a way to corroborate the measurements.  (In the end, I don't think Paul published power numbers, though he may have factored them into his ratings.)
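
The recalculation itself is straightforward once each feed can be metered: measure every feed, compute balanced three-phase power for each, and sum.  A sketch of the bookkeeping (the voltage, current, and power-factor values below are placeholders I've made up, not the lab's readings):

```python
import math

# Placeholder readings, not the lab's actual numbers: line-to-line
# volts and average line current for each redundant three-phase feed.
FEEDS = {"feed_A": (208.0, 2.1), "feed_B": (208.0, 1.9)}
POWER_FACTOR = 0.95  # assumed; server power supplies typically run ~0.9+

# Balanced three-phase real power per feed: P = sqrt(3) * V_LL * I * PF
total_w = sum(math.sqrt(3) * volts * amps * POWER_FACTOR
              for volts, amps in FEEDS.values())
print("metered total: %.0f W (check against Onboard Administrator)" % total_w)
```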


We rapidly re-packed the equipment at midday on day #3 so that IBM could move into the lab. Paul was already drafting his article as we said "Aloha" and headed for the beach -- err, I mean, back to the office.


Paul Venezia (left) and Brian Chee with the HP BladeSystem c7000.


View toward the mountains from the rooftop "balcony" behind Brian's lab.



