Today’s data center challenges revolve around driving better efficiency and value for businesses. IT has moved from a simple deployer of services to a partner with the business, responsible for helping drive more agility, shorten decision times and optimize shareholder value.
By Jim Ganthier, VP, marketing, Industry Standard Servers, HP
This past Monday, I spent the afternoon at the impressive de Young Museum in San Francisco for AMD’s Opteron 6100 series launch event.
The event kicked off with AMD CMO Nigel Dessau taking us through a brief history of Opteron and its rich, compelling story before introducing attendees to AMD’s latest processor technology.
Nigel was then joined onstage by Christine Reischl, SVP & GM of HP’s Industry Standard Servers group, and Matt Lavallee, Director of Technology for MLS PIN, the northeast’s foremost provider of real estate listings to more than 30,000 subscribers. Matt spoke about his key IT challenges (cost, performance and energy) as well as the dramatic efficiencies in these key areas that he has enjoyed thanks to HP ProLiant technology. Matt has been an HP customer since our first generation of servers and is operating an entirely HP shop!
Christine Reischl went on to talk a bit more about the latest generation of ProLiant servers, which deliver customers like Matt a return on their server investment in just two months (less than a budget cycle) when compared to servers just a few generations old. Christine also spoke about the 27x performance-per-watt boost these systems deliver, as well as the up to 96% reduction in power and cooling costs customers can reap with HP’s innovative Thermal Logic technologies built right into our servers.
Here's a picture of Nigel, Christine and Matt on stage:
After HP, Nigel welcomed to the main stage a couple of other OEMs with their customers before thanking the OEM and software partners, customers and attendees for being there.
The event then broke to a very informal reception where press, industry analyst, customer and partner attendees could see the latest AMD Opteron-based platforms available. HP showed off our new ProLiant DL165 and DL385 rack-optimized servers, as well as our SL165z “skinless” scale-out server.
At one point in the day, I found myself huddled in a near deserted shop next to several Nefertiti statues speaking to some business press reporters via phone on the news of the day. Later Monday evening, Christine and I enjoyed dinner with Nigel and Dave Fionda from AMD where we reflected on the day’s events, chatted about our long-standing partnership and the exciting road ahead.
Overall, our time in San Francisco was beneficial to both HP and AMD and we look forward to a continued partnership.
These days, when wearing my “Linux planner” hat, and with virtualization being the “phrase that pays,” I’m often asked to provide guidance on how best to take advantage of the technology in our 8-socket HP ProLiant server offerings for Linux-based virtualization solutions such as Red Hat Enterprise Virtualization or SUSE Linux Enterprise Server with Xen. (There’s a plethora of information out there about VMware ESX/ESXi 3.5.x and vSphere 4.0, so I’m not going to cover that this time around.)
The problem I’d had, until recently, was providing actual, objective data to help illustrate my points. For instance, I could not clearly illustrate how a snoop filter on the CPU interconnect can improve the linearity of workload scalability in a virtualized environment (see Fig. 1).
Fig. 1: Average response time with pinned vs. un-pinned processors
I was also unable to demonstrate the benefits of the NUMA-aware scheduler in the Linux kernel and how it improves performance when your workloads run with memory interleaving disabled in the system BIOS (which you should do, unless your application vendor explicitly tells you otherwise for support reasons). In figures 2 and 3, this benefit shows up as the improvement in average response times from the web servers included in the workload; see Fig. 2.
Fig. 2: Average Response Times - Non-interleaved Memory Config
Fig. 3: Average Response Times - Interleaved memory
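To make the NUMA pinning idea concrete, here is a minimal sketch in Python. It parses a node’s CPU list (the format used by /sys/devices/system/node/node*/cpulist on Linux) and shows how the current process could be pinned to those CPUs with os.sched_setaffinity. The example cpulist is a hypothetical value, not taken from the systems measured above.

```python
import os

def parse_cpulist(cpulist):
    """Parse a Linux cpulist string like '0-3,16-19' into a set of CPU ids."""
    cpus = set()
    for part in cpulist.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

# Hypothetical cpulist for one NUMA node; on a real system you would read
# /sys/devices/system/node/node0/cpulist instead.
node0_cpus = parse_cpulist("0-3,16-19")
print(sorted(node0_cpus))

# On Linux, pinning the current process to one node's CPUs keeps its memory
# accesses local to that node (uncomment to actually pin):
# os.sched_setaffinity(0, node0_cpus)
```

The same effect can be had from the command line with numactl or, for KVM guests, by pinning vCPUs via libvirt.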
I also used to have a hard time explaining how and why to tune the Linux kernel for these systems. For instance, I only suspected how little tuning of the host platform (none, as it turns out) is required to drive a significant number of guests (98) in these environments (see Fig. 4), and how, with some very minor tuning of the network stack, those same workload performance results can be extended even further, to 256 guests (see Fig. 5):
Fig. 4: The system has not been tuned beyond its "out of the box" state.
Fig. 5: System is tuned and exhibiting linear scalability to 256 KVM guests
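For readers wondering what “very minor tuning of the network stack” typically looks like on a host driving hundreds of bridged guests, it usually means raising a few sysctl limits. The knobs below are common ones for this situation, but the values shown are illustrative assumptions, not the exact settings from our testing:

```
# /etc/sysctl.conf fragment (illustrative values only)
net.core.netdev_max_backlog = 250000         # queue more packets per receive softirq
net.ipv4.neigh.default.gc_thresh1 = 1024     # grow the ARP/neighbor table for many guests
net.ipv4.neigh.default.gc_thresh2 = 2048
net.ipv4.neigh.default.gc_thresh3 = 4096
```

Apply with `sysctl -p`; the reference architecture document linked below describes the tuning actually used.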
As part of a joint documentation effort with Red Hat, all of the data collected has been brought together in a reference architecture document, “Scaling RHEL 5.4 + KVM up to 256 Guests,” available for free from Red Hat’s website.
We obviously picked the guest density to prove a point about the platform, but it’s worth mentioning that 256 guests does not represent the upper bound for the platform. It only represents the point at which we thought the density went far beyond what is reasonable to expect in a production environment in this day and age.
Contributed by Thomas Sjolshagen (Strategic Planner for Linux and Virtualization on scale-up x86 servers)
Another business intelligence leadership proof point for HP ProLiant DL785 G6 – this time with Sybase IQ and Red Hat Enterprise Linux
As I mentioned in one of my previous blogs, as x86 processors get more powerful, the 64-bit architecture becomes more mature, memory DIMMs get bigger and more cost-effective, and more and more scalable x86 software applications become available, scale-up x86 servers are becoming an ideal choice for large business intelligence and decision support system deployments at a cost that no one imagined a few years ago.
The latest world-record performance result for the HP ProLiant DL785 G6, 102,375.3 QphH at $3.63 USD/QphH on the TPC-H @ 1000 GB benchmark, running the Sybase IQ 15.1 database and the Red Hat Enterprise Linux 5.3 operating system, is an excellent proof point. This #1 non-clustered performance benchmark result demonstrates that customers can deploy large business intelligence solutions at an aggressive TCO on high-performance 8-socket x86 servers. In addition to holding the #1 non-clustered x86 performance result, the DL785 G6 offers outstanding price/performance, maintaining the #1 8P price/performance record in the TPC-H @ 1000 GB benchmark category.
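As a quick sanity check on what these metrics mean: TPC-H price/performance is the total cost of the priced configuration divided by the QphH throughput, so the two published numbers imply the total system cost. The sketch below derives that figure by simple arithmetic; the implied cost is not a number taken from the benchmark disclosure itself.

```python
# Published TPC-H @ 1000 GB results for the DL785 G6 (from the post above)
qphh = 102_375.3        # composite queries-per-hour metric
price_per_qphh = 3.63   # USD per QphH

# price/performance = total system cost / QphH, so the implied
# total cost of the priced configuration is:
total_cost = qphh * price_per_qphh
print(f"Implied total system cost: ${total_cost:,.0f}")
```

This works out to roughly $372K for the full priced configuration, which is the “cost that no one imagined a few years ago” point in practice.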
The DL785 G6 with six-core AMD Opteron™ processors has been designed as an excellent database server. Its balanced architecture, with ample I/O and memory, makes it an ideal platform for decision support and business intelligence workloads. Hundreds of customers run their database applications on the DL785.
The TPC Benchmark™ H (TPC-H) is a decision support benchmark, with components intended to be relevant to customers who deploy decision support systems as part of their business intelligence solution. The benchmark comprises a suite of business-oriented ad-hoc queries and concurrent data modifications that examine large volumes of data and execute highly complex queries. Many businesses find this type of benchmark useful in determining which servers to use, because TPC-H illustrates decision support systems that examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions.