Reality Check: Server Insights
Get HP server news, technology trends, and product information to stay up to date with what is happening in the server world.

The IT behind the big blockbuster sci-fi hit “Avatar”

The hit sci-fi movie Avatar has grossed more than $1.3 billion, and many say it is on track to blow the existing all-time box-office leader, Titanic, out of the water.

Creating the amazing digital effects that are transporting viewers to a whole other world -- and ones particularly well-suited to a 3-D experience -- required some super-powerful IT.

Weta Digital is a small animation studio in New Zealand that was also behind the digital effects for The Lord of the Rings movies, Enchanted, King Kong and others.

In the Information Management article "Processing Avatar" by Jim Ericson, we learn more about the HP ProLiant blade servers and data center infrastructure Weta Digital relied on to bring this world to life:

“Weta Digital is really a visual effects job shop that manages thousands of work orders of intense amounts of data. That preselects most of the fast, constant capacity equipment required. The data center used to process the effects for AVATAR is Weta’s 10,000 square foot facility, rebuilt and stocked with HP BL2x220c blades in the summer of 2008.

The computing core - 34 racks, each with four chassis of 32 machines each - adds up to some 40,000 processors and 104 terabytes of RAM. The blades read and write against 3 petabytes of fast Fibre Channel disk network area storage from BlueArc and NetApp.

All the gear sits tightly packed and connected by multiple 10-gigabit network links. “We need to stack the gear closely to get the bandwidth we need for our visual effects, and, because the data flows are so great, the storage has to be local,” says Paul Gunn, Weta’s data center systems administrator.”

HP ProLiant BL2x220c blade servers pack the power of two blade servers into the physical space of one. These super-powerful blades are ideal for maximizing performance per square foot in this kind of scale-out computing environment, particularly when data center space is limited.

Are you creating something cool with your ProLiant blade servers? If so, share your story with us.

HP Technology at Supercomputing 2009 video

Take a look at HP's technologies at Supercomputing 2009 in Portland: the new ProLiant SL, the Unified Cluster Portfolio, the HP POD, NVIDIA accelerators, Converged Infrastructure, and double-dense blades.

I received a new HPC Multi-core server today – Intro to power measurement


Until a couple of years ago, when we referred to performance measurement of an application, we meant the time the job took to run vs. the specific resources it used: the number of cores, the number of servers if you are using a cluster, the characteristics of the server cores and memory and other server specs, plus I/O and storage resources and specs.

Basically, we measured only one thing: the elapsed time of the job. Then, using the resources and specs, we computed many derived metrics - throughput efficiency, parallel scalability and efficiency, performance per core or per server, I/O metrics, etc.


Now we make an additional measurement: power utilization, which we correlate in time with the execution of an application. We want to know the average power used during the execution of a single job, and we also look at the variation of power during the job and the maximum power used.

Of course, lots of people measure power used by computers.  But, since most of these people are system managers or system designers, they don't have a reason to correlate power with specific applications and compute jobs.  They want to know the average and peak power used to run their overall workload, so they can plan for current and future power requirements.  This is important work, but it does not give them the ability to optimize their workload.

If you measure the average power used during the execution of one compute job, and you multiply that power by the elapsed time of the job, you have Application Energy - the electrical energy used to run that specific job.  This is a very convenient quantity, since it gives you a single number that relates power usage to compute jobs.  You can use Application Energy to optimize your workload, just as you use elapsed time.


A couple of examples:

1.  You can measure Application Energy for a given set of applications on two or more server models and then select the more energy-efficient model. You can use this Application Energy comparison together with the elapsed-time comparison to make speed vs. energy tradeoffs. If a job runs 30% faster but consumes 50% more Application Energy on Server A than on Server B, which is the better choice for your requirements?

2.  We are also using Application Energy to determine the most efficient way to run applications in parallel on a cluster of servers - a common way to run HPC codes. For one common HPC application, we ran the same job at three levels of parallelization and compared the elapsed times and Application Energy. The job used only 4% more Application Energy running 32-way parallel (on 32 cores) vs. 16-way parallel, but 20% more Application Energy running 64-way parallel vs. 32-way parallel. In other words, there is very little energy cost to using 32 cores and returning the results to the user much faster vs. using 16 cores, but a substantial energy cost to using 64 cores, which returns the results faster still.
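The parallelization tradeoff can be worked through with normalized numbers (only the ratios come from the text; the absolute joules are not published here):

```python
# Normalize the 16-way run's Application Energy to 1.0 and apply the
# quoted ratios (illustrative; the measured values are not given).
energy_16 = 1.00
energy_32 = energy_16 * 1.04   # 32-way uses only 4% more energy than 16-way
energy_64 = energy_32 * 1.20   # 64-way uses 20% more energy than 32-way

print(f"32-way vs 16-way: +{(energy_32 / energy_16 - 1) * 100:.0f}% energy")
print(f"64-way vs 16-way: +{(energy_64 / energy_16 - 1) * 100:.0f}% energy")
```

Compounding the two steps, the 64-way run costs roughly 25% more energy than the 16-way run, which is why the jump from 32 to 64 cores is the one that demands justification.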


Does anyone find this interesting, or agree (or disagree) with this approach?

I received a new HPC Multi-core server – Learning from standard benchmarks


Now that we have configured the hardware components and firmware settings in a known and, we hope, optimal way, it is time to run the first performance test. Personally, I like to start with the memory latency benchmark from the lmbench suite, since there is a lot to learn from it. This benchmark measures the time required to move different amounts of data from the caches and memory to a core.

A modern server has multiple levels of cache - typically two or three - and the highest level may be shared among several cores. The output of lmbench shows a plateau for each level of cache, revealing both the latency to that cache and its size. If the system has a NUMA memory organization, lmbench also shows the latency of each NUMA "hop" as the data traverses the system topology.

Usually, the latency of each cache level is a fixed number of processor clock cycles, and it is both interesting and important to know this number. It is interesting because it lets you compare different cache architectures even when the systems you are comparing run at different clock speeds. For example, it is interesting to me that the first-level cache latency of several modern servers, with very different architectures, is 3 cycles.

And it is important because you are not always sure what clock speed your system is actually running at, and you can compute it using lmbench. Or you might encounter the problem we hit yesterday: on a two-processor pre-production server, the two processors were running at two different clock speeds! That is of course not good - a user observed strange performance, but lmbench took the mystery out of the problem and told us exactly what was happening.
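The arithmetic behind that trick is simple: if you trust that a cache level's latency is a fixed cycle count, a latency measured in nanoseconds reveals the actual clock speed. A small sketch, with assumed numbers:

```python
def clock_ghz(latency_ns, latency_cycles):
    """Infer core clock speed from a cache latency measured by lmbench.

    latency_cycles = latency_ns * clock_GHz, so clock_GHz = cycles / ns.
    """
    return latency_cycles / latency_ns

# Assumed example: a 3-cycle L1 cache measured at 1.0 ns implies a 3.0 GHz core.
print(clock_ghz(latency_ns=1.0, latency_cycles=3))  # 3.0
```

If the same benchmark on two supposedly identical processors yields different inferred clocks, you have found the kind of misconfiguration described above.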

You can run lmbench in different ways, and each provides additional understanding. 

The "stride" method steps through memory addresses at a fixed stride and provides the best-case latency.

The "random" method accesses memory addresses in random order, showing the worst-case latency. It is very useful to know both the best-case and worst-case latencies: if their ratio is small, memory performance is fairly predictable; if the ratio is large, prediction becomes difficult.

If you set lmbench to access memory in units of the VM page size, then you can observe the impact of TLB misses on latency.

And, if you run lmbench simultaneously on cores which share a cache, you learn about the behavior of the shared cache.
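The stride vs. random distinction above can be sketched with a simple pointer chase. This is a toy Python version of what lmbench's memory latency test does in C; Python's interpreter overhead swamps much of the cache effect, so treat it as an illustration of the two access patterns, not a real measurement:

```python
import random
import time

def chase(order, rounds=10):
    """Pointer-chase through a cycle over `order`; return ns per access."""
    n = len(order)
    nxt = [0] * n
    for i in range(n):
        # Each element points to the next one in the given visiting order.
        nxt[order[i]] = order[(i + 1) % n]
    idx = order[0]
    t0 = time.perf_counter()
    for _ in range(rounds * n):
        idx = nxt[idx]
    return (time.perf_counter() - t0) / (rounds * n) * 1e9

n = 1 << 15
sequential = list(range(n))      # "stride" pattern: best-case latency
shuffled = sequential[:]
random.shuffle(shuffled)         # "random" pattern: worst-case latency
print(f"stride: {chase(sequential):.1f} ns/access")
print(f"random: {chase(shuffled):.1f} ns/access")
```

In a compiled implementation over a footprint larger than the caches, the random chase is typically several times slower than the strided one; that ratio is exactly the predictability measure discussed above.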

Memory latency is a great tool if you need to unravel the architecture of a new server!

HP Tops TBR Customer Satisfaction Survey

Technology Business Research (TBR) recently released its 2Q-2008 x86 Server Customer Satisfaction Study results, and the news is EXCELLENT for HP this quarter.

HP ProLiant achieved the #1 ranking in both the Weighted Satisfaction Index (WSI) and the Overall Satisfaction Rating. The WSI ranking is a weighted customer satisfaction score that considers what is important and how satisfied customers are with each area of customer satisfaction. The overall satisfaction rating is the response from customers when asked “Overall how satisfied are you?”.  Together, these two indicators provide us with a quick snapshot of customer satisfaction for each vendor surveyed.

With these results, HP now has sole leadership and no longer shares the #1 position with the competition. HP was the only vendor to demonstrate gains in every category measured by TBR (nine in total). The results in Hardware Quality/Reliability, Management Tools, Server Value and Parts Availability were significantly higher than industry averages, and TBR identified these four areas as providing a competitive advantage for HP over its competition. HP also scored higher than the industry average in the all-important category of Overall Satisfaction.

According to TBR, customer expectations continue to rise as customers seek out new technology advances to counter serious datacenter challenges. Customers are also searching for much more than attractive purchase pricing: server value incorporates everything from reliability to technical support and management tools, and the quality of the vendor relationship also figures strongly into customers' perceptions of value.

Our focus on delivering the products customers want continues to pay off. For nine consecutive reporting periods, HP has remained ahead of the competition as the brand perceived as most differentiated, and customers continue to recognize the value HP delivers in the ProLiant brand.

And it gets better! Customer loyalty is another excellent category for HP. For the last seven quarters, HP has held the #1 position in customer loyalty, and HP's scores for the last two quarters are the highest on record (since 2Q-2005).

One does not have to look far for reasons why HP dominates customer loyalty: customers recognize differentiated value. One example is server management. TBR reports that HP customers place management higher in priority than customers of our competitors; customers expect more from HP, and the results show HP has delivered. HP Systems Insight Manager (HP SIM), a single software tool providing a consolidated view of everything you need to manage your server, comes with every HP server.

TBR also measures customer expectations, customer priorities, and how well a company is meeting those needs. According to the survey, customers consider hardware quality, parts availability, and overall value the three most important attributes of a server vendor. The scores show HP is meeting customer expectations in every measurable category and, more importantly, ranks #1 in all three of those areas. HP is focused on these attributes and sees them as critical to our customers' success. We continue to drive improvements in these areas by working with our developers, partners, and suppliers to build quality into every server. As an example, by designing servers for customer self-repair (CSR), we give customers the flexibility to schedule and make a repair at their convenience.

Where do we go next? These results have set the bar higher and clearly identified the market leader. Our challenge moving forward will be to accelerate the pace and deliver on the expectations of our customers. We are committed to doing this, as evidenced by our offerings from the last few quarters: HP is listening to customers and delivering servers that reduce power consumption, expand virtualization, and enable success in the most demanding and complex computing environments.

I welcome your comments - tell me how we are doing. Are there areas we can improve? Let me know; HP's focus is on the customer, and we need your input.
