Eye on Blades Blog: Trends in Infrastructure
Get HP BladeSystem news, upcoming event information, technology trends, and product information to stay up to date with what is happening in the world of blades.

"If something doesn't exist, we'll make it"

Over the holidays, I took my daughter to see a movie about a Cajun frog.  I admit to a pang of jealousy as I passed crowds lined up for screenings of "Avatar". However, given my situation -- namely a 5-year-old clamoring for popcorn and a Princess movie -- I made the best choice for my needs.


It turns out those Avatar-watching throngs got to see the result of another best choice -- one made by a group of IT experts in Miramar, New Zealand.


Weta is an Academy Award-winning studio that did the digital effects for Avatar.  The imaginations at Weta Digital have created some incredible virtual realities. Jim Ericson from Information Management quotes Weta's Paul Gunn as explaining that 'if it's something that doesn't exist, we'll make it.'  Pretty amazing innovations coming from a relatively small place on the other side of the world from Hollywood and Silicon Valley.


In an article and blog, Jim sketches for us the 4000-server facility Weta used to render the VFX of the blockbuster.  One eye-opener: the final output from this behemoth server farm fits on a single hard drive.


Weta's space- and power-constrained facility uses advanced techniques like blades and water cooling.  Performance is a paramount need -- so much so that their server clusters comprise seven of the world's 500 largest supercomputers.  But their workloads didn't just need massive scalability; they also required high bandwidth between individual server nodes and relatively local storage.


As Jim points out, they chose to build their infrastructure with HP BladeSystem, using the double-dense BL2x220c  server blade.  This very innovative, compact server (shown in the video below) let them achieve, in their words, 'greater processing density than anything else found on the market'. 


Actually, any engineer could stick 64 Intel® Xeon® processors into a 17-inch-high box and get it to run.  However, very few computer companies have the expertise -- and the resources -- to make such a thing affordable and efficient, and to warranty that it will run without pause for 3+ years.
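How do you get to 64?  Here's a quick back-of-the-envelope sketch.  The bay count, node count, and socket count below are my assumptions (a 10U, 16-bay c7000-class enclosure and two dual-socket nodes per BL2x220c); the post itself doesn't spell out the exact configuration:

```cpp
// Back-of-the-envelope density math for a BL2x220c-filled enclosure.
// Assumed values (not spelled out in the post): a 10U enclosure with
// 16 half-height bays, two independent server nodes per BL2x220c blade,
// and two Xeon sockets per node.
#include <iostream>

int main() {
    const int bays_per_enclosure = 16; // half-height bays (assumed c7000-class chassis)
    const int nodes_per_blade    = 2;  // the "double-dense" part of the BL2x220c
    const int sockets_per_node   = 2;  // dual-socket nodes (assumed)

    const int sockets = bays_per_enclosure * nodes_per_blade * sockets_per_node;
    std::cout << sockets << " Xeon sockets in one ~17-inch (10U) box\n"; // prints 64
    return 0;
}
```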


Even more important: Weta possessed something relatively rare when they chose HP BladeSystem. They were already experts in bladed architectures.  Their prior infrastructure was based on IBM blade servers, so they already expected the space- and power-saving benefits of blades.  Weta was seeking the best bladed architecture.  And Weta determined that, for them, HP BladeSystem was the best choice.

Harnessing Horsepower: Cores, Capacity, and Code

Last week at IDF, two Intel technologists spoke about two different fixes for the same problem: processor capacity is outgrowing the typical server's -- and its software's -- ability to put it to use.
 
For the past 5 years, x86 CPU makers have boosted performance by adding more cores within the processor.  That's enabled servers with ever-increasing CPU horsepower.  RK Hiremane (speaking on "I/O Innovations for the Enterprise Cloud") says that I/O subsystems haven't kept pace with this processor capacity, moving the bottleneck for most applications from the CPU to the network and storage subsystems.




He gives the example of virtualized workloads.  Quad-core processors can support the compute demands of a bunch of virtual machines.  However, the typical server I/O subsystem (based on 1Gb Ethernet and SAS hard drives) gets overburdened by the combined I/O demand of all those virtual machines.  He predicts an imminent evolution (or revolution) in server I/O to fix this problem.
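Here's a rough, purely illustrative version of his point; the consolidation ratio and per-VM traffic numbers below are my own assumptions, not figures from the talk:

```cpp
// Purely illustrative: the consolidation ratio and per-VM traffic
// below are assumptions, not figures from Hiremane's talk.
#include <iostream>

int main() {
    const int    vms_per_host      = 24;     // assumed consolidation ratio on a quad-core box
    const double mbps_per_vm       = 80.0;   // assumed average network demand per VM (Mb/s)
    const double gbe_link_mbps     = 1000.0;
    const double ten_gbe_link_mbps = 10000.0;

    const double aggregate = vms_per_host * mbps_per_vm; // 1920 Mb/s
    std::cout << "Aggregate VM demand: " << aggregate << " Mb/s\n";
    std::cout << "1GbE load:  " << 100.0 * aggregate / gbe_link_mbps     << "%\n"; // ~192% -- saturated
    std::cout << "10GbE load: " << 100.0 * aggregate / ten_gbe_link_mbps << "%\n"; // ~19%
    return 0;
}
```

Even with modest per-VM traffic, the aggregate demand is roughly double what a single 1Gb link can carry, while a 10Gb link still has plenty of headroom.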


Among other things, he suggests solid-state drives (SSDs) and 10 gigabit Ethernet will be elements of that (r)evolution.  So will new virtualization techniques for network devices.   (BTW, some of the changes he predicts are already being adopted on ProLiant server blades, like embedded 10GbE controllers with "carvable" Flex-10 NICs.   Others, like solid-state drives, are now being widely adopted by many server makers.)
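For anyone wondering what "carvable" means in practice: Flex-10 lets you split one physical 10Gb port into up to four FlexNICs, each of which the OS sees as its own adapter with its own slice of the bandwidth.  The sketch below just models that partitioning; the specific split is an arbitrary example I made up, not a recommended configuration:

```cpp
// Toy model of carving one physical 10Gb Flex-10 port into FlexNICs.
// Flex-10 supports up to four FlexNICs per port; the bandwidth split
// shown here is an arbitrary example, not a recommendation.
#include <iostream>
#include <string>
#include <vector>

struct FlexNic {
    std::string purpose;
    double      gbps;   // bandwidth carved out of the 10Gb port
};

int main() {
    const double port_gbps = 10.0;
    const std::vector<FlexNic> nics = {
        {"VM production traffic", 4.0},
        {"Live migration",        3.0},
        {"iSCSI storage",         2.0},
        {"Management",            1.0},
    };  // four FlexNICs sharing one physical port

    double allocated = 0.0;
    for (const auto& n : nics) {
        std::cout << n.purpose << ": " << n.gbps << " Gb/s\n";
        allocated += n.gbps;
    }
    std::cout << "Allocated " << allocated << " of " << port_gbps << " Gb/s\n";
    return 0;
}
```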


Hold on, said Anwar Ghuloum.  The revolution that's needed is actually in programming, not hardware.  There are still processor bottlenecks holding back performance; they stem from software that hasn't made the shift to the parallelism that x86 multi-core requires.


He cites five challenges to mastering parallel programming for x86 multi-core:
* Learning Curve (programmer skill sets)
* Readability (ability for one programmer to read & maintain another programmer's parallel code)
* Correctness (ability to prove a parallel algorithm generates the right results)
* Scalability (ability to scale beyond 2 and 4 cores to 16+)
* Portability (ability to run code on multiple processor families)


Anwar showed off an upcoming C++ library called Ct from RapidMind (now part of Intel) that's being built to help programmers tackle these challenges.  (Intel has a Beta program for this software, if you're interested.)
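Rather than guess at Ct's API, here's a sketch in plain standard C++ (not Ct) of the kind of threading boilerplate -- partitioning work, managing threads, merging results -- that those five challenges grow out of, and that data-parallel libraries like Ct aim to replace with a single high-level call:

```cpp
// Hand-rolled data parallelism with plain C++ threads -- the kind of
// boilerplate (partitioning, thread management, merging results) that
// a data-parallel library aims to hide behind a single high-level call.
// Illustrative standard C++ only; this is not Ct's actual API.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Sum a large array in parallel by giving each hardware thread a slice.
double parallel_sum(const std::vector<double>& data) {
    const unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(n_threads, 0.0);
    std::vector<std::thread> workers;

    const std::size_t chunk = data.size() / n_threads;
    for (unsigned t = 0; t < n_threads; ++t) {
        const std::size_t begin = t * chunk;
        const std::size_t end   = (t == n_threads - 1) ? data.size() : begin + chunk;
        workers.emplace_back([&data, &partial, t, begin, end] {
            partial[t] = std::accumulate(data.begin() + begin, data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();
    return std::accumulate(partial.begin(), partial.end(), 0.0);
}

int main() {
    const std::vector<double> data(1 << 20, 1.0);
    std::cout << "sum = " << parallel_sum(data) << "\n"; // 1048576, printed as 1.04858e+06
}
```

Even in this toy example you can see where readability, correctness, and scalability get hard: the partitioning logic, the shared partial-results vector, and the hand-coded merge all have to be reasoned about again every time the code changes or the core count grows.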


To me, it's obvious that the "solution" is a mix of both.  Server I/O subsystems must improve (and are improving), and ISVs are getting better at making their applications scale with core count.

IDC: Right numbers, wrong words

I read a clip from IDC guy Dan Harrington explaining why he thought the economic downturn crimped x86 server deployments more than Unix server deployments.  "It's easier to freeze purchases on x86, which are a commodity at this point," he said.
 
Easier to delay x86 deployment?  Sounds right.   But x86 server = commodity?  Hardly.


Clustering was supposed to commoditize x86 servers.   Then it was utility computing.  Then virtualization.  Then the cloud. Ten years ago, I bought into the Commodity Theory.  Whitebox servers would soon be king of the data center.   x86 is x86, right?   Everyone would soon be slapping Tyan and Asus motherboards into off-the-shelf ATX chassis. 


But that's not what happened.  In fact, the percentage of people using whitebox servers has dropped.  A lot.  IDC still forecasts whitebox growth, but its latest full-year estimates (2008) put whiteboxes at about 10% of servers deployed today -- down from about 14% five years ago, and roughly half the share of a decade ago.  Server admins have actually been gravitating away from them and toward "fancy" servers from HP, IBM, and Dell.


Why?  Well, one reason is that the price tag on an entry-level HP or Dell x86 server has dropped a lot, to the point that assembling your own servers won't save you money.  But lowered cost != commodity.


Clustering, virtualization, and cloud computing actually do better when run on servers that have a few decidedly non-whitebox-style features:



  • OS and ISV certifications

  • Plug-ins and APIs for mainstream management tools

  • Big, knowledgeable user communities who've developed best practices

  • Around-the-globe support, and "one throat to choke" when issues arise

  • Automated deployment tools, power capping, and other things that lower operations costs (which can dwarf acquisition costs)


There does seem to be a growing interest in bare-bones servers.  Products like the ProLiant SL series and IBM's iDataPlex exemplify this trend.  But these aren't general-purpose servers, and they come with some of those key non-commodity features.

Thoughts on #HPTF keynote

After thinking over Paul Miller's presentation last night, here's what stuck with me.


Convergence is the mega-trend



  • It's a bigger idea with bigger implications than consolidation and unification - it's transformational

  • Convergence collapses the stack. That means less stuff, lower cost, and less complexity in a converged infrastructure.

  • HP is looking at processes, applications, facilities and more as part of convergence - not just servers, networking, and storage


Everything as a Service is about economics



  • Virtual storage delivered as a service is a powerful idea for virtual infrastructures.  Think of a SAN at the cost of a bunch of disks over Ethernet.  That's a big economic shift.

  • Private clouds are about time to market. The example of one HP customer going from 33 days to 108 minutes to design, deploy, and go live with their infrastructure solution is not only a huge time savings but a huge competitive advantage.

  • Extreme scale brings extreme challenges.  The economics in that reality are about pennies and seconds.  You really have to think differently.


Here are Paul's presentation slides, or you can catch him on video - taken as a live stream from the front row.  We'll have the fancy version on video later today.

