Data Central
Official news from HP

Data Central Exclusive: Rob Enderle analyzes HP’s cloud strategy

 

Following CEO Léo Apotheker’s keynote at last Monday's strategy summit, HP execs Shane Robison and Dave Donatelli gave a more detailed look into the $143 billion* cloud computing market, HP’s plan of attack, and some of the solutions HP already has in place.

 

Of course, you can always watch a replay of Shane and Dave’s presentation here (it begins 1 hour, 22 minutes, and 30 seconds into the program) and review their slides.  But we thought it’d be helpful to add an expert’s take to the conversation, so we asked Rob Enderle (principal analyst with Enderle Group) to share his thoughts on HP’s strategy.

 

As usual, Rob offered a point of view that was both educational and insightful.  His point about how servers, storage, and networking must converge to meet the design requirements of the cloud deserves particular notice (as does his surprising analogy to HP’s successful strategy to dominate the imaging and printing market).

 

Take a moment to read Rob’s analysis below -- published in full -- and let us know any thoughts or counterpoints in the comments or on Twitter (you can find Rob on Twitter @enderle and HP @hpnews). 

 

For more on the HP Summit, check out last week’s post.

 

Rob Enderle, Enderle Group

The biggest and fastest growing opportunity for technology companies is in the concept of flexibly hosting services on the internet.  This concept -- which embraces everything from the web side of iTunes and YouTube (for consumers) to email and storage repositories (for businesses) -- is called “the cloud.”


HP is applying to this data center/cloud opportunity the same strategy that allowed them to dominate the printer market.  That strategy is based on a focused attack by the entire company on the singular idea of a simple, flexible, affordable solution that is easy to use, easy to implement, and competitively superior to the alternatives.

 

To accomplish this, they had to acquire networking, strengthen software, and bring the server and storage groups onto the same team so that the solution could meet those design requirements.  These efforts are particularly attractive to enterprise companies, which represent the largest segment of the market, because of the massive opportunities for cost savings and the huge potential to simplify what is currently an excessively complex data center environment.

 

While this is still a work in progress, HP is further down this integration path than any other vendor in their class.   

 

Notes:

*2013 expected market estimate, HP internal analysis

HP nanotechnology research looks to sustain HP server market leadership for the long run

(Update: read the 2/28/11 New York Times story "Remapping Computer Circuitry to Avert Impending Bottlenecks" for more on this subject)

 

“What will future computer systems look like?” asks HP Labs distinguished technologist Parthasarathy Ranganathan in a cover story for Computer magazine, the flagship publication of the IEEE Computer Society.

 

In his article [PDF], Ranganathan suggests computer science is at what he calls an ‘inflection point,’ one that will provoke a radical rethink of traditional computer system design. 

 

This new generation of systems will likely be built around nanostores, argues Ranganathan.  Nanostores are non-volatile memory chips that also act as processors – and they promise to dramatically change the speed and volume at which computer systems can work. 

 

Colleagues of Ranganathan in HP’s Intelligent Infrastructure Lab are already developing such memory chips, having made significant advances in memristor-based technology over the last several years.  The memristor, a circuit building block with origins dating to the 1970s, is a resistor with memory and represents the fourth basic circuit element in electrical engineering.  Because memristors can both store information and act as processors, they have the potential to be the building blocks from which nanostores can be built.
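
For readers wondering why the memristor counts as the "fourth" basic element, the standard textbook relations below (our addition, not part of HP's materials) show how each element links one pair of circuit variables: voltage v, current i, charge q, and flux linkage φ.  The memristor covers the one remaining pairing, charge and flux, and because its memristance M depends on the charge that has already flowed through it, it behaves as a resistance with memory.

    \begin{align*}
    \text{resistor:}  \quad & dv       = R\,di \\
    \text{capacitor:} \quad & dq       = C\,dv \\
    \text{inductor:}  \quad & d\varphi = L\,di \\
    \text{memristor:} \quad & d\varphi = M(q)\,dq
      \;\;\Rightarrow\;\; v(t) = M\bigl(q(t)\bigr)\,i(t)
    \end{align*}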

 

It won’t be too long, HP researchers believe, before computer engineers like Ranganathan will be able to use such memristor-based nanostores in their new system designs.

 

A new direction for the server market

HP sees its memristor research as key to maintaining the company’s lead in the global server market, says Marc Hamilton, VP of high performance computing for HP’s Industry Standard Servers and Software group. 

 

While the company continued to dominate the competition in 2010 with an industry-leading 31.4% share of worldwide revenue (source: Gartner estimate), Hamilton believes that changes in customer demand will challenge even the most successful players in today’s market.

 

He notes, for example, that many organizations currently relying on x86 industry standard servers are starting to want ‘hyperscale’ or high performance computing (HPC) systems.  These are already being used to run cloud services, in large-scale online social, retail, and financial operations, and in systems attached to vast arrays of sensors distributed around the world.

 

Those uses are only going to increase, believes Hamilton. “Consumer goods companies are now using hyperscale computational fluid dynamics to design their packaging,” he explains.  “And you’re starting to see high performance computers move out of the research lab and into clinical use for genome sequencing.  In fact, almost every large company today is starting to use high performance computing technology.”

 

But even newer GPGPU-based systems, like one HP built recently for Tokyo Tech, are constrained by the fundamentals of traditional computing architecture.  “If you just use a traditional programming model,” says Hamilton, “they put a tremendous burden on the network and storage interface. People are starting to look at latency-hiding algorithms just to use these systems effectively.”
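
Hamilton’s point about latency-hiding algorithms boils down to overlapping data movement with computation so the processor is never left idle waiting on the network or storage interface.  The sketch below is a minimal, hypothetical illustration in Python (not HP code; fetch_chunk and process_chunk are placeholder stand-ins for a slow transfer and the real computation): while one chunk is being processed, the next is already being fetched in the background.  The same double-buffering idea shows up in GPGPU programming as overlapping data transfers with kernel execution.

    # Minimal double-buffering sketch: overlap "fetching" the next chunk of data
    # with computing on the current one, hiding transfer latency.
    # Hypothetical illustration, not HP code; fetch_chunk/process_chunk are stand-ins.
    from concurrent.futures import ThreadPoolExecutor
    import time

    def fetch_chunk(i):
        """Stand-in for a slow network or storage read."""
        time.sleep(0.1)  # simulated transfer latency
        return list(range(i * 1000, (i + 1) * 1000))

    def process_chunk(chunk):
        """Stand-in for the actual computation on one chunk."""
        return sum(x * x for x in chunk)

    def run(num_chunks=8):
        total = 0
        with ThreadPoolExecutor(max_workers=1) as pool:
            pending = pool.submit(fetch_chunk, 0)  # prefetch the first chunk
            for i in range(num_chunks):
                chunk = pending.result()  # waits only if the fetch fell behind
                if i + 1 < num_chunks:
                    pending = pool.submit(fetch_chunk, i + 1)  # fetch next in background
                total += process_chunk(chunk)  # compute overlaps with the fetch
        return total

    if __name__ == "__main__":
        print(run())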

 

From the ground up: memristors and rethinking the computer

Hamilton and HP colleagues like Ranganathan argue that, to compete in the coming age of exascale computing, technology companies need to reconsider the fundamental assumptions behind traditional computer architecture.  And they believe they’ve found the means to do that with the memristor.

 

HP Labs recently announced a plan to begin manufacturing memristor-based solid state memory chips, an innovation its researchers predict will be followed by chips that can both store data and perform computation in one place.

 

[Graphic: Memristor Milestones]

“The potential here is to transform computing through shifting the balance of compute, storage and networking,” Ranganathan explains.

 

When stacked together, memristors would, in effect, turn into nanostores that both hold information and do computation on that information.  They could also be packed together into ‘microblades.’

 

“Potentially,” says Ranganathan, “that could let you shrink a whole rack of storage onto a single blade. And then you could shrink a whole data center into a couple of racks of memristor-based nanostores.”

 

A familiar model – for once

There are plenty of thorny technical issues to be resolved before that can happen, of course.  A major question is how lightning-fast memory/processing units would work in a data network, a challenge that HP Labs research is already addressing.

 

Crucially, though, the fundamental concept of using nanostores and microblades is a familiar one, says ISS’s Hamilton.  “It’s similar to what’s being used in big web data centers already,” he explains.  “Only a system like Hadoop, which Google uses, is entirely software based.  This works in much the same way, only much faster.”
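
To make the comparison concrete, a Hadoop-style job splits work into a map step that runs next to the data and a reduce step that combines the partial results; that division of labor is, conceptually, what a nanostore would provide in hardware.  The toy Python word count below is our own illustration of that dataflow shape, not Hadoop or HP code.

    # Toy map/reduce word count illustrating the dataflow model Hamilton describes;
    # illustrative only, not Hadoop or HP code.
    from collections import defaultdict
    from itertools import chain

    def map_phase(document):
        """Runs 'near the data': emit (word, 1) pairs for one document."""
        return [(word.lower(), 1) for word in document.split()]

    def reduce_phase(pairs):
        """Combine partial results: sum the counts for each word."""
        counts = defaultdict(int)
        for word, n in pairs:
            counts[word] += n
        return dict(counts)

    if __name__ == "__main__":
        documents = ["the cloud is the cloud", "memristors store and compute"]
        mapped = chain.from_iterable(map_phase(d) for d in documents)  # map step per document
        print(reduce_phase(mapped))  # {'the': 2, 'cloud': 2, 'is': 1, ...}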

 

That’s unusual.  Typically, new compute models require people to think about software differently – and that slows the model’s adoption.  “The computer world has been struggling for the last five years to use the multi-core x86 processor,” notes Hamilton.  “Now they are adapting to the use of GPGPU technology.  But if you look at both the nanostore in a web environment and the microblade in a dataflow model – neither requires you to rearchitect how you have been doing things.  That provides for a very interesting capability to accelerate performance without causing massive change to the software.”

 

The generation after next

HP’s next-generation servers won’t feature memristors; the first memristor-based, flash-type memory chips aren’t due to appear for several years.

 

But memristor-enhanced servers are currently on the HP road map, says Hamilton.

 

“Right now we’re doing a lot of conceptual design and looking at how this all fits into our overall architecture,” he explains.  “And what we’re seeing is that this new generation of servers is going to enable us to massively extend our successful Converged Infrastructure strategy by enabling a new kind of holistic control of storage, compute and networking.”
