I read a clip from IDC guy Dan Harrington explaining why he thought the economic downturn crimped x86 server deployments more than Unix server deployments. "It's easier to freeze purchases on x86, which are a commodity at this point," he said.
Easier to delay x86 deployment? Sounds right. But x86 server = commodity? Hardly.
Clustering was supposed to commoditize x86 servers. Then it was utility computing. Then virtualization. Then the cloud. Ten years ago, I bought into the Commodity Theory. Whitebox servers would soon be king of the data center. x86 is x86, right? Everyone would soon be slapping Tyan and Asus motherboards into off-the-shelf ATX chassis.
But that's not what happened. In fact, the percentage of people using whitebox servers has actually dropped. A lot. IDC still predicts more whiteboxes, but their latest full-year estimates (2008) say that whiteboxes make up about 10% of servers deployed today -- down from about 14% five years ago, and something like half of what it was a decade ago. Server admins have been gravitating away from them and toward "fancy" servers from HP, IBM, and Dell.
Why? Well, one reason is that the price tags on entry-level HP and Dell x86 servers have dropped a lot, to the point that assembling your own servers won't save you money. But lower cost != commodity.
Clustering, virtualization, and cloud computing actually do better when run on servers that have a few decidedly non-whitebox-style features:
- OS and ISV certifications
- Plug-ins and APIs for mainstream management tools
- Big, knowledgeable user communities who've developed best practices
- Around-the-globe support, and "one throat to choke" when issues arise
- Automated deployment tools, power capping, and other things that lower operations costs (which can dwarf acquisition costs)
There does seem to be a growing interest in bare-bones servers. Products like the ProLiant SL series and IBM's iDataPlex exemplify this trend. But these aren't general-purpose servers, and they still come with some of those key non-commodity features.
It doesn’t happen often, but here is one of those situations in life where that dismal science, economics, can be useful and fun.
First, a little definition…
Price transparency is defined as a situation in which both buyer and seller know what products, services or capital assets are available and at what price.
Now, price transparency is a way of life in the business of standards-based, x86 servers. This even includes blade servers.
Go to any of the major vendors’ web sites such as Dell.com, IBM.com, HP.com or even resellers such as CDW.com and you can freely look up at least what each vendor’s list price is. You can bet that we vendors do this all the time to ensure we are ‘competitive’ with each other. And so do all of our customers. In fact, our customers know our list price and our competitors’ before any of us set foot in a customer’s place. That helps keep us vendors on our toes to deliver better products while keeping prices lower for customers. The x86 market is a living example of price transparency.
Until you get to the newest member of the x86 server community …
Cisco claims on their web site that their recently announced Unified Computing System is "Reducing total cost of ownership at the platform, site, and organizational levels". The glaring omission is the cost itself. One would presume the claim covers a competitively priced set of compute, storage, enclosure, interconnect, management-tool, software-license and support components. But aside from Cisco's word for it, this cannot be verified.
This is because Cisco, unlike the other server vendors, does not publish their UCS list price on their web site. Nor do their resellers seem to. This makes it difficult for customers (or competitors) to independently match features to prices in a standards-based industry.
Given Cisco's traditionally high margins on network plumbing gear (65%, vs. the 20% margins typical of x86 servers), vendors, analysts and customers could be forgiven for being suspicious of high prices for UCS. In fact, one could see some of Cisco's UCS prices needing to be three times as high as industry averages to meet their business model.
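To see where a number like that comes from, here's a back-of-envelope sketch (my own illustration with a made-up unit cost, not actual Cisco or HP figures): gross margin is (price − cost) / price, so hitting a margin target means pricing at cost / (1 − margin).

```python
# Back-of-envelope margin math (hypothetical unit cost; not real pricing data).
# Gross margin = (price - cost) / price, so price = cost / (1 - margin).

def price_for_margin(cost, margin):
    """Price needed to hit a given gross-margin fraction on a given cost."""
    return cost / (1.0 - margin)

cost = 100.0                                   # assumed identical hardware cost
server_price = price_for_margin(cost, 0.20)    # typical x86 server margin
network_price = price_for_margin(cost, 0.65)   # typical network-gear margin

print(round(server_price, 1))                  # 125.0
print(round(network_price, 1))                 # 285.7
print(round(network_price / server_price, 2))  # 2.29
```

On these assumptions, the 65%-margin price is about 2.3 times the 20%-margin price for the very same hardware cost; fold in any extra cost structure and "three times industry averages" is within reach.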
So come on, how about a little free-market, industry-standard price transparency? Are we all about the same price? Is Cisco really cheaper?
Michael P. Kendall
A couple of weeks ago, lots of server admins started deploying the new HP BL460c G6 server blade. Coincidentally, this year is the 20th anniversary of the Compaq SystemPro 386/33 -- the first "PC server" (to use the 1989 throwback term for "x86 server"). There's a rad (another 1989 word) connection between the two.
The same Compaq engineering team that built the SystemPro evolved into the HP ProLiant team that developed the BL460c. Not only are some of the SystemPro inventors still here, but we've still got lots of the original SystemPro specs -- and it's the similarities between the first SystemPro and the BL460c G6 that will surprise you.
I liked VH1's "I Love the 80's" series, but I can't say the same for the fonts on the spare parts list for the SystemPro (ftp://ftp.compaq.com/pub/supportinformation/techpubs/qrg/systempro.pdf). The impact of Moore's Law dominates any comparison to the newest half-height server blade, but some similarities are amazing:
Both are dual-processor servers using the latest Intel CPUs.
Both offer up to 12 slots for memory.
Both support RAID arrays of internal hard drives -- and on both, you can directly attach 8 drives.
Both use Insight Manager and SmartStart software for management and deployment.
Both use the term "Flex" to describe a key technology. "Flex/MP" was the designation for the SystemPro's processor and memory architecture, while "Flex-10" names some of the networking capabilities of the BL460c G6.
Of course, the performance differences are mind-boggling. The original SystemPro's 386 processor ran at 33MHz, or about 1% of the BL460c G6's top CPU frequency. And back then, the 8 IDE hard drives could combine for about 2 gigabytes of storage...about a hundredth of the capacity of a single modern SAS drive. The BL460c can also hold about 400 times as much RAM...not to mention that the blade is about a tenth the size of the SystemPro.
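If you want to check those ratios, the arithmetic is quick (a sketch only; the modern-side figures are my approximations implied by the comparison, not exact BL460c G6 specs):

```python
# Rough SystemPro-vs-BL460c G6 ratios (1989 vs. 2009; the 2009-era numbers
# are approximations implied by the comparison above, not exact specs).

systempro_mhz = 33          # 386/33
bl460c_mhz = 3300           # assumed ~3.3 GHz top clock, per the ~1% figure

systempro_storage_gb = 2    # eight IDE drives combined
sas_drive_gb = 200          # assumed capacity of one contemporary SAS drive

print(round(systempro_mhz / bl460c_mhz * 100, 1))  # 1.0  (percent of clock)
print(sas_drive_gb // systempro_storage_gb)        # 100  (one drive holds 100x)
```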
IT folks have lost some capabilities in these 20 years, of course. If you opt for a BL460c G6 over the SystemPro, you're giving up the 2400 baud modem. You'll also have to toss out your stacks of 360K floppy disks -- no floppy drive in the BL460c. And without that floppy drive, how are you going to load the Token Ring network drivers?
I would post some screenshots from one of the SystemPros that's still in our lab...but I can't seem to get my CONFIG.SYS and AUTOEXEC.BAT settings right. Reply back if you can help me with that, or if you have similar fond memories of the industry's first PC Server.
Every time a competitor introduces a new product, we can't help but notice they suddenly get very interested in what HP is blogging during the weeks prior to their announcement. Then, when the competitor announces, the story is very self-congratulatory: "we've figured out what the problem is with existing server and blade architectures." The implication is that blade adoption has somehow been constrained by the very thing they've just introduced, and that everyone else is really stupid.
HP BladeSystem growth has hardly been constrained, with quarterly growth rates of 60% to 80% and over a million BladeSystem servers sold. So I have to wonder if maybe we already have figured out what many customers want: to save time, power, and money in an integrated infrastructure that is easy to use, simple to change, and able to run nearly any workload.
Someone asked me today, "will your strategy change?" I guess, given the success we've had, we'll keep focusing on the big problems of customers - time, cost, change and energy. It sounds boring, and it doesn't get a lot of buzz or Twitter traffic, but it's why customers are moving to blade architectures.
Our platform was built and proven in a step-by-step approach: BladeSystem c-Class, Thermal Logic, Virtual Connect, Insight Dynamics, etc. Rather than proclaim at each step that we've solved all the industry's problems or sparked a social movement in computing, we'll continue to focus on doing our job: providing solutions that simply work for customers and tackle their biggest business and data center issues.
Welcome to the x86 server market dance floor. We hope you know the Jitterbug.
For a category that everyone brushes off as a commodity market, there sure is a lot of excitement these days. First came the rumors that IBM wants out. Ashlee Vance at the New York Times wrote about it again last week. Very doubtful. IBM has some serious technology chops; take it from someone who dances with IBM every day. Plus, they know just as we do that a strong server business is the linchpin of the infrastructure. Then came Cisco...and they don't seem to want to answer the tougher questions.
Let's take all this tap-dancing as an opportunity to understand what it takes not only to survive, but to thrive, in the server market. There are three keys to making it: having the right point of view, a strategy to execute it, and a business model to support it.
Point of view:
To understand the right point of view, you first need to know a little secret about the x86/64 server market: it's not about the servers. It's about infrastructure and ecosystem, and that takes years of cultivation, investment and commitment. In other words, you really have to want to be here, because there are easier ways to make money.
The only way to solve an infrastructure problem is from an infrastructure point of view. It simply falls apart if you come at it from a network, storage or server-centric view of the world. We get this, and that's why more businesses choose HP as their dance partner than any other. And you thought it was just about servers!
The only correct point of view is an integrated one, running from the business need down to the infrastructure service that delivers it.
Let's break down the integration problem this way.
The business needs the service. That comes from the application. The application sits on the server. The server connects to everything else.
Or think of it another way. The work is done on the server. The information sits on the storage. The network is the plumbing that passes the information between the two. All of this lives inside the data center, and the staff is there as the check and balance to keep all these moving parts working together.
You can see two critical things here: application/server integration and infrastructure/facility/people integration.
Now, it's one thing to say you can integrate all this stuff. It's another to have a business model to support it. Our model works well for a few reasons. First is the ability to balance two contradictory ideas: innovation and low cost. You have to have a commitment to continuous innovation, with the ability to scale that business in order to deliver ever-lower cost to the customer.
Our early background as a server company taught us not to mess with Moore's Law. We think we've done pretty well at helping customers take cost out and get more for their money over the years. During that time, we also turned our attention to the rest of the picture - the management, the facility, the storage and, yes, the network. There's still a ton of work to do here.
How do businesses choose their dance partner on the server dance floor? It comes down to application integration, management, and connectivity.
Reason 1: You are only as good as your dance partner
Reason 2: You have to keep up with the music
Reason 3: You have to pace yourself
Reason 4: You have to practice