A couple of weeks ago, lots of server admins started deploying the new HP BL460c G6 server blade. Coincidentally, this year is the 20th anniversary of the Compaq SystemPro 386/33 -- the first "PC server" (to use the 1989 throw-back term for "x86 server"). There's a rad (another 1989 word) connection between the two.
The same Compaq engineering team that built the SystemPro evolved into the HP ProLiant team that developed the BL460c. Not only are some of the SystemPro inventors still here, but we've still got lots of the original SystemPro specs -- and it's the similarities between the first SystemPro and the BL460c G6 that will surprise you.
I liked VH1's "I Love the 80's" series, but I can't say the same for the fonts on the spare parts list for the SystemPro (ftp://ftp.compaq.com/pub/supportinformation/techpubs/qrg/systempro.pdf). The impact of Moore's Law dominates any comparison to the newest half-height server blade, but some similarities are amazing:
Both are dual-processor servers using the latest Intel CPUs.
Both offer up to 12 slots for memory.
Both support RAID arrays of internal hard drives -- and on both, you can directly attach 8 drives.
Both use Insight Manager and SmartStart software for management and deployment.
Both use the term "Flex" to describe a key technology. "Flex/MP" was the designation for the SystemPro's processor and memory architecture, while "Flex-10" names some of the networking capabilities of the BL460c G6.
Of course, the performance differences are mind-boggling. The original SystemPro's 386 processor ran at 33MHz, or about 1% of the BL460c G6's top CPU frequency. And back then, the 8 IDE hard drives could combine for about 2 gigabytes of storage...about a hundredth of the capacity of a single modern SAS drive. The BL460c can also hold about 400 times as much RAM...not to mention that the blade is about a tenth the size of the SystemPro.
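Since the post invites spec comparisons, here's a quick back-of-envelope script for the ratios above. The specific figures (a 3.33GHz top clock, 96GB max RAM, 300GB SAS drives, 256MB SystemPro max memory) are my own rough assumptions for illustration, not official specs:

```python
# Rough spec comparison: 1989 Compaq SystemPro vs. 2009 HP BL460c G6.
# All figures are approximations assumed for illustration, not official specs.
systempro = {"cpu_mhz": 33, "max_ram_gb": 0.256, "total_storage_gb": 2}
bl460c_g6 = {"cpu_mhz": 3330, "max_ram_gb": 96, "sas_drive_gb": 300}

cpu_ratio = systempro["cpu_mhz"] / bl460c_g6["cpu_mhz"]
print(f"SystemPro clock is about {cpu_ratio:.0%} of the G6's top frequency")

ram_ratio = bl460c_g6["max_ram_gb"] / systempro["max_ram_gb"]
print(f"The G6 holds roughly {ram_ratio:.0f}x as much RAM")

storage_ratio = systempro["total_storage_gb"] / bl460c_g6["sas_drive_gb"]
print(f"All 8 SystemPro drives hold under {storage_ratio:.1%} of one SAS drive")
```

The ratios come out close to the post's figures: roughly 1% of the clock speed, and under a hundredth of a single drive's capacity.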
IT folks have lost some capabilities in these 20 years, of course. If you opt for a BL460c G6 over the SystemPro, you're giving up the 2400 baud modem. You'll also have to toss out your stacks of 360K floppy disks -- no floppy drive in the BL460c. And without that floppy drive, how are you going to load the Token Ring network drivers?
I would post some screenshots from one of the SystemPros that's still in our lab...but I can't seem to get my CONFIG.SYS and AUTOEXEC.BAT settings right. Reply back if you can help me with that, or if you have similar fond memories of the industry's first PC Server.
Every 18 months, customers around the world expect to see server performance double for the same or less cost. As you know, it's been this way for some time; it's a consequence of a little idea called Moore's Law. That's the good and bad of Moore's Law: great for customers, challenging for server businesses. Our team would tell you it just takes years of practice to keep up.
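It's worth pausing on how fast that 18-month cadence compounds. A quick sketch (the doubling period is the one claimed above; everything else is just arithmetic):

```python
# Compounding effect of "performance doubles every 18 months" at constant cost.
def moores_factor(years, doubling_period_months=18):
    """Performance multiplier after `years` at one doubling per period."""
    return 2 ** ((years * 12) / doubling_period_months)

for years in (5, 10, 20):
    print(f"{years} years -> roughly {moores_factor(years):,.0f}x for the same cost")
```

Over the SystemPro-to-BL460c span of 20 years, that cadence predicts a performance gain on the order of 10,000x at the same cost.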
My question today is "why doesn't networking obey Moore's Law?" The jump from 1Gb to 10Gb Ethernet came years apart and with a huge jump in price. The most over-hyped next step is Fibre Channel over Ethernet (FCoE), well behind 8Gb Fibre Channel but more expensive. But why such a limited view on network consolidation, and why is it still so far away? Shouldn't the industry be focused on consolidating all Ethernet ports and cables to cut costs NOW, with 10Gb performance delivered today? We think so too. It just takes the will to rethink the sacred cows and challenge the conventional wisdom. More on that in a minute.
I'm sure someone will throw out Metcalfe's Law or McGuire's Law of Mobility to argue against my point here, but those benefits are in the consumer realm, not the data center. I wish I could get the same switch upgrade bargains that I do on my cell phones every 2 years.
Continuing and extending the Moore's Law argument, I could ask the same question about price/performance for storage and facilities too.
Yesterday, I received an ad for a 1TB drive for $89! We estimate that the same amount of capacity in a standard Fibre Channel SAN for the data center runs around $20,000 for that first TB of storage. How many drives in the average infrastructure are utilizing only 10% of their total capacity? I suspect there are a lot of cheap terabytes to be had out there.
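The gap above is worth quantifying. Both dollar figures come from the post itself; treat them as rough 2009 estimates, not a formal price study:

```python
# Back-of-envelope on the storage price gap quoted above.
consumer_drive_per_tb = 89   # $89 ad price for a 1TB consumer drive
san_first_tb = 20_000        # estimated cost of the first SAN terabyte

premium = san_first_tb / consumer_drive_per_tb
print(f"The SAN terabyte costs about {premium:.0f}x the consumer drive")

# If only 10% of SAN capacity is actually used, each *used* terabyte
# effectively costs ten times more again.
utilization = 0.10
print(f"Effective cost per used SAN TB: ${san_first_tb / utilization:,.0f}")
```

On these numbers, the SAN terabyte carries a roughly 225x premium before utilization is even considered.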
Facilities is a tougher issue because there are so many external forces driving the costs - energy and real estate are the obvious ones. Still, the model for today's data centers is based on blueprints drawn up to support mainframe architectures in the 1960s. Fresher ideas are being implemented today, but it's time to accelerate that innovation here as well.
That all brings us to, "how do we bring network, storage and facilities more in line with Moore's Law?"
We think convergence is the key. We talked last month about how that's a much more interesting idea than simply 'consolidation'. Convergence is not only about less; it's about transformation to create a whole new world of possibilities. It means combining things to create something new, usually rendering the things that were combined obsolete.
Server and network convergence, such as the combination of Virtual Connect and Flex-10 networking embedded in the BL495c virtualization blade, is one example. Moving storage closer to the compute and squeezing the network out of the middle is another compelling idea. What about converging infrastructure systems and facilities management, power distribution and cooling?
What do you think it will take to align networks, storage and facilities closer to the improvements delivered to compute over the last couple of decades?