If I'm interpreting the comments from Bob Nusbaum correctly, then Cisco has decided to stop using the term "DCE" (Data Center Ethernet), and will use the terms "CEE" or "DCB" going forward.
That's a good move. As they are generally used, DCE, CEE, and DCB all refer to essentially the same thing. Our acronym-translation lists will get a little shorter, and there will be fewer arguments about the technical distinctions between them.
DCB (Data Center Bridging) has the most well-defined meaning. It represents a set of standards that an IEEE task group is developing that, among other things, helps govern multiple traffic types running over an 802 network.
CEE (Converged Enhanced Ethernet), most famous for being part of the recipe that allows FCoE, refers to DCB done over Ethernet. The term CEE was once trademarked by IBM (but no longer is).
DCE (Data Center Ethernet) was Cisco's trademarked term for CEE. Bob Nusbaum (of Cisco) says the term was causing confusion.
On Wednesday, Soni Jiandani announced an expansion of Cisco’s UCS family, adding rack servers to their hardware roadmap.
I’m really scratching my head at this move. Soni says that her goal with these products – 1U and 2U servers, each with two Intel Xeon processors – is to broaden the reach of UCS beyond environments that could adopt Cisco’s blade hardware. That sounds reasonable – it’s like posting your used car on Craigslist even though you’ve already taped a “For Sale” sign in the window; you’ll reach more potential customers if you do both.
But servers aren’t stand-alone products, and hanging a “For Sale” sign on a pizza-box server won’t solve any problems. A server has to have a big ecosystem of software, services, and compatible solutions around it. In fact, I’ve said before that nobody wants to buy a server at all; the only reason they acquire one is that it’s a necessary component to get their email system or online trading system working. Over the years, HP has built up hundreds of software partners and certifies thousands of solutions on its servers. That’s why ProLiant servers have been the most popular server line for over ten years. I can’t see Cisco developing that kind of solution expertise overnight. And since these servers aren’t slated for release for another six months, Cisco is set even further back in developing that ecosystem.
Another thing Cisco’s announced this week touched on partners and training. John Growdon revealed plans for two new Cisco certifications around IT architecture design: “Data Center Architect” and “Data Center Engineer”. Per John, these certs for individuals will help current Cisco-certified network fabric experts expand their skillset, so they can help end-users with the servers and other pieces from UCS.
(The Register quotes John as acknowledging that 70% of those experts are already selling and deploying servers today. Hopefully these new certs won’t require these DCNIs to go back and rip out every server they’ve installed in the past!)
Kidding aside -- I agree with John completely that the people designing any part of your data center – whether it’s the servers, networks, or CRAC units – should be highly trained. You want them to have lots of experience, too. HP and HP partners have been designing data centers and IT architectures for decades; we’ve racked up millions of hours in testing and operations experience. That’s why people trust HP when it comes to the data center. Cisco has years of experience in the networking realm, which is probably why it has one of the industry’s most respected networking certification programs. But its lack of experience on the server side makes me question how much value a Cisco certification in servers would have.
So, that’s why I’m scratching my head. Cisco has lots of smart people -- but I can't understand these new UCS announcements.
This is one of the methods used to provide failover for network traffic.
From our BladeSystem support staff:
"This is just a “redundancy mechanism” that is part of the HP Teaming driver.Not trying to over simply things but here it goes. NIC A and NIC B are teamed together. NIC A is receiving frames okay, NIC B has not received any frames.Teaming driver sends “Test Frame” out NIC A, sets timers and waits for NIC B to receive it. This is repeated a few times.If NIC B still has not received a frame after a few intervals, it is determined the RX (Receive) Path to this NIC is broken, thus NIC is marked as “Receive Path Validation” and is removed as an Active member of the team. This is a good thing because we don’t want a NIC in the team that can’t hear… So…either there is a problem with that NIC receiving frames (fix it) or there is a problem in the Teaming driver detecting “false” RX Path failures.Either way the recommendation is never to disable it as a solution to the problem.
Every 18 months, customers around the world expect to see server performance double for the same or less cost. As you know, it's been this way for some time; a consequence of a little idea called Moore's Law. That's the good and bad of Moore's Law: great for customers, challenging for server businesses. Our team would tell you that it just takes years of practice to keep up.
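To put the doubling expectation above into numbers, a quick sketch treats "performance doubles every 18 months at the same cost" as simple compounding. The 18-month period is the figure cited in the post; everything else here is just arithmetic.

```python
def relative_performance(years, doubling_period_years=1.5):
    """Performance multiple after `years`, assuming one doubling per period."""
    return 2 ** (years / doubling_period_years)

print(round(relative_performance(3)))    # one typical server generation: 4x
print(round(relative_performance(10)))   # a decade: roughly 100x
```

That hundredfold improvement per decade is the baseline customers have come to expect from compute, which is what makes the networking comparison below so stark.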
My question today is "why doesn't networking obey Moore's Law?" The jump from 1Gb to 10Gb came years apart and with a huge jump in price. The most over-hyped next step, Fibre Channel over Ethernet (FCoE), is still well behind 8Gb Fibre Channel yet more expensive. Why such a limited view on network consolidation, and why is it still so far away? Shouldn't the industry be focused on consolidating all Ethernet ports and cables to cut costs NOW, with 10Gb performance delivered today? We think so. It just takes the will to rethink the sacred cows and challenge the conventional wisdom. More on that in a minute.
I'm sure someone will throw out Metcalfe's Law or McGuire's Law of Mobility to counter my point here, but those benefits are in the consumer realm, not the data center. I wish I could get the same switch-upgrade bargains that I get on my cell phone every 2 years.
Continuing and extending the Moore's Law argument, I could ask the same question about price/performance for storage and facilities too.
Yesterday, I received an ad for a 1TB drive for $89! By our estimate, the same capacity in a standard data center Fibre Channel SAN runs around $20,000 for that first TB of storage. And how many drives in the average infrastructure are utilizing only 10% of their total capacity? I suspect there are a lot of cheap terabytes to be had out there.
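The gap in those numbers is easy to work out. This is back-of-the-envelope math using only the figures in the post; the 10% utilization number is the post's estimate, not a measured value.

```python
consumer_drive_cost_per_tb = 89   # $89 for a 1TB retail drive, per the ad
san_cost_first_tb = 20_000        # ~$20,000 for the first SAN terabyte (estimate)
utilization = 0.10                # assumed share of SAN capacity actually used

raw_premium = san_cost_first_tb / consumer_drive_cost_per_tb
cost_per_used_tb = san_cost_first_tb / utilization

print(round(raw_premium))     # the SAN terabyte costs ~225x the retail drive
print(int(cost_per_used_tb))  # at 10% utilization: $200,000 per terabyte actually used
```

Even granting that enterprise storage buys you much more than raw capacity, a 225x raw premium (and 10x that again once utilization is counted) is exactly the kind of gap Moore's Law economics would not tolerate on the compute side.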
Facilities is a tougher issue because there are so many external forces driving the costs - energy and real estate are the obvious ones. Still, the model for today's data centers is based on blueprints drawn up to support mainframe architectures in the 1960s. Fresher ideas are being implemented today, but it's time to accelerate that innovation as well.
That all brings us to the question: "how do we bring networks, storage, and facilities more in line with Moore's Law?"
We think convergence is the key. We talked last month about how that's a much more interesting idea than simple 'consolidation'. Convergence is not only about less; it's about transformation that creates a whole new world of possibilities. It means combining things to create something new, usually rendering the original pieces obsolete.
Server and network convergence – such as the combination of Virtual Connect and Flex-10 networking embedded in the BL495c virtualization blade – is one example. Moving storage closer to the compute and squeezing out the network is another compelling idea. And what about converging infrastructure systems with facilities management, power distribution, and cooling?
What do you think it will take to align networks, storage and facilities closer to the improvements delivered to compute over the last couple of decades?
Aaron continued his series, 'Blades and Virtualization Aren’t Mutually Exclusive' this week. It's a very good read, especially if you're new to virtualization on blades and trying to figure out the best configuration.
Part 3 covered IBM Traditional Expansion Options and discussed different options for drives, memory and network connectivity in support of virtual servers.
Part 4 basically did the same thing for HP BladeSystem today. Aaron promised more analysis of our virtualization blade and Virtual Connect Flex-10 in future posts. Those are the ones I'm most looking forward to.
There are lots of great comments on both of these posts, so if you have something to add, check them out and share your thoughts with Aaron.