I just had a discussion about data center networking – a recurrent one! – that can be summarized as: Which is better, proprietary or open standard?
It’s an evergreen question, common to virtually all IT domains. Proprietary choices are generally attractive, promising to solve problems before slower open-standard processes can reach actual implementation. Data center networking relies heavily on proprietary implementations of technologies and protocols, adopted one after the other for decades to solve increasing challenges to architectures and operations. As a result, adopting open standards in an existing data center networking environment may require a complex transformation; a good migration strategy is a key success factor.
Before launching a probably challenging transformation initiative, the tough question is: Which is the best choice between the two? I prefer to tackle this discussion from a different angle: in which cases is either of the two the best choice? In my experience, a black-or-white approach never works in a real operational environment.
So, how to define your own “grayness” level?
As a practical matter, you should avoid a one-size-fits-all strategy, even when a stretched adoption of proprietary technologies may seem to provide an elegant solution to virtually any problem. This approach looks simple and homogeneous across the data center, but it leads to operational challenges as further requirements increase complexity, exponentially expanding configuration options and eventually forcing ad hoc implementations on a case-by-case basis.
Open-standard technologies don’t push you to adopt tightly integrated stacks or vendors’ practices. Interoperability shifts the focus from full adoption of a given paradigm to designing your own “best fit” architectural building blocks, optimized to meet specific classes of requirements and enabling highly standardized sets of services. In the majority of cases, the advantages of interoperability pair with the standardization of services, resulting in a simplified infrastructure footprint, streamlined operations, and reduced networking costs.
True, a proprietary implementation may make some features available more quickly. But this is often irrelevant from the service perspective: alternative options can ensure compliance with requirements, and organizations outside IT are often simply not ready for, or not interested in, those features.
Depending on your degree of “grayness”, a portion of your data center networking may still need a specific implementation that is more efficiently handled by proprietary technologies. If this is actually what you need, isolate these domains, making sure that proprietary lock-in doesn’t propagate throughout the whole infrastructure. A reference architecture should include methods to separate the two kinds of domains while keeping them interoperable through standard protocols.
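As a sketch of this isolation pattern, the border of a proprietary fabric domain can peer with the standards-based core over plain eBGP, so that only routes – not vendor-specific state – cross the boundary. The addresses and AS numbers below are hypothetical (RFC 5737 documentation prefixes and private ASNs), and the industry-standard CLI syntax is purely illustrative:

```
! Border device of the proprietary fabric domain (hypothetical private AS 65010)
router bgp 65010
 ! Standard eBGP peering toward the open, standards-based core (hypothetical AS 65000)
 neighbor 192.0.2.1 remote-as 65000
 address-family ipv4 unicast
  ! Advertise only the domain's aggregate prefix; any vendor-specific
  ! fabric protocol stays contained inside the domain
  network 198.51.100.0 mask 255.255.255.0
  neighbor 192.0.2.1 activate
```

Whatever runs inside each domain – a proprietary fabric protocol on one side, standard IGPs on the other – the interconnect remains a well-defined, vendor-neutral interface.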
This is one of the aspects that I will discuss in my sessions TB2090 “Network Reference Architecture” and TB2954 “Network Infrastructure Optimization” at HP Discover 2012 in Las Vegas. Come and meet me in the Discover Zone or at a Meet the Expert session, or book your one-hour introductory session for the Network Transformation Experience Workshop.