By Jieming Zhu
Almost every emerging technology suffers from a mismatch between reality and expectation as a consequence of over-hyping. In the enterprise storage industry, that mismatch tends to be amplified many times over. Nobody should treat data, the lifeblood of business, lightly. Therefore, the more fundamental the changes a technology brings, the longer it takes to qualify for mainstream adoption.
Unfortunately, the way some vendors market SSD technology in the enterprise space gives me the uneasy feeling that we are hyping it in overdrive mode, and that as an industry we may suffer again. In particular, three types of claims (or shall we say, myths?) are worth examining:
Claim #1: SSD will replace XYZ (type) Hard Drive by 2010 (or fill in with your favorite year, the earlier the better!)
To which, I have only one simple question: has tape gone away?
It seems that every year for the last 20 years (at least), multiple gigabytes' worth of text has been written about why and when tape could, should, and would be "replaced" by disk. It's safe to say that many of those gigabytes are still backed up on tape today and will be for the foreseeable future.
This myth also reminds me of another prediction made many years ago: iSCSI will replace Fibre Channel. Not only did Fibre Channel not die, it has enjoyed healthy growth year after year. It is also evolving into FCoE (yet another candidate for technology hype?).
The point is, enterprise storage users rarely rip and replace. The industry always evolves, often more slowly than we expect or hope, and for the right reasons. First, we cannot throw data away. Second, it is ultimately application needs that determine customers' infrastructure and device choices. Finally, when it comes to data storage, proven reliability always matters most.
Claim #2: SSD will save you $X, reduce power consumption by Y kilowatts, and cut Z square feet of rack space (this usually comes with a nice table comparing the total cost and performance of SSD+HDD vs. HDD alone in achieving some predetermined service-level objectives. Again, the bigger the deltas, the better!)
The problem is, whoever presents this nice comparison always conveniently omits important details: What application was used for testing? What was the I/O access pattern (e.g., sequential vs. random)? What was the actual usable configuration? And so on.
The sad reality is that by the time SSDs have been integrated into a hybrid environment (on which the vast majority of enterprise storage subsystem solutions will be based) and made sufficiently reliable, useful, and practical for general-purpose storage, the cost can easily triple and meaningful performance can drop by multiple factors. Using the same configurations listed in those tables, one can just as easily produce the opposite cost and performance numbers by biasing the workload towards sequential access.
It all depends on the application! Citing theoretical numbers or contrived test results only sets misleading expectations.
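A back-of-the-envelope calculation shows how easily the verdict flips with the metric. All prices, capacities, and IOPS figures below are purely hypothetical assumptions for illustration, not vendor data: judged by dollars per random IOPS, the SSD wins handily; judged by dollars per gigabyte, the metric a sequential, capacity-bound workload cares about, the HDD wins just as handily.

```python
# Hypothetical tier characteristics -- illustrative numbers only.
ssd = {"price_usd": 2000.0, "capacity_gb": 200, "random_iops": 20000}
hdd = {"price_usd": 300.0, "capacity_gb": 1000, "random_iops": 200}

def cost_per_iops(drive):
    """Dollars per random IOPS: the metric a random-I/O workload cares about."""
    return drive["price_usd"] / drive["random_iops"]

def cost_per_gb(drive):
    """Dollars per gigabyte: the metric a capacity-bound workload cares about."""
    return drive["price_usd"] / drive["capacity_gb"]

# Random-I/O view: the SSD looks far cheaper.
print(f"SSD $/IOPS: {cost_per_iops(ssd):.3f}")  # 0.100
print(f"HDD $/IOPS: {cost_per_iops(hdd):.3f}")  # 1.500

# Capacity view: the HDD looks far cheaper.
print(f"SSD $/GB: {cost_per_gb(ssd):.2f}")  # 10.00
print(f"HDD $/GB: {cost_per_gb(hdd):.2f}")  # 0.30
```

The same two devices, the same price tags, and two opposite conclusions, which is exactly why a comparison table is meaningless without the workload behind it.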
Part 2 will continue the discussion!
In the meantime, please share your thoughts on SSD/SST. Are you planning to implement it, and why? What challenges do you see with SSD/SST? When do you see solid state taking off?
Have a good day.