Around the Storage Block Blog
Find out about all things data storage from Around the Storage Block at HP Communities.

FCoE...it's almost time to get moving!

by Sean Fitzpatrick, StorageWorks Storage Platforms Business Development


Over the past few months, we've all witnessed positive milestones for FCoE...but...is it ready? 


This past June, the Fibre Channel standards T11.3 BB-5 (backbone) working group finalized, or ratified, the spec and voted to forward it to INCITS for public review and eventual publication next year. Is the BB-5 spec good enough to develop products against? Absolutely!


Also in June, HP announced the availability of two ToR (top of rack) FCoE switches, one from Brocade and one from Cisco. Other companies have also announced future availability of FCoE products. This is a positive move in the right direction for the industry and for customers looking to lower their TCO over time.


More resources and tools are available today from Brocade, Cisco, Emulex, and QLogic to assist with evaluation and planning. HP's own SAN Design Guide offers solid guidance on building new SANs or modifying existing ones, including a dedicated application note on FCoE implementation guidelines.


As I've stated before, this stage is a very important part of the early adopter phase (Phase I), giving customers a chance to evaluate the technology. Real adoption (Phase II) has yet to develop, and it will once more mature products become available. In the meantime, IT managers should investigate adopting FCoE for small pilot projects, focusing on how it will improve their overall TCO.


From a timeline perspective, 2009 is the year for proof of concept and planning. In 2010 I'm anticipating a wider breadth of product availability from all the major suppliers.


HP's position is that FCoE won't overtake traditional Fibre Channel next week, or even next year. But now that FCoE is starting to move, it's getting exciting.



More on SPC-2

Last month we announced a world record SPC-2 result from the XP24000. At the same time we extended yet another challenge to EMC to join the rest of the world in publishing benchmarks. They continue to decline the offer, arguing “representativeness”. I thought I’d clear up the “representativeness” question.

 

EMC’s argument that this XP is too costly starts from the assumption that SPC-2 represents only a video streaming workload. To quote: “128 146GB drive pairs in your 1152 drive box? A pure video streaming workload?” We actually see a widely diverse set of workloads running on the XP. The power of having both SPC-1 and SPC-2 benchmark results is that together they provide audited data that applies to almost any workload mix a customer might have. But if one had to pick the most common workload, it would probably be database transaction processing by day, with backup and data mining workloads joining the transaction processing by night. SPC-2 models the backup and data mining aspects, while SPC-1 represents the transaction processing. SPC-2 is about a lot more than video streaming.

 

When people need bulletproof availability and high performance for transaction processing, they turn to high-end arrays like the XP24000. That's probably the most common use for a high-end array. Our data indicates that the average number of disks in an initial XP purchase is right around the 265 in our SPC-2 configuration. Some of those purchases won't include as many controllers as the SPC-2 configuration. But an increasing number use thin provisioning, and in those cases customers often buy all the controllers they'll need up front, delaying the disk purchases as you'd expect with thin provisioning. So the configuration and workloads look pretty representative.

 

Then consider a real use of the benchmark. An array's maximum number is key in assessing its performance. Below that limit you can adjust disks, controllers, and cache to get fairly linear performance changes; once you reach the limit, all you can do is add another array. So once you know an array's maximum number, you know its whole range of performance. By maxing out the controllers we provide that top-end number, giving the most useful result. For sequential workloads like backup and data mining, maxing out the disk count isn't necessary, whereas it generally is for random workloads like transaction processing.
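
To make that concrete, here's a minimal sketch of that interpolation in Python. Every number below is invented for illustration; nothing here is an audited XP figure:

def estimated_throughput(max_mbps, controllers, max_controllers):
    """Estimate sequential throughput for a partial configuration,
    assuming roughly linear scaling below the array's maximum."""
    if controllers > max_controllers:
        raise ValueError("cannot exceed the array's maximum configuration")
    return max_mbps * controllers / max_controllers

# Hypothetical array: 10,000 MB/s at its maximum of 8 controller pairs.
print(estimated_throughput(10_000, 4, 8))  # ~5,000 MB/s with half the controllers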

 

Now let’s discuss how one might use the XP’s SPC-2 results. Let’s say you need a high-end array for transaction processing. The most common case we see also requires backup and data mining operations at night, in a limited time window. Since the XP’s SPC-2 result is twice that of the next closest high-end array, you can expect it to get the backup and data mining done with half the resources of the next fastest array. But with SPC-2 you can go further. You can look up the specific results for the backup and data mining workloads, which are around 10GB/s for the XP24000. Knowing how much data you need to back up and mine, you can estimate how much of the system’s resources you’ll need to finish within your time window, and therefore what’s left for transaction processing during that window. You can scale that to the size of array you need for transaction processing. And you can compare against other arrays that have posted results. All using audited data, before you ever get sales reps involved.
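
Here's a rough sketch of that arithmetic. The data volume and window are made-up inputs; the 10GB/s figure is the XP24000 ballpark cited above, and you'd substitute the audited results for whatever arrays you're comparing:

def fraction_of_array_needed(data_tb, window_hours, array_gbps):
    """Fraction of an array's sequential bandwidth needed to move
    data_tb terabytes of backup/data mining traffic in window_hours."""
    required_gbps = data_tb * 1024 / (window_hours * 3600)  # TB -> GB, hours -> seconds
    return required_gbps / array_gbps

# Example: 100 TB to back up and mine in a 4-hour window, on an array
# delivering ~10 GB/s on those workloads.
used = fraction_of_array_needed(100, 4, 10.0)
print(f"~{used:.0%} of bandwidth consumed; ~{1 - used:.0%} left for transactions")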

 

SPC benchmarks are all about empowering the storage customer.  XP24000’s SPC-2 result is important to the most common uses for high end arrays, as well as for less common uses like video editing.  The configuration we used looks pretty typical, with choices made to make the result most useful to customers.  The cost is pretty typical for this kind of need.  At HP we expect to continue providing this kind of useful data for customers.  And our challenge to EMC to publish a benchmark result still stands, though they’ll probably continue inventing reasons not to.

EMC, We Challenge You!

EMC, we're once again throwing down the gauntlet. Today the XP24000 put up the highest SPC-2 benchmark result in the world. The top spot for demanding workloads such as video streaming goes to the XP. Once again, your DMX is a no-show. And once again we challenge you, this time to put up an SPC-2 number. Every other major storage vendor is now posting SPC results. Every other major storage vendor is now giving customers standard, open, audited performance results to show what they've got. You remain the only vendor keeping your product performance behind a smoke screen of mysterious numbers and internal testing. We challenge you to join us in the world of openness and let customers quit guessing at how low the DMX's performance really is!

FCoE: On your mark...get set...Go! (Part 2)

by Sean Fitzpatrick


 


Now that we are on the roller coaster of hype, what is the next cycle for FCoE? 

 

I would argue that we are still in the first cycle of its adoption. 

 

If you’re a student of Geoffrey Moore’s Technology Adoption Life Cycle model, the first logical stage is creating and gaining market interest, the early market. For the sake of argument, I’m calling it ‘hype’: the same feeling you get when you see a shiny new penny. After the penny is passed around and the shine wears off, the next logical stage is the early adopters.

  

But how can you adopt a technology when the products are close to production ready, but not customer ready?

 

In my opinion, production ready means a product that can be consistently duplicated through a defined manufacturing process. At this stage, not all the kinks have been worked out, the process doesn’t have to be automated, and the products are generation one (Gen 1), alpha, or beta versions. Component and labor costs are high, economies of scale are not yet a concern, and quality isn’t the most important output of the process.

 

Most of the products available today are not what I would call ‘customer ready.’ I define customer ready as a product that has been tested and qualified, is supported by a minimum of one operating system, has defined minimum/maximum configuration parameters, can sustain a light to medium workload, and has an errata list of unsupported features and capabilities. Depending on your own criteria, such a product may still fall into the alpha or beta category.

 

So just where is FCoE and how should I be planning? 

 

I’m going to go out on a limb and try to map out the adoption phases of FCoE over the next 3-5 years. As a reference, look back at the history of iSCSI and its life cycle from the late ’90s to today. If we’ve learned anything over the years, it’s that nothing moves as fast as we would like it to.

 

From my perspective, 2008 through mid-2009 is about understanding the benefits, limitations, and expectations, with time spent exploring use case scenarios. This is an excellent time to look at Gen 1 products and do some planning for your next data center refresh or new installations.

 

In late 2009 through 2010, second generation products start to take shape and economies of scale start to show up. As Geoffrey Moore said, ‘Innovation is valuable only if it helps us achieve economic advantage.’ If FCoE is going to become mainstream, two hurdles have to be cleared: 1) customer TCO has to come down, and 2) IT teams (network, server, storage) have to learn new ways of working across functions.

 

Beyond 2011 really depends on how the first couple of years unfold. Those first years will set the stage for market acceptance, use case scenarios, and follow-on technology innovations.

Solid State Disks – Sorry EMC, Fibre Channel Disks aren’t dead yet! (Part 2)

By Jim Hankins


Our view of Solid State Disks (SSDs), borne out by lab tests, is that some of the workloads that use 15K RPM drives today could take advantage of and cost-justify SSDs. Even then, not all 15K RPM drives are expected to be displaced. Rather, we expect SSD to be used as a premium performance tier in well-designed, balanced storage deployments. We expect 10K and 7.2K RPM disk drives will continue to be popular in disk arrays, along with virtualization of external storage devices, as customers consolidate data storage and implement multiple tiers of storage capacity for their business needs.


While SSDs offer significantly improved performance and lower energy consumption compared with HDDs, they also carry a huge cost premium. Today's average SSD costs many times more than the equivalent capacity and grade of HDD. While the cost gap may narrow, we expect it to remain significant for several years. These differences mean that SSDs will be relegated to applications that require extremely high performance, need relatively little capacity, and can justify a very large cost premium.
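
A quick back-of-the-envelope comparison shows why. Every price and performance number below is an invented placeholder, not HP pricing; plug in real quotes to see where the trade-off falls for your workload:

# Compare $/GB vs. $/IOPS for a hypothetical SSD and 15K RPM HDD.
drives = {
    "SSD":     {"price_usd": 8000, "capacity_gb": 146, "iops": 20000},
    "15K HDD": {"price_usd": 800,  "capacity_gb": 146, "iops": 180},
}

for name, d in drives.items():
    print(f"{name}: ${d['price_usd'] / d['capacity_gb']:.2f}/GB, "
          f"${d['price_usd'] / d['iops']:.3f}/IOPS")

# Capacity-bound workloads favor the HDD on $/GB; a small, IOPS-bound
# working set is what can justify the SSD's premium on $/IOPS.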


The trend of decreasing HDD shipment volumes, particularly FC drives, has been underway for quite some time, but it has very little to do with the potential of SSD drives. It is largely due to the rapidly improving performance, capacity, and reliability of SAS/FATA/SATA drives. In the past few years, we have witnessed SAS and SATA drives establishing their presence in tier 2 and tier 3 storage environments, then gradually moving up into the primary storage space. We expect that SAS/SATA drives will replace FC/xATA over the next few years for many of the applications that use FC drives today. We view this as a natural evolution of disk drive technology, similar to what we've seen in the past.


BOTTOM LINE


High-end HDDs will be around for many years to come. We don't see SSDs achieving major market penetration until 2012, and even then at less than a third of high-end HDD shipments. SSD has been very successful with consumers, but it will take many years to be ready for the enterprise and gain adoption. Currently we ship more than 45% of the disk drives in the market (according to IDC), and we constantly monitor disk drive market conditions and trends. We have been investing R&D in SSD across our complete portfolio for some time now; it's not something we just added to our roadmaps because of a competitor's movement in this space.


With all that being said, I sure would like to hear from you if you've recently deployed SSDs in your disk array and whether or not they met your expectations. Or let me know if you are considering SSD in the very near future, too.


And don't hold your breath, EMC. Your prediction for SSD drives in 2010 will turn out just like "Tape is Dead": wrong!
