Today, I'll give you an overview of the X9000 IBRIX Network Storage System with Patrick Osborne. Whenever I have X9000 questions, Patrick is the expert I turn to. He spent four years at IBRIX and has now been working on the solution for six years.
By Calvin Zito, aka @HPStorageGuy
My colleague Marc Farley has a very good screencast giving his analysis of EMC's proposed acquisition of Isilon. It's titled "What was surprising about EMC's bid for Isilon". It sounds like an episode of "Lost". Anyway, I just wanted to point you to Marc's video. I also wrote an article on EMC buying Isilon a couple of weeks ago, in case you didn't already see that.
Hope those of you in the U.S. are having a great Thanksgiving weekend! Oh, and Go Boise State!
By Calvin Zito, @HPStorageGuy
I wanted to congratulate our HP StorageWorks NAS team. Recently, the editors of Storage Magazine/SearchStorage.com issued their Quality Awards for NAS Systems. HP StorageWorks beat out NetApp, ending its three-year run, and took the award for Midrange NAS. The X1000 and X3000 Network Storage Systems, formerly the ProLiant Storage Servers and All-in-One Storage Systems, ranked #1 in five of the six categories, including product features, reliability, sales-force competence, technical support and initial product quality. HP's quality hardware, our partnership with Microsoft and HP's own Automated Storage Manager (ASM) software have made for a feature-rich and customer-friendly NAS appliance.
As for the Enterprise NAS results, where our Clustered Gateway (recently renamed the X5000), 4400 Scalable NAS and ExDS9100 (recently replaced by the X9720 Network Storage System) competed, HP led in the categories of technical support and repeat business, with 97% of respondents saying they would buy our products again, and in the overall rankings we came in a close third behind IBM and NetApp. The results of the fourth cycle of the NAS Quality Awards were derived from a survey of 340 qualified readers who rated 626 products/product lines.
We would like to send a big thank you to our customers for participating in the survey and rating us so well. Please check out the survey results and see what customers had to say about our products.
You can read the Storage Magazine Quality Awards for NAS Systems article by clicking here. Here's also a link to a PDF version of the ranking categories that is a bit easier to read.
By Ian Selway, WW Solution Marketing Manager
So as day 5 is really only a short day, I’ve waited to send my day 4 comments and I’m combining the summary of days 4 and 5 here at TechEd Berlin. Overnight from Wednesday to Thursday, our stand designers added a whole bunch of balloons to celebrate the 20th anniversary of HP’s ProLiant range of servers. All day we also had cupcakes on the stand that not only reminded delegates of the occasion but also helped attract folks to the stand. I’ve attached a picture of the HP stand all decked out with balloons. Of course, the other big news from overnight was the announcement of HP’s intention to acquire 3Com. We had expected a lot of questions from those attending, but I’m not sure what the experience of others was - aside from HP folks asking, I don’t think I had a single question. I guess everyone is so busy focusing on the bigger things happening between Microsoft and HP.
A lot of our customers who came out with us last night stopped by to tell us what a great time they had, and we even had customers who had been pure server customers come across to the storage section to look at the MDS 600. It seems word has spread about the flexibility, scalability and cost of these units, and I spent a lot of time on Thursday talking about zoned storage for HP BladeSystem. When we talked about deploying MDS 600 for Exchange Server 2010 mailboxes at under $2/GB, customers again told us how great this would be for deploying larger mailboxes. Several customers told us that they liked the idea that HP could offer a choice of storage and that we weren’t tied to only offering DAS or SAN but could offer both. I spent a lot of time directing our visitors to our eco-friendly collateral site so they could download the whitepapers, best practice guides and sizer information. The other great thing about being here at TechEd is the ability to meet with key developers from Microsoft. I had three really good interactions with Microsoft on Thursday, and I think when I get back to the US our engineering and marketing teams can really start demonstrating why Microsoft is going to be so good deployed on HP’s Converged Infrastructure.
Day 5 at TechEd was a lot shorter, and that was very welcome given what an active week it had been so far. A number of us from the StorageWorks team again supported the Microsoft virtualization kiosk over in their hall. For anyone who’s visited these types of shows, you’ll know the last day seems to be spent handing out all those trinkets the vendors bring, swapping your ‘swag’ for other vendors’ swag, and making sure you don’t have to carry any of it back. It never ceases to amaze me what delegates want to take home or back to the office. We did a huge trade in ‘Magic 8-Ball’ key rings. As soon as they went up on the booth, they disappeared as if by magic. That said, it did enable us to engage in conversation as we attempted to address any last-minute questions. Just after lunch, TechEd Berlin 2009 closed and it was time to close the booth, dismantle the demonstration and take time to reflect on the past week.
Everyone I spoke with from HP agreed we had a great week. I think we managed to register well over 1,000 attendees, or 13% of the total registered visitors. We took a significant number of customers through our NDA room and, most importantly, we demonstrated just how HP Converged Infrastructure comes together to be such an impressive platform for deploying and running Microsoft applications such as Exchange Server 2010 and Windows Server 2008 R2. We presented our StorageWorks integration capabilities with Hyper-V to over 600 people and demonstrated our broader storage offering to a significant number of delegates. Thanks must go to the European organizers, including John Stewardson, Till Stimberg and others too numerous to mention, to the employees from the worldwide business groups who helped support the event, and to De Umphry from the events team. Here’s looking forward to TechEd EMEA 2010… May it be as successful as this year.
By Dirk Kunselman, Product Manager
If I ask you to tell me the first thing that comes to mind when I mention NAS, you might reference high-end (and expensive) file serving appliances. Or you might mention consumer-class devices that are becoming more prevalent at your local electronics retailer. Or you might say that you know it's like a SAN, only different. You might even reveal that he's your favorite rapper. Oops, you lost me.
Fact is, NAS (Network Attached Storage) is often misunderstood and even more frequently underappreciated. Most folks associate it with files, but as NAS has evolved, it's taken on more than just file protocols and print services. The term is now almost synonymous with (and sometimes even replaced by) unified storage, meaning combined file and block storage. It's a great story, especially for small environments: why network and consolidate just one type of data when you can serve files for your clients and blocks for your servers, all from the same storage system?
That's where the new HP StorageWorks X1000 and X3000 Network Storage Systems come in. They're NAS devices, yes; but moreover, they're unified storage systems, since they all include an iSCSI target as standard. An X1000 model can be that single storage consolidation platform by itself, while an X3000 Gateway can turn an existing array or SAN into a unified consolidation solution (utilizing both SAN and Ethernet connections) by adding IP-based file and/or iSCSI protocols to it.
NAS isn't just about sharing files any more--it's about sharing information.
By Carol Kemp
In case you missed it, HP came out with several new products and solutions designed for SMBs to help them survive and thrive during this tough economy. To read the full press release, click here. Headlining the announcement are new storage consolidation solutions designed to help SMBs do more with less - less money and less time...
Think of the advantages of having one network storage device that doesn't care what kind of data you're storing, whether it's more files than you can handle or block-based application data, including gigantic databases. Think of the convenience of having one device to help you manage, protect and grow your business as your data grows. The new HP StorageWorks X1000 and X3000 Network Storage Systems enable customers to easily manage and optimize their storage capacity by consolidating file and block data into a unified storage solution. These new solutions increase performance by up to 30 percent over previous HP file and unified storage solutions. As a result, companies can reduce costs and simplify data management by improving capacity optimization of storage resources.
For SMBs making their first move from direct attached storage into shared storage, the requirements won't be the same as those of enterprise customers; in fact, most SMBs will seek to reduce the complexity usually associated with installing a first SAN. The HP StorageWorks 2000sa and 2000i G2 Modular Smart Arrays are designed to provide low-cost, high-performance entry-level SAN storage. These solutions deliver 33 percent more storage capacity per unit of rack space with SFF SAS drives than with 3.5" drives. The ability to use small or large form factor SAS or SATA drives, combined with scalability of up to 99 SFF drives, is unmatched by any entry-level array product in the market. The MSA2000sa G2, with its four SAS ports per controller (eight per dual-controller system), allows a customer moving from a single-host DAS environment to enjoy the benefits and economies of a SAN without building a switched infrastructure. Four dual-path or eight single-path hosts can directly access a single MSA2000sa G2.
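As a back-of-the-envelope check on that 33 percent figure, the arithmetic below compares capacity per rack unit for small and large form factor drives. The enclosure drive counts and per-drive capacities are illustrative assumptions chosen to show how such a comparison works, not MSA2000 G2 specifications:

```python
# Density comparison of SFF (2.5") vs. LFF (3.5") drives per rack unit.
# All drive counts and capacities below are illustrative assumptions.

SFF_DRIVES_PER_2U = 24   # assumed 2.5" drives in a 2U enclosure
LFF_DRIVES_PER_2U = 12   # assumed 3.5" drives in a 2U enclosure
SFF_CAPACITY_GB = 300    # assumed per-drive SFF SAS capacity
LFF_CAPACITY_GB = 450    # assumed per-drive LFF SAS capacity

# Capacity per rack unit (U) for each form factor
sff_gb_per_u = SFF_DRIVES_PER_2U * SFF_CAPACITY_GB / 2
lff_gb_per_u = LFF_DRIVES_PER_2U * LFF_CAPACITY_GB / 2

gain = sff_gb_per_u / lff_gb_per_u - 1
print(f"SFF: {sff_gb_per_u:.0f} GB/U, LFF: {lff_gb_per_u:.0f} GB/U, "
      f"gain: {gain:.0%}")
```

With these assumed figures, doubling the drive count per enclosure more than offsets the smaller per-drive capacity, yielding roughly a one-third capacity gain per rack unit.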
What if HP combined everything you need to realize the full potential of virtualization into one package? The HP Virtualization Bundles are the industry's first integrated solution that allows SMBs to more easily and cost-effectively deploy virtualization by transforming existing server disk drives into highly available shared storage. The bundle allows customers to reduce costs by up to 35 percent by eliminating the need for external storage to support application high availability. The bundle includes HP server, storage and networking technology as well as integrated virtualization software from HP and VMware. To take advantage of some of the most critical features in vSphere (HA, VMotion, SRM, Live Migration), customers need to implement highly available storage area networks (SANs). For many midsized customers this is a challenge, as adding external storage increases their hardware costs and impacts overall IT management. The HP Virtualization Bundles allow an organization to pool all of the storage that is either inside or directly attached to HP ProLiant G6 servers into one virtualized pool of storage. Combined with HP ProCurve data center networking solutions, this allows an organization to quickly implement a fully virtual infrastructure, with all of its benefits, using pre-tested building blocks.
To get more information, go to our announcement day page on hp.com.
(Editor's note: Carol is my colleague and is the category manager for StorageWorks with an SMB focus)
By Jim Haberkorn
I'm leaving this Friday for a one month holiday so I'm going to try and wrap up this NetApp usable capacity issue as much as I can over the next two days. I'll try to answer all the comments I can before I leave but some may have to wait until mid-January when I'm back.
So, the question of the day is: how does NetApp get away with this usable capacity issue? It's a long story but I'll try to keep it short. NetApp has a unique technology. Along with that uniqueness comes both strengths and weaknesses. Now, to make it clear, no one is saying that you can't fill up NetApp filers to 99% capacity if you want to. What's being said is that if you do, your filer's behavior is going to change dramatically from when it was first installed. Whether you notice the change in your environment or not depends on a number of factors, but the filer's behavior will change over time in the vast majority of environments. The issue is hard wired into their design.
The issue can be hard to pin down because it is partially tied to performance, and this factor tends to play out differently in NAS and SAN environments. So, if you listen carefully to the discussions on NetApp user groups you might notice that the people who are saying they haven't seen a problem are mainly NAS customers and the ones who are complaining are mainly from SAN environments. There's a reason for this.
In your average NAS environment, if your word.doc file took 3 seconds to download today and takes 5 seconds tomorrow, does anyone complain? No. Therefore, in many NAS environments, though the filer's performance may degrade by 40% or more over time for a variety of reasons, as long as it stays above the customer's pain threshold, no one even notices. In block/SAN/database environments it's a different story. In those environments, performance and maintaining free space on a NetApp filer are crucial. NetApp talks about adding free space for 'chaotic workloads' - a clever name that implies there is something aberrant about those environments. In fact, they are merely referring to the random workloads found in almost all SAN environments. Those environments need more free space, and when they don't get it, bad things happen: at the very least performance degrades, and if the filer should ever actually run out of free space, then the database crashes and the file systems and source LUNs become inoperable. To the best of my knowledge, this behavior does not happen with any other array on the market.
Skeptical? Check out the paragraph marked 'caution' on page 25 of http://media.netapp.com/documents/tr-3431.pdf. Or try this experiment with your NetApp filer; it will work 100% of the time. Start with a fresh NetApp filer with all the default settings in place and create a 40-spindle RAID-DP aggregate with a 1TB volume and a 500GB LUN. Then let IOmeter run random writes to the LUN for a few hours. The LUN will run fine, then suddenly start to throw millions of errors, then crash and be taken offline. The only way to prevent this is to drill down into the GUI and turn off the default auto-snap schedule.
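To see why the experiment above ends the way it does, here is a minimal Python sketch of the mechanism: once a snapshot is held, the first overwrite of each LUN block must preserve the old copy, so sustained random writes steadily consume the volume's free space. This is a deliberately simplified copy-on-write model for illustration, not NetApp's actual WAFL accounting:

```python
import random

# Simplified model of snapshot space accounting in a copy-on-write style
# volume. Units are GB-sized "blocks" for readability. This illustrates
# the mechanism only; it is not how WAFL actually tracks space.

VOLUME_BLOCKS = 1000      # 1 TB volume
LUN_BLOCKS = 500          # 500 GB LUN, fully space-reserved

used = LUN_BLOCKS         # the reservation consumes the LUN's blocks up front
snapshot_pinned = set()   # LUN blocks whose old copies the snapshot holds

# Take a snapshot, then issue random overwrites. The FIRST overwrite of
# any given block keeps the old copy for the snapshot AND writes a new
# copy, consuming one extra block of volume free space.
random.seed(1)
writes_until_full = 0
while used < VOLUME_BLOCKS:
    block = random.randrange(LUN_BLOCKS)
    if block not in snapshot_pinned:
        snapshot_pinned.add(block)
        used += 1
    writes_until_full += 1

print(f"Volume full after {writes_until_full} random writes; "
      f"{len(snapshot_pinned)} of {LUN_BLOCKS} LUN blocks pinned by the snapshot")
```

With a 500GB space-reserved LUN in a 1TB volume, the snapshot can pin at most one old copy per LUN block, so random writes eventually double the LUN's footprint and exhaust the volume, which is exactly the point where the errors begin.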
Now, I have actually read statements by NetApp bloggers that NetApp LUNs 'DO NOT' run out of free space. The actual quote I am referring to stated that 'LUNs' in NetApp filers don't run out of free space because of the LUN auto-grow and snapshot auto-delete features which NetApp added several years ago (and, I assume, also because of a rarely talked about 'automatic dismount of database' feature). But what about aggregates running out of free space? Could that happen? No? Then why does NetApp have a separate best practice for aggregate free space? The reality is that despite LUN auto-grow and snapshot auto-delete, LUNs can run out of space on NetApp filers. What if you run out of auto-grow space? What if you've got no more snapshots to delete? What if a customer isn't running snapshots? What if you are running dedupe on the primary volume and people change bytes in their files and the files un-dedupe themselves? What if the host application creates lots of files? One point to ponder: NetApp places a lot of emphasis on snapshots, especially for restores. To recommend deleting snapshots to solve a problem tells you something about the seriousness of the problem.
So how does NetApp hide all this? Well, one way is when they have a small SAN running on the same filer as a large NAS. With all that free space floating around, no one really tracks whether the SAN is being a free-space hog.
Another way is to throw disks at the problem - not to increase spindle count but to increase free space. That's an okay fix in my opinion, as long as customers are told about the issue upfront. Typically, though, it comes as a surprise. Either the customer grits his teeth and keeps buying more disks to constantly maintain the original free space levels, or, if they are really angry and were savvy enough to require performance guarantees (now there is a NetApp guarantee program with some teeth in it!), NetApp will give them the disks for free. It's a solvable problem in many cases, but it involves unforeseen costs for the customer in disks, power and floor space, and if they have to upgrade to bigger filers, then in software license fees as well. Trivia question: how many of you think EMC has the highest gross margins in the storage industry (among the major players)? They don't. In most quarters it's NetApp. Last time I checked, NetApp software gross margins were 96% and hardware was 48%. Note: if those numbers have plunged recently for NetApp, then I am open to being corrected.
-by Michael J. Callahan (Note: reposted to fix formatting problems)
1) You mention PolyServe as part of the solution--what exactly is PolyServe and what does it do?
PolyServe is a shared data clustering software product that runs on Linux or Windows. It provides three key capabilities:
- Cluster-wide storage access: a symmetric cluster file system and cluster volume manager that allows all the servers in the cluster to access files in a single shared pool of storage directly
- High-availability: automatic monitoring of hardware and software health, and failover in the event of problems, based on administrator-specified policies
- Cluster-wide administration of storage resources and software services
These core capabilities provide the foundation for the ExDS platform. The cluster file system allows all 4 to 16 blades in the system to access the data stored in all storage blocks simultaneously; this means there's no need to partition data manually among blades, makes it possible to scale out performance across multiple blades, and is the basis for fast fail-over. The high-availability and cluster-wide administration infrastructure make it easy to provide services such as NFS, CIFS or HTTP across the blades and to do so in a way that is resilient against hardware or software failure.
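To illustrate the difference a symmetric cluster file system makes, here is a toy Python sketch contrasting manual partitioning (each file owned by exactly one blade) with a shared namespace (any blade can serve any file). The function names and structure are hypothetical, invented for this example; this is not PolyServe code:

```python
# Toy model: partitioned NAS heads vs. a symmetric shared namespace.
# All names here are hypothetical illustrations, not PolyServe APIs.

def partitioned_owner(path, blades):
    """Scale-out WITHOUT a cluster file system: each file lives on
    exactly one blade, so ownership must be tracked, and losing a
    blade strands its files until they are manually recovered."""
    return blades[hash(path) % len(blades)]

def symmetric_owner(path, blades):
    """WITH a shared cluster file system, every blade sees the same
    namespace, so any healthy blade can serve any file; failover is
    just removing the dead blade from the serving pool."""
    return blades  # every blade is eligible

blades = ["blade1", "blade2", "blade3", "blade4"]
print(partitioned_owner("/exports/data/report.doc", blades))
print(symmetric_owner("/exports/data/report.doc", blades))

# Failover in the symmetric model: shrink the pool; no data migration.
surviving = [b for b in blades if b != "blade2"]
print(symmetric_owner("/exports/data/report.doc", surviving))
```

The point of the contrast is the failover step: in the partitioned model, a dead blade's files need remapping or recovery, while in the symmetric model the file is already visible to every surviving blade.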
In fact, while I mention services such as NFS, CIFS and HTTP, PolyServe can support many different kinds of applications running on top of it. In the past, we've built solutions on top of the core PolyServe technology for building consolidated, scalable, highly available file servers (which ExDS will incorporate) and consolidated, easily managed, mission-critical database clusters.
However, it's also possible for users to put their own applications on top of a PolyServe environment. The PolyServe file system looks just like any other file system from the application perspective and uses the same APIs. For ExDS, this means it is possible to run user-provided applications on the blades within the system. These applications can easily be made highly-available using the mechanisms I mention above. I'll talk more about this capability in a future post.
2) Why does the ExDS use Linux as its operating system, rather than Windows Storage Server?
As I described above, the PolyServe software itself runs on both Linux and Windows. In fact, we've done a lot of work specifically targeting the Windows environment, with a particular focus on supporting Microsoft SQL Server about which I also hope to blog in more depth in future. However, we've found that the majority of customers seeking very large-scale storage solutions like ExDS are deploying services on Linux, so that's the platform we've used for ExDS.
3) Why does ExDS use new 'storage blocks' rather than traditional arrays like MSA or EVA?
As I mentioned in my first post, one of the key design principles for ExDS is to be as cost-, space- and power-efficient as possible while delivering a very large amount of storage. The hardware we're using for the storage block is strongly optimized for these attributes: for example, each storage block provides 82 1TB drives in only 7 rack units of space, which is far denser than traditional arrays.
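For a sense of scale, the quick arithmetic below works out the storage block's density per rack unit from the figures quoted above; the "traditional array" shelf used for comparison is an illustrative assumption, not a specific product:

```python
# Density of the ExDS storage block: 82 x 1 TB drives in 7 rack units.
exds_drives, exds_tb_per_drive, exds_ru = 82, 1, 7
exds_tb_per_ru = exds_drives * exds_tb_per_drive / exds_ru
print(f"ExDS storage block: {exds_tb_per_ru:.1f} TB per rack unit")

# Assumed traditional array shelf for comparison: 12 x 1 TB LFF drives
# in 3U. These shelf figures are an illustrative assumption.
trad_tb_per_ru = 12 * 1 / 3
print(f"Assumed traditional shelf: {trad_tb_per_ru:.1f} TB per rack unit")
print(f"Density advantage: {exds_tb_per_ru / trad_tb_per_ru:.1f}x")
```

Even against a fairly dense conventional shelf, packing 82 drives into 7U works out to roughly three times the capacity per rack unit.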
Thanks for the questions!
Michael Callahan, Chief Technologist
HP StorageWorks Scalable NAS Division