By Melissa Saegert Elicker, editor, HP Storage
It’s that time of year – time to take a minute to look ahead and see what might be coming down the road in 2013. I was lucky enough to talk with some of our storage gurus about their predictions for the coming year... and beyond. Here’s what they think about some of the hot storage topics out there. Some observations you might expect. Others you might not. See what our experts have to say—and let us know what you think is heading our way this new year.
1. Software-defined storage
Looking into 2013, we see greater interest in software-defined storage. That is, storage characterized by software that lets you support traditional and emerging enterprise application storage workload requirements, yet can also be deployed on independent, off-the-shelf, industry-standard hardware. Software-defined storage will be an increasingly key enabler for converged infrastructure, allowing applications and underlying storage services to share hardware resources.
The conversation about software-defined storage and the software-defined data center is definitely ramping up. Alistair Veitch, HP Fellow and director, Storage and Information Management Platforms, HP Labs, sees two factors driving this trend: the increased processing power of commodity hardware and the realization that you can remove a lot of complexity by investing in control layers and data services based in software.
“I see more people emphasizing what they are doing in this space and looking at how they can combine different software services into storage,” he says.
Milan Shetti, vice president and CTO, HP Storage, agrees. “Whether we’re talking about software-defined storage, networking or data center, the fact that this approach allows customers to use standard off-the-shelf hardware is really driving the trend.”
2. Big data - data management and analytics
Big data will continue to be a hot topic as information growth continues to increase dramatically. Evolving from its roots in the analytics space, big data means different things depending on whom you talk to. From our view, we’ll broaden the big data topic to refer to the explosion in unstructured data. This growth is driving two trends, Veitch thinks.
One is a rise in the software that lets you store, process or analyze big data in a multitude of customized ways. The second is the consolidation of legacy platforms coupled with the emergence of new scale-out platforms with more embedded functionality to efficiently deal with content at massive scale.
Consider the fact that the cost per GB of storage has gone down steadily, while the total amount invested in storage has moved in the opposite direction. Platforms and software are evolving to help you deal with that data more intelligently, cost-effectively and manageably, while at the same time increasing the ability to derive value from the information. “It’s a natural evolution, where some platforms thrive and others disappear,” says Veitch.
Fernando Lucini, chief architect, Autonomy, brings the big data discussion into the realm of information retention and governance, where companies are increasingly unhappy with the manual, complex process to deal with regulatory requirements or legal discovery actions.
“The trend we see is customers looking for tighter integration between the physical storage and the software to eliminate as much manual overhead as possible,” he says.
This discussion also extends to the need for dealing with data management at a massive scale from the production, protection, archive and disposition perspective, as well as the need to enable applications such as analytics.
3. Intelligent storage
As a natural corollary to the big data trend, Lucini predicts that bringing intelligence into storage is what customers want. In many cases, this means pulling data services from application software closer to the physical storage layer.
“What if it takes three weeks to go through billions of files with an analytics tool? And what if instead the storage layer could tell you what’s new in five seconds,” he says. “CIOs will be demanding more from storage in regards to intelligence, recovery and data protection.”
It’s fair to say that storage, or the storage interface, has traditionally not been that smart: give it a block of data and it stores it for you. What’s interesting now, according to Veitch, is the move to make better use of data through richer metadata, along with greater integration of policy management and information-tiering software.
“HP is differentiating our systems as we add more services on top of the existing storage,” he says. “For example, HP Express Query with HP StoreAll integrates search into storage and in turn into meaning-based computing engines such as Autonomy.”
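The performance argument for building search into the storage layer comes down to indexing metadata at write time rather than scanning files at query time. This toy sketch illustrates the general idea only; the class and method names are hypothetical and bear no relation to the actual HP Express Query or StoreAll interfaces.

```python
from collections import defaultdict

class IndexedStore:
    """Toy storage layer that maintains a metadata index at write time."""

    def __init__(self):
        self._objects = {}                 # path -> data
        self._by_tag = defaultdict(set)    # tag -> paths carrying that tag
        self._version = {}                 # path -> write sequence number
        self._seq = 0

    def put(self, path, data, tags=()):
        self._seq += 1
        self._objects[path] = data
        self._version[path] = self._seq
        for tag in tags:                   # index maintained inline with the write
            self._by_tag[tag].add(path)

    def query_tag(self, tag):
        # Answered from the index: cost scales with matches, not total file count
        return sorted(self._by_tag[tag])

    def newer_than(self, seq):
        # "What's new?" answered from metadata, with no filesystem scan
        return sorted(p for p, s in self._version.items() if s > seq)

store = IndexedStore()
store.put("/logs/app.log", b"...", tags=["log"])
store.put("/docs/report.pdf", b"...", tags=["report"])
print(store.query_tag("log"))     # ['/logs/app.log']
print(store.newer_than(1))        # ['/docs/report.pdf']
```

Because the index is updated as data arrives, a query like “what changed since yesterday?” never has to walk billions of files, which is the gap Lucini’s three-weeks-versus-five-seconds contrast points at.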
Veitch does see some potential for tension between storage vendors and ISVs looking to offer these kinds of services. “There will be a meeting in the middle at some point,” he observes.
4. Solid-state disk, flash and non-volatile memory
Shetti predicts that flash will continue to be hot in 2013, both within SSDs and as a memory technology. “Interestingly, flash is really not displacing or substituting anything. It’s just another tier in the memory hierarchy,” he says.
Expounding on the flash trend, Siamak Nazari, HP Fellow and architect for HP 3PAR StoreServ Storage, focuses on what having flash cache on the server itself will mean for the nature of storage. “Going forward, we can’t ignore the fundamental desire for resiliency to server failure,” he says. “That’s why we’ll continue to see external storage in the mix. No company can afford to get into a situation where it has no external storage.”
This increasingly nuanced memory hierarchy reveals that the nature of storage media is going to continue to disrupt physical storage architectures that are not able to deal with the unique requirements of flash. Granular page sizes, rapid network paths and the ability to accelerate certain data services via hardware will play an important role moving forward.
Flash cache in servers, coupled with increased cache in storage platforms and SSD-optimized array platforms, enables unheard-of levels of performance and is going to force the continued convergence of servers and storage.
Veitch notes that discussions on the advantages and disadvantages of flash seem to quickly turn to non-volatile memory technologies such as memristor.
“We need new APIs to use this, with some change, but not radical top-to-bottom change. We will have compatibility with old APIs so you can still manipulate files or objects. There will be more talk on this in 2013 and 2014 and real impact by 2015,” he says.
5. Object storage and the cloud
Everything toward the cloud? Lucini thinks that’s where things will continue to head, with more enterprise software made for the cloud. “It is everything toward the cloud—with cloud fast becoming the strongest purchasing option for customers,” he says.
When it comes to cloud computing, Shetti expects much more integration between public and private cloud, with greater data mobility between the two. This requires standard interfaces such as those promoted by OpenStack, as well as an increasing amount of data being accessed via REST API commands and object interfaces rather than traditional block or file access methods.
“Customers may choose to test an app or create something new on the public cloud but will at some point want to bring the app and data set back into the data center, so the need for a ready on-and-off ramp between public and private cloud will become increasingly prevalent,” Shetti says.
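The shift Shetti describes is from addressing numbered blocks to addressing named objects that carry their own metadata, which is what makes data portable between clouds. The sketch below contrasts the two access styles with deliberately minimal in-memory toy models; these classes are illustrative only and are not any vendor’s or OpenStack’s actual interface.

```python
class ObjectStore:
    """Toy object-style interface: whole objects addressed by key, with metadata
    (the access pattern behind S3/Swift-style REST APIs)."""

    def __init__(self):
        self._data = {}

    def put(self, key, body, metadata=None):
        self._data[key] = (bytes(body), dict(metadata or {}))

    def get(self, key):
        return self._data[key]          # whole object plus metadata in one call


class BlockDevice:
    """Toy block-style interface: fixed-size blocks addressed by number."""

    BLOCK = 512

    def __init__(self, nblocks):
        self._blocks = [bytes(self.BLOCK)] * nblocks

    def write(self, lba, data):
        assert len(data) == self.BLOCK  # caller owns layout, naming, structure
        self._blocks[lba] = bytes(data)

    def read(self, lba):
        return self._blocks[lba]


obj = ObjectStore()
obj.put("reports/q4.csv", b"a,b\n1,2\n", metadata={"owner": "finance"})
body, meta = obj.get("reports/q4.csv")

blk = BlockDevice(8)
blk.write(0, b"\x00" * 512)   # no names, no metadata: just numbered blocks
```

The object interface is self-describing, so an application can move an object between a public and a private cloud without knowing anything about the underlying disk layout; with block access, that knowledge lives in the filesystem or application above the device.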
6. Enterprise sync and share: storage and mobility
This is one trend that’s here to stay, with people wanting seamless environments across home and work, desktops, phones and tablets. Whatever you are doing, wherever you are, you want to be able to pick up in one place what you were working on in another, without sending data back and forth.
“We’ll see businesses responding to user demand for data storage services similar to Dropbox,” Veitch says. “And it will drive the need for addressing concerns around information governance and security.”
Many enterprises are seeing this issue today as they recognize that employees are using public cloud providers to exchange large files or move data from disconnected devices to those on the corporate network. Today’s workforce is a generation used to ubiquitous access to peers and data; this trend is all about sharing.
The challenge for IT is to provide an environment that promotes sharing to increase productivity and lower costs but at the same time maintains control of corporate data. The emergence of enterprise sync-and-share applications and the corresponding back-end storage infrastructure will accelerate over the next year.
7. The data center network of the future
With the increasing adoption of virtualization in data centers, we’ve seen a substantial change in the type of traffic traversing the corporate network. More data traveling within a rack versus out to the edge has forced changes in the traditional multi-layer networks of the past. Flat Ethernet and Flat SAN architectures are impacting how IT deploys new workloads. We are also seeing software-defined networks becoming an increasingly prominent part of the data center conversation.
For storage, it’s traditionally been Fibre Channel providing the network connecting the host to storage. But what about tomorrow?
Nazari points out that as long as the storage device was mechanical, you did not need to worry about the latency of Fibre Channel. But with SSDs, throughput is high, latency is low, and data reading is electronic. So what does that mean for the storage fabric of the future?
“Do we look at adopting a technology like InfiniBand, or do we go to a new fabric?” he asks. “And does Fibre Channel evolve to take on the properties it needs to? These are questions we’ll see asked more frequently in the near future.”
8. Storage management orchestration
Management of complex infrastructure is something that has long been a pain point for IT administrators. The search for the mythical single pane of glass is something that vendors often talk about.
Virtualization and any kind of IT-as-a-Service have driven a generalization of IT expertise as well as different expectations related to provisioning and application mobility. A large reason for moving towards service-oriented IT models is to accelerate provisioning and simplify management through the abstraction of underlying resources.
The challenge, according to Jamey Robbins, director for HP’s converged management engineering efforts, is balancing the needs of specific management domains with the generalized orchestration that characterizes many cloud computing implementations.
“Managing data services, resiliency and capacity is the language of storage. We need to drive autonomic capabilities into the platform itself, enable more ubiquitous storage management, and provide a seamless quality-of-service to users, to the cloud, or to converged data center environments.”
Additionally, these tools need to be available where, when and how customers want them: on tablets, on smartphones and, yes, in that single pane of glass for data center users.
9. Quality of service
And speaking of quality of service... this is something that’s always out there on a roadmap slide. Moving ahead, it’s time to deliver on it, with policies under which certain apps or groups of applications get fixed bandwidth with a guaranteed SLA.
Shetti runs through this scenario: With an app for a multi-tenant cloud user, you would want to set specific policies around capacity and I/O performance—saying that this app needs hundreds of I/Os per second at a certain latency. The expectation is that these policies be set in the storage product so that the system administrator does not need to keep reclassifying data. For example, backup generally happens at night, when workloads are lighter. So why not set up a policy that takes advantage of the extra bandwidth and storage available at night?
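Shetti’s scenario can be sketched as a simple policy table keyed by application and time of day. This is a minimal illustration of the idea only; the class and field names are hypothetical and are not the HP 3PAR QoS interface.

```python
from dataclasses import dataclass

@dataclass
class QosPolicy:
    app: str
    min_iops: int          # guaranteed floor for this app
    max_latency_ms: float  # latency target the array should honor

# Daytime: the OLTP app gets the guarantee, backup is throttled.
DAY_POLICIES = {
    "oltp":   QosPolicy("oltp",   min_iops=500, max_latency_ms=5.0),
    "backup": QosPolicy("backup", min_iops=50,  max_latency_ms=100.0),
}

# At night workloads are lighter, so backup gets the bandwidth instead.
NIGHT_POLICIES = {
    "oltp":   QosPolicy("oltp",   min_iops=200, max_latency_ms=10.0),
    "backup": QosPolicy("backup", min_iops=800, max_latency_ms=50.0),
}

def active_policy(app, hour):
    """Select the policy automatically -- no admin reclassification needed."""
    table = NIGHT_POLICIES if hour >= 22 or hour < 6 else DAY_POLICIES
    return table[app]

print(active_policy("backup", hour=23).min_iops)  # 800: backup opens the pipe at night
print(active_policy("backup", hour=10).min_iops)  # 50: throttled during the day
```

The point of the sketch is the selection function: once the policies live in the storage layer, the switch between day and night profiles happens without any administrator intervention, which is the “automatic SLA” Shetti describes.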
“Quality of service opens up the pipe. This kind of automatic SLA—without system admin intervention—is a key component moving into 2013,” says Shetti. “Some of the older vendors may be suffering in this area. But it’s where HP 3PAR Storage is ahead with improvements in our latest releases.”
10. Polymorphic simplicity
The dictionary defines polymorphic in biology as something that occurs in many forms, shapes, and sizes. Storage architectures have become highly fragmented over the last 20 years as different companies each tackled a different piece of the problem landscape.
While adequate when seen in isolation, these disconnected parts, when brought together, create gridlock for management and are extraordinarily expensive to acquire and maintain.
As a core design attribute of HP Converged Storage, polymorphism means a given storage technology can be applied pervasively in a number of different forms, shapes and sizes. This is the opposite of the complex, divergent approach that exists in many data centers today, where multiple technologies try to solve the same problem, each in a discordant way.
Veitch provides an example from work in HP Labs. “The deduplication algorithms that we developed exist as embedded code in our storage systems, in our backup software and distributed across a set of hardware nodes,” he says. “It was one technology that spawned many interrelated forms.”
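Deduplication of the kind Veitch mentions rests on one core mechanic: store each unique chunk of data once, keyed by a content hash, and represent files as recipes of chunk references. The sketch below shows that general technique with fixed-size chunks; it is not HP’s actual algorithm, and a tiny chunk size is used purely for illustration.

```python
import hashlib

CHUNK = 4  # unrealistically small chunk size, chosen to make the example visible

class DedupStore:
    """Minimal content-hash deduplication: each unique chunk is stored once."""

    def __init__(self):
        self.chunks = {}   # SHA-256 digest -> chunk bytes

    def write(self, data):
        """Split data into chunks; return the recipe of digests."""
        recipe = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)   # skip if already stored
            recipe.append(digest)
        return recipe

    def read(self, recipe):
        """Reassemble the original data from its recipe."""
        return b"".join(self.chunks[d] for d in recipe)

store = DedupStore()
recipe = store.write(b"AAAABBBBAAAA")       # the "AAAA" chunk appears twice
assert store.read(recipe) == b"AAAABBBBAAAA"
print(len(store.chunks))                    # 2 unique chunks stored, not 3
```

Because the logic is just hashing and a lookup table, the same technique can live in an array’s firmware, in backup software or spread across a cluster of nodes, which is what makes it a natural example of the polymorphic design this section describes.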
With the massive growth in data and the increasingly complex network of applications both on-premises and in the cloud, it will become critically important to apply polymorphic design principles to other storage domains in order to solve today’s challenges and adapt to meet tomorrow’s needs without adding another storage silo.
More predictions and trends
Our storage experts continue the discussion: