Flash memory was once viewed as a special tool to improve performance or to move information around easily (e.g., the thumb drive – I can’t recall the last time I gave someone a CD, let alone a floppy disk). Now flash memory devices are a standard component of any storage performance strategy.
When the Solid State Drive (SSD) came on the scene, it was used as a drop-in replacement for spinning-media hard drives, providing better performance, but the characteristics of an SSD are actually quite different. The storage industry has only recently started to design storage systems that take advantage of the differences in flash memory.
The Flash Translation Layer (FTL) translates the typical hard drive block-device commands and structure into comparable operations in flash memory. The FTL is really a compromise for compatibility, since flash has no need for the block-and-sector structure. Additionally, SSD controllers must perform a number of extra functions such as garbage collection, wear leveling, and error correction, while minimizing write amplification, since the writable life span of each flash storage cell is limited (although there is discussion of a cure for this long-time flash illness). We’re going to see more applications that skip the FTL and take advantage of flash’s memory-like access directly.
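To make that translation concrete, here is a minimal, hypothetical sketch of a page-mapped FTL (in Python, with illustrative names only – not any real SSD firmware): flash pages cannot be overwritten in place, so every logical write lands on a fresh page, the old page is merely marked stale, and garbage collection later has to erase it – which is where write amplification comes from and why wear leveling matters.

```python
# Minimal, hypothetical sketch of a page-mapped Flash Translation Layer.
# Logical block addresses (LBAs) map to physical flash pages; flash cannot be
# overwritten in place, so every write claims a fresh erased page and the old
# page is left stale for garbage collection. All names are illustrative.

class TinyFTL:
    def __init__(self, num_pages):
        self.free_pages = list(range(num_pages))  # erased, writable pages
        self.mapping = {}                         # LBA -> physical page
        self.stale = set()                        # pages awaiting erase
        self.host_writes = 0
        self.flash_writes = 0

    def write(self, lba, data):
        self.host_writes += 1
        old = self.mapping.get(lba)
        if old is not None:
            self.stale.add(old)           # out-of-place update, old page is stale
        page = self.free_pages.pop()      # claim a fresh erased page
        self.mapping[lba] = page
        self.flash_writes += 1            # garbage collection would add further
                                          # flash writes, i.e. write amplification

    def write_amplification(self):
        return self.flash_writes / max(self.host_writes, 1)

ftl = TinyFTL(num_pages=1024)
for _ in range(3):
    ftl.write(lba=7, data=b"x")           # same LBA, three different flash pages
print(len(ftl.stale), ftl.write_amplification())   # -> 2 stale pages, WA of 1.0 so far
```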
High-performance software such as databases already circumvents the operating system’s file system to attain optimal performance. Modern file systems such as Write Anywhere File Layout (WAFL), ZFS (which once stood for the Zettabyte File System), and the B-tree file system (Btrfs) are designed to take advantage of the capabilities of the various storage media. The resulting systems are more efficient and easier to manage.
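As an illustration of that circumvention, the sketch below shows Linux direct I/O from Python – the same page-cache bypass many database engines rely on. The file path and block size are placeholders, and O_DIRECT is Linux-specific and requires block-aligned buffers, offsets, and lengths.

```python
import mmap
import os

BLOCK = 4096  # O_DIRECT requires block-aligned buffer, offset, and length

# Open with O_DIRECT so reads and writes bypass the kernel page cache entirely,
# letting the application manage its own caching. Linux-specific flag.
fd = os.open("/tmp/direct_io_demo.bin",
             os.O_RDWR | os.O_CREAT | os.O_DIRECT, 0o644)

# An anonymous mmap gives us a page-aligned buffer, which O_DIRECT demands.
buf = mmap.mmap(-1, BLOCK)
buf[:12] = b"hello, flash"

os.pwrite(fd, buf, 0)   # write one aligned block, skipping the page cache
os.close(fd)
```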
Storage system performance was already a concern when operations were measured in milliseconds. It matters even more with flash devices, whose operations are measured in microseconds. Future technologies like the memristor, which will be faster still, demand an optimized approach to the long-term storage and access of information. Compromises made for convenience will still exist, but the performance penalties will be high, affecting the application portfolios of organizations.
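A back-of-the-envelope comparison makes the point. The latencies below are assumed, round, order-of-magnitude numbers rather than measurements of any particular device, but they show how far the ceiling on serial operations moves as the media gets faster.

```python
# Illustrative, order-of-magnitude service times for one outstanding request.
# These are assumed round numbers, not benchmarks of any specific device.
latencies = {
    "spinning disk":   5e-3,    # ~5 ms of seek plus rotation
    "flash SSD":       100e-6,  # ~100 microseconds
    "memristor-class": 1e-6,    # ~1 microsecond (projected)
}

for medium, seconds in latencies.items():
    ops_per_second = 1 / seconds
    print(f"{medium:>15}: {ops_per_second:>12,.0f} serial ops/sec")
```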
The HP Moonshot System is a leap forward in infrastructure design that addresses the speed, scale, and specialization needed for a bold, new style of IT.
HP ProLiant Moonshot servers are designed and tailored for specific workloads to deliver optimum performance. The servers share management, power, cooling, networking, and storage. This architecture is key to achieving 8x efficiency at scale, enabling a 3x faster innovation cycle, and bringing thousands of cores to bear on projects. It uses 86% less energy and 80% less space, costs 77% less, and is significantly less complex to install and maintain.
After talking with other technologists, I believe that it is the start down a path that will change both how software is written and how solutions are envisioned. When I look at the initial product data sheet, I see a 4.3U chassis that can hold up to 45 server cartridges. As processing capability improves, so can the cartridges. A full rack of these can replace the computational capability of a whole data center from just a few years ago. Granted, it excels at certain types of computing needs.
As the HP Pathfinder Innovation Ecosystem improves and continues to bring together leading partners, a broader set of problems can be addressed.
This means having access to the latest technology and solutions at a groundbreaking time-to-market pace measured in months rather than years. I can’t wait to see what the next big thing to spring forth from this will be.
I recently came across an HP Labs video conveying one researcher’s excitement about the next wave of developments in computing and gathering information.
It shows some of the efforts to be more efficient and yet more powerful. Innovation’s role is in resolving conflicts like this, and that’s exciting.
The whole industry is at a tipping point where new generations of capability will arrive simultaneously for computing, storage, networking, and sensing… which should allow a novel dimension of applications and services to take advantage of the new abilities and generate new levels of business value.
I am old enough to remember when the first PC landed on my desk, as well as my first laptop and smartphone… now they are an assumed part of work. It takes more than new technology to differentiate an organization.
I mentioned that I was giving a presentation this week at the New Horizons Forum at the AIAA conference. Since it may provide some useful insight into the research underway at HP Labs in a larger context, here is the content of one slide from that presentation:
1 datum is a point
2 data are a line
3 data are a trend
100 data are a picture
Having sensors to generate the data that fuels a more proactive business is important, but there is more to sensing than the sensors and the data collected. A holistic ecosystem view is needed. Unfortunately, this means that the tools of today may not be up to the tasks required.
You may have heard about HP’s efforts to place a million-node sensor network in the ground for Shell, gathering seismic information. Traditionally, this kind of information was just a flash of perspective taken in the dark from a few locations. Instead, this sensing effort with Shell generated a much more fine-grained view, taken from a myriad of angles, to understand in depth what was underground.
In order to implement the system, HP not only had to invent the sensors (relatively cheap and yet very sensitive MEMS devices), but also had to create the networking and management techniques to make them useful. Building upon what we’ve learned, we’ve been researching whole new approaches to information storage and computation that will be required to generate value from massive amounts of information.
HP has many of the foundational patents on memristor devices and sensing techniques, and we should soon see the shift in storage and computing that the implementation of these techniques should enable. The whole concept of computing will likely need to bow to the onslaught of information from sensing and the related metadata, changing how information is transferred within the computing environment – shifting from computing on bits to analyzing information in graphs on highly parallelized computing platforms such as Cog Ex Machina.
In addition, research is underway to understand how the analysis, automation, and display of information can be improved. New techniques can be applied to focus attention on the areas needing the creativity that people can provide.
In the marketplace, last year was the year of Big Data as a buzzword, with its primary focus on generating insight from the massive amounts of information being collected. Frankly, that will not be enough for the future envisioned – we need to shift the focus from insight to time-to-action, and that is what many of our research efforts underway will enable.