The Next Big Thing
Posts about next generation technologies and their effect on business.

Will we need to update Moore’s law with the coming innovations...?

I was in an exchange with someone this week about technology advances and some of the exponential ‘laws’ of IT. One of those is Moore’s law of transistor density. It is definitely useful, but does transistor density still have the meaning or breadth of relevance it used to?


For storage, it can take several transistors to store a one or a zero. But the memristor, and some of the other technologies that will someday compete with today’s memory circuits, do not use transistors at all. Will we need to move to a density of 'holding either a zero or a one' instead?
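As a rough, hedged illustration of that shift in metric (the device counts below are textbook-style approximations, not figures from any vendor), a quick sketch in Python:

    # Back-of-envelope comparison of roughly how many devices it takes to hold
    # one bit in different memory structures. These counts are common textbook
    # approximations, used here only to illustrate the point.
    devices_per_bit = {
        "SRAM cell (6T)": 6,         # six transistors per bit
        "DRAM cell (1T1C)": 2,       # one transistor plus one capacitor
        "NAND flash cell (SLC)": 1,  # one floating-gate transistor per bit
        "Memristor crossbar": 1,     # one two-terminal element per bit, no transistor
    }

    for technology, devices in devices_per_bit.items():
        print(f"{technology:22s} ~{devices} device(s) per stored bit")

Counting transistors tells us less and less once the storage element itself is no longer a transistor.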


Storage is just the start of the shifts in computing circuits that are likely.

What's the future of data center efficiency?


I was in an interesting discussion today with a couple of people writing a paper for a college class on the efficiency trends for data centers going forward. Although this is not necessarily my core area of expertise, I always have an opinion.


I suggested there are a few major trends in tackling the data center efficiency issue:

1) The data center:

  • For existing data centers, ensure that the hot/cold aisle approach is used wherever possible to maximize the efficient flow of air to cool the equipment.
  • For new data centers, look to place them in areas that take advantage of the natural environment to make them more efficient. This was done with the Wynyard data center at HP. It is also why organizations look to move data centers to Iceland (why else would you place a data center on top of a volcano?).
  • There is also the perspective of “Why have a traditional data center at all?” Why not go with a containerized approach, like the HP EcoPod?

2) For the hardware within the data center, there are also a number of approaches (here now or on the horizon):

  • Going DC-only inside the walls of the data center. Every voltage step-down is an opportunity for lost efficiency, so minimize where and how conversions take place (a rough sketch of the cumulative effect follows this list).
  • Using the appropriate processor architecture for the job. This is what the Moonshot effort is trying to address. Don’t waste cycles through underutilization.
  • Why waste all the power spinning disk drives that are rarely accessed? One of the valuable applications of memristor technology is to provide high-performing yet very power-efficient data storage and access. It’s not on the market yet, but it is coming soon.
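To make the point about voltage conversions in the DC bullet above concrete, here is a minimal sketch; the per-stage efficiencies are illustrative assumptions, not measurements from any particular facility:

    # Cumulative efficiency of a power-delivery chain: every conversion stage
    # multiplies in its own loss. The stage efficiencies are made-up but
    # plausible values, purely for illustration.
    from math import prod

    chains = {
        "Traditional AC chain (UPS, PDU, server PSU, VRM)": [0.96, 0.94, 0.92, 0.90],
        "DC-only chain (rectify once, fewer steps)":        [0.96, 0.95, 0.92],
    }

    for name, stage_efficiencies in chains.items():
        delivered = prod(stage_efficiencies)
        print(f"{name}: {delivered:.1%} of input power reaches the silicon")

Fewer conversion stages means fewer places to lose power before it ever does useful work.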

I am sure there are a number of other areas I didn’t talk with them about. What are they?


One thought I had while writing this, a bit different from the goal of their paper but important to business, is the role of application portfolio management. Why waste energy on applications that are not actually generating value?


Solid state storage and our future

Flash memory was once viewed as a special tool to improve performance or allow for easy transportation of information (e.g., the thumb drive – I can’t recall the last time I gave someone a CD, let alone a floppy disk). Now flash memory devices are a standard component of any storage performance strategy.


When the Solid State Drive (SSD) came on the scene, it was used as a drop-in replacement for spinning-media hard drives, providing better performance, but the characteristics of an SSD are actually quite different. The storage industry has only recently started to design storage systems that take advantage of those differences in flash memory.


The Flash Translation Layer (FTL) translates the typical hard drive block-device commands and structure into comparable operations in flash memory. The FTL is really a compromise for compatibility, since flash has no need for the block and sector structure. Additionally, SSD controllers must perform a number of extra functions such as garbage collection, wear leveling, and error correction, and must manage write amplification, since the writable life span of each flash storage cell is limited (although there is discussion of a cure for this long-time flash ailment). We’re going to see more applications that skip the FTL and take advantage of flash’s memory-like access capabilities directly.
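As a hedged back-of-envelope illustration of why those controller functions matter, here is a sketch of how write amplification eats into the limited program/erase cycles of the cells; every input below is an assumption for illustration, not a specification of any real drive:

    # Rough endurance estimate for a flash device. All numbers are illustrative
    # assumptions: capacity, P/E cycle rating, workload, and write amplification
    # vary widely across real products.
    capacity_gb         = 480      # usable capacity
    pe_cycles           = 3_000    # program/erase cycles each cell can tolerate
    host_writes_gb_day  = 50       # data the applications write per day
    write_amplification = 3.0      # extra flash writes caused by garbage collection

    total_writable_gb  = capacity_gb * pe_cycles
    flash_writes_daily = host_writes_gb_day * write_amplification
    lifetime_years     = total_writable_gb / flash_writes_daily / 365

    print(f"Estimated media lifetime: ~{lifetime_years:.0f} years "
          f"(halving write amplification roughly doubles it)")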


High-performance software such as databases currently circumvents the operating system’s file system to attain optimal performance. Modern file systems such as the Write Anywhere File Layout (WAFL), ZFS (which once stood for the Zettabyte File System), and the B-tree file system (Btrfs) are designed to take advantage of the capabilities of the various storage media. The resulting systems are more efficient and easier to manage.
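One concrete (and Linux-specific) way software sidesteps the operating system’s caching today is direct I/O. The sketch below is a minimal, hedged example; the file path is hypothetical, and production code would add error handling and larger, device-aligned transfers:

    # Minimal direct-I/O sketch for Linux: O_DIRECT bypasses the page cache,
    # one of the techniques database engines use so they can manage caching
    # themselves. O_DIRECT requires block-aligned buffers; an anonymous mmap
    # is page-aligned, which satisfies the common 4 KiB requirement.
    import mmap
    import os

    BLOCK = 4096
    fd = os.open("/var/data/sample.dat", os.O_RDONLY | os.O_DIRECT)  # hypothetical file
    buffer = mmap.mmap(-1, BLOCK)            # page-aligned scratch buffer
    bytes_read = os.preadv(fd, [buffer], 0)  # read one block, skipping the cache
    os.close(fd)
    print(f"Read {bytes_read} bytes without touching the page cache")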


Storage system performance was a concern when operations were measured in milliseconds. It matters even more on flash devices, whose operations are measured in microseconds. Future technologies like the memristor, which will be faster still, demand an optimized approach to the long-term storage and access of information. Compromises for convenience will exist, but the performance penalties will be high, impacting the application portfolios of organizations.
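A quick, hedged bit of arithmetic shows why those penalties grow: as the media gets faster, a fixed amount of software overhead becomes the dominant cost of each operation. The latencies below are order-of-magnitude assumptions, not benchmarks:

    # Share of each I/O operation consumed by a fixed software-stack overhead
    # (file system, block layer, translation layers). All latencies are
    # order-of-magnitude assumptions for illustration.
    software_overhead_us = 20.0

    media_latency_us = {
        "Spinning disk":            5_000,  # milliseconds per operation
        "Flash SSD":                  100,  # microseconds per operation
        "Memristor-class memory":       1,  # projected, faster still
    }

    for media, latency in media_latency_us.items():
        share = software_overhead_us / (latency + software_overhead_us)
        print(f"{media:24s} software overhead is {share:.0%} of each operation")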

Researching shifts in computing in a data-abundant world

I mentioned that I was giving a presentation this week at the New Horizons Forum at the AIAA conference. Since it may provide some useful insight into the research underway at HP Labs in a larger context, here is the content of one slide from that presentation:

1 datum is a point

2 data are a line

3 data are a trend

100 data are a picture


Having sensors to generate the data that fuels a more proactive business is important, but there is more to sensing than the sensors and the data collected. A holistic ecosystem view is needed. Unfortunately, this means that the tools of today may not be up to the tasks required.


You may have heard about HP’s efforts to place a million-node sensor network in the ground for Shell, gathering seismic information. Traditionally, this kind of information was just a flash of perspective taken in the dark from a few locations. Instead, this sensing effort with Shell generated a much more fine-grained view, taken from a myriad of angles, to understand in depth what was underground.


In order to implement the system, HP not only had to invent the sensors (relatively cheap and yet very sensitive MEMS devices), but also had to create the networking and management techniques to make them useful. Building upon what we’ve learned, we’ve been researching whole new approaches to information storage and computation that will be required to generate value from massive amounts of information.


HP has many of the foundational patents on memristor devices and sensing techniques, and we should soon see the shift in storage and computing that the implementation of these techniques should enable. The whole concept of computing will likely need to bow to the onslaught of information from sensing and the related metadata, changing how information is transferred within the computing environment and shifting from computing on bits to analyzing information as graphs on highly parallelized computing platforms such as Cog Ex Machina.


In addition, research is underway to understand how information can be analyzed, automated and displayed. New techniques can be applied to focus attention on the areas needing the creativity that people can provide.


In the marketplace, last year was the year of Big Data as a buzzword, with its primary focus on generating insight from the massive amounts of information being collected. Frankly, that will not be enough for the future envisioned. We need to shift the focus from insight to time-to-action, and that is what many of our research efforts underway will enable.

New Horizons for taking IT lessons into Aerospace

Next week, I am going to be part of a panel at the New Horizons Forum, part of the American Institute of Aeronautics and Astronautics (AIAA) conference. The panel is focused on: Information Technology – Spin-On Technology for Aeronautics and Space. Essentially, it is about leveraging IT efforts in aeronautics and space.


I’ll have a few minutes to present, where I’ll focus on the shift in how we will use people and computing in the future and talk through some examples. Many organizations have so much data coming in that there is no way they can effectively consume it using today’s techniques.


I’ll use a simple model to show how things are changing and where HP Labs is focusing its research. The model is that one datum is a point, two data are a line, and three are a trend; 100 data points are a picture or pattern. When you can move beyond calculations at the bit level and actions in isolation, whole new levels of capability open up.
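To make the slide’s progression a little more concrete, here is a small sketch with synthetic data (the numbers are made up for illustration): two points fix a line, and a hundred noisy points let the underlying pattern emerge.

    # Illustrating the model: 2 data determine a line, while 100 noisy data
    # points let a trend/pattern emerge. The data are synthetic, used only
    # to illustrate the slide.
    import numpy as np

    rng = np.random.default_rng(0)

    # Two points: an exact line.
    x2 = np.array([0.0, 1.0])
    y2 = np.array([1.0, 3.0])
    slope, intercept = np.polyfit(x2, y2, 1)
    print(f"2 data:   y = {slope:.1f}x + {intercept:.1f}")

    # One hundred noisy points: the underlying trend (slope 2.0) emerges.
    x100 = np.linspace(0.0, 10.0, 100)
    y100 = 2.0 * x100 + 1.0 + rng.normal(0.0, 1.0, size=x100.size)
    trend_slope = np.polyfit(x100, y100, 1)[0]
    print(f"100 data: recovered slope ~{trend_slope:.2f} (true value 2.0)")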


Today, we think about working with data primarily in isolation. We need to start thinking about storing massive amounts of data (using devices like memristors), computing on the patterns using different techniques (like Cog Ex Machina) and using whole new approaches to focus the attention of those who need to act upon the context described by the data (like automation and gamification).


A holistic shift in how we approach sensing will be needed.


When I think about the work done by HP and Shell to gather detailed seismic information for oil detection and production, it makes me wonder about the possibilities of families of sensors gathering data for planetary exploration. It also makes me wonder about using crowdsourcing techniques to tease out meaning from the unique items that pattern matching across all that data can identify.


It should be an interesting panel, for the audience and the panelists.
