The Next Big Thing
Posts about next generation technologies and their effect on business.

Automating programming in a self-aware enterprise

 

There was an interesting article in NewScientist about a new approach to providing computing capabilities: computers with human-like learning that will program themselves.

 

Earlier this year, when ‘The Machine’ was announced at HP Discover, this scenario was one of the first things that came to mind, since memristors can be used to provide neuron-like behavior. When you have universal memory, whole new possibilities open up. The NewScientist article made me think about a number of applications in the enterprise, since these techniques will be as far beyond today’s cognitive computing as today’s approach is from the mainframe.

 

‘Always bet on the machine’ is a post from 2008 that contemplated the future of development. What I probably meant was: those who learn to work with the machine will still have a career.

 

I’ve mentioned before that much of today’s management function is ripe for automation. With approaches like this, an enterprise autopilot is conceivable that can optimize a business’s response to normal business situations. The question probably has more to do with ‘when’ than ‘if’.

 

Another look at the potential of memristors…

This week, I was listening to the Security Now podcast, and Steve Gibson (@sggrc, the host) went into a discussion of the potential of memristor, or RRAM, technology (episode 466; search for memristor in the notes).

 

HP’s current focus on taking advantage of this potential is something I blogged about in ‘The machine’ video.

 

It was interesting to hear his perspective on the capabilities (this post is a few years old, but it gets the point across from my perspective).

Will we need to update Moore’s law with the coming innovations...?

I was in an exchange with someone this week where we were talking about technology advances and some of the exponential ‘laws’ of IT. One of those is Moore’s law of transistor density. It is definitely useful, but maybe transistor density no longer has the meaning or breadth of relevance it used to.

 

For storage, it can take several transistors to store a one or a zero. But memristors, and some of the other technologies that will someday compete with today’s memory circuits, will not use transistors at all. Will we need to move to a density measure of 'bits held' instead?
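To make the transistors-per-bit point concrete, here is a small sketch using textbook cell-design figures. The numbers are rough approximations for illustration, not vendor specifications:

```python
# Illustrative transistors-per-bit figures for common memory cell designs.
# Values are textbook approximations, not vendor specs.
cells = {
    "SRAM (6T cell)": 6,                   # six transistors hold one bit
    "DRAM (1T1C cell)": 1,                 # one transistor plus a capacitor
    "NAND flash (MLC, 2 bits/cell)": 0.5,  # one floating-gate transistor stores 2 bits
    "Memristor crossbar": 0,               # bit held as resistance state, no transistor
}

for name, transistors_per_bit in cells.items():
    print(f"{name}: {transistors_per_bit} transistors per bit")
```

As the transistor count per bit heads toward zero, a transistor-density law stops describing storage at all, which is the point of the question above.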

 

Storage is just the start of the shifts in computing circuitry that are likely.

What's the future of data center efficiency?

 

I was in an interesting discussion today with a couple of people writing a paper for a college class on data center efficiency trends going forward. Although this is not necessarily my core area of expertise, I always have an opinion.

 

I think there are a few major trends in tackling the data center efficiency issue:

1)      The data center:

  • For existing data centers, ensure that the hot/cold aisle approach is used wherever possible to maximize the efficient flow of air to cool the equipment.
  • For new data centers, look to place them in areas that take advantage of the natural environment to make them more efficient. This was done with the Wynyard data center at HP. It is also why organizations look to move data centers to Iceland (why else would you place a data center on top of a volcano?).
  • There is also the perspective of “Why have a traditional data center at all?” Why not go with a containerized approach – like the HP EcoPod.

2)      For the hardware within the data center there are also a number of approaches (here or on the horizon):

  • Going DC-only inside the walls of the data center. Every voltage step-down is an opportunity for lost efficiency; minimize where and how conversion takes place.
  • Using the appropriate processor architecture for the job. This is what the Moonshot effort is trying to address. Don’t waste cycles through underutilization.
  • Why waste all the power spinning disk drives that are rarely accessed? One of the valuable applications of memristor technology is to provide high-performing yet very power-efficient data storage and access. It’s not out yet, but it’s coming soon.
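The DC-only point above can be sketched with simple arithmetic: the overall efficiency of a power-delivery chain is the product of the per-stage efficiencies, so removing a conversion stage helps more than tuning one. The stage values below are illustrative assumptions, not measurements from any particular facility:

```python
# Overall efficiency of a power chain = product of per-stage efficiencies.
# Stage efficiencies below are illustrative assumptions only.
def chain_efficiency(stages):
    eff = 1.0
    for stage_eff in stages:
        eff *= stage_eff
    return eff

ac_chain = [0.95, 0.92, 0.90]  # e.g., UPS, PDU transformer, server PSU
dc_chain = [0.96, 0.94]        # fewer conversion stages with facility-wide DC

print(f"AC chain: {chain_efficiency(ac_chain):.1%}")  # multiplies to ~78.7%
print(f"DC chain: {chain_efficiency(dc_chain):.1%}")  # multiplies to ~90.2%
```

Even with optimistic per-stage numbers, every added step-down compounds the loss, which is why minimizing conversions matters.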

I am sure there are a number of other areas I didn’t talk with them about. What are they?

 

One thought I had while writing this, a bit different from the goal of the paper but important to business, is the role of application portfolio management. Why waste energy on applications that are not actually generating value?

 

Solid state storage and our future

Flash memory was once viewed as a special tool to improve performance or allow for easy transportation of information (e.g., the thumb drive; I can’t recall the last time I gave someone a CD, let alone a floppy disk). Now flash memory devices are a standard component of any storage performance strategy.

 

When the Solid State Drive (SSD) came on the scene, it was used as a drop-in replacement for spinning-media hard drives, providing better performance, but the characteristics of an SSD are actually quite different. The storage industry has only now started to design storage systems that take advantage of the differences in flash memory.

 

The Flash Translation Layer (FTL) translates the typical hard drive block-device commands and structures into comparable operations on flash memory. The FTL is really a compromise for compatibility, since flash has no need for the block-and-sector structure. Additionally, SSD controllers must perform a number of extra functions such as garbage collection, wear leveling, and error correction, and must manage write amplification, since the writable life span of each flash storage cell is limited (although there is discussion of a cure to this long-time flash illness). We’re going to see more applications that skip the FTL and take direct advantage of flash’s memory-like access capabilities.
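The core of what an FTL does can be shown in a few lines: flash pages cannot be overwritten in place, so every logical rewrite goes to a fresh physical page and the old copy is marked stale for later garbage collection. This is a toy model to illustrate the mechanism, not any vendor’s controller logic:

```python
# Toy Flash Translation Layer: out-of-place writes plus a logical-to-physical
# map. Rewrites leave stale pages behind, which is why real controllers need
# garbage collection and wear leveling.
class ToyFTL:
    def __init__(self, num_pages):
        self.free = list(range(num_pages))  # physical pages available to write
        self.map = {}                       # logical page -> physical page
        self.stale = set()                  # old copies awaiting garbage collection

    def write(self, logical_page):
        if logical_page in self.map:
            # Flash can't overwrite in place: retire the old physical page.
            self.stale.add(self.map[logical_page])
        phys = self.free.pop(0)             # out-of-place write to a fresh page
        self.map[logical_page] = phys
        return phys

ftl = ToyFTL(num_pages=8)
ftl.write(0)   # logical page 0 lands on physical page 0
ftl.write(0)   # rewrite lands on physical page 1; page 0 is now stale
print(ftl.map, ftl.stale)
```

Applications that bypass the FTL take on this bookkeeping themselves in exchange for addressing flash directly.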

 

High-performance software such as databases currently circumvents the operating system file system to attain optimal performance. Modern file systems such as the Write Anywhere File Layout (WAFL), ZFS (which once stood for the Zettabyte File System), and the B-tree file system (Btrfs) are designed to take advantage of the various storage media’s capabilities. The resulting systems are more efficient and easier to manage.

 

Storage system performance was a concern when operations were measured in milliseconds. It matters even more on flash devices, whose operations are measured in microseconds. Future technologies like the memristor, which will be faster still, demand an optimized approach to the long-term storage and access of information. Compromises for convenience will exist, but the penalties in performance will be high, impacting the application portfolios of organizations.
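The milliseconds-to-microseconds shift is easy to quantify: serial throughput is the reciprocal of per-operation latency. The latencies below are order-of-magnitude illustrations (the memristor figure is a projection, not a product specification):

```python
# Serial ops/sec = 1 / per-operation latency. Latencies are order-of-magnitude
# illustrations; the memristor number is a projection, not a product spec.
latencies_sec = {
    "Spinning disk": 5e-3,             # ~5 ms of seek plus rotation
    "NAND flash read": 50e-6,          # ~50 microseconds
    "Memristor (projected)": 100e-9,   # ~100 nanoseconds
}

for name, latency in latencies_sec.items():
    print(f"{name}: {1 / latency:,.0f} serial ops/sec")
```

Each step is roughly two orders of magnitude, which is why software layers designed around millisecond media become the bottleneck on faster ones.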

About the Author(s)
  • Steve Simske is an HP Fellow and Director in the Printing and Content Delivery Lab in Hewlett-Packard Labs, and is the Director and Chief Technologist for the HP Labs Security Printing and Imaging program.
The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation.