The Next Big Thing
Posts about next generation technologies and their effect on business.

SDN - the foundation for the agile enterprise

I recently facilitated a discussion among several technical leaders in HP on the topic of SDN, and you can listen to it online. Much of the conversation centered on the differences between the relatively inflexible view of networking that everyone has been so successful with and the more dynamic possibilities with SDN.




Software-defined networks are a relatively recent yet foundational technology innovation, changing how organizations should think about the value of networks and even networking itself. The role of SDN in an agile organization does not seem to be nearly as well understood as it should be.


We’re used to talking about software-defined data centers, which allow for the dynamic reconfiguration of computing elements, turning processors off when they are no longer needed or redeploying them elsewhere. Networks to date have not been up to this dynamic standard, but SDN can address that.


When I consider some of the innovations possible with SDN, I think back to the 1980s, when organizations were first starting to embrace object-oriented programming. The thought that data and processing could be integrated was quite radical then, yet it is quite common today. Object-oriented techniques, however, are applied only to data at rest, essentially data sitting within a system.


With SDN there is the possibility of integrating data on the move with processing. A quite different set of possibilities opens up, but only if we grasp the potential and plan around it.
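The core idea behind that flexibility can be sketched in a few lines: an SDN controller programs match-action rules into switch flow tables, so network behavior changes in software rather than by recabling hardware. This is an illustrative toy model, not any real controller API; the class, rule names, and actions are all made up:

```python
# Toy sketch of the SDN match-action idea: a controller installs rules into
# a switch's flow table, and the switch forwards packets by consulting them.
# Everything here is illustrative; real controllers (OpenFlow, etc.) differ.

class FlowTable:
    def __init__(self):
        self.rules = []  # (match_fn, action) pairs, checked in install order

    def install(self, match_fn, action):
        """The 'controller' reprograms the network by adding rules."""
        self.rules.append((match_fn, action))

    def forward(self, packet):
        for match_fn, action in self.rules:
            if match_fn(packet):
                return action
        return "drop"  # table-miss default


table = FlowTable()
table.install(lambda p: p["dst_port"] == 80, "send_to_web_tier")
table.install(lambda p: p["dst_port"] == 443, "send_to_web_tier")

print(table.forward({"dst_port": 80}))  # send_to_web_tier
print(table.forward({"dst_port": 22}))  # drop
```

The point of the sketch is that the forwarding policy lives in software the controller can rewrite at any moment, which is exactly the dynamism the software-defined data center has been missing at the network layer.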




What's the future of data center efficiency?


I was in an interesting discussion today with a couple of people writing a paper for a college class on data center efficiency trends going forward. Although this is not necessarily my core area of expertise, I always have an opinion.


I think there are a few major trends in tackling the data center efficiency issue:

1) The data center:

  • For existing data centers, ensure that the hot/cold aisle approach is used wherever possible to maximize the efficient flow of air to cool the equipment.
  • For new data centers, look to place them in locations where the natural environment makes them more efficient. This was done with the Wynyard data center at HP. It is also why organizations look to move data centers to Iceland (why else would you place a data center on top of a volcano?).
  • There is also the perspective of “Why have a traditional data center at all?” Why not go with a containerized approach, like the HP EcoPod?

2) For the hardware within the data center, there are also a number of approaches (here or on the horizon):

  • Going DC-only inside the walls of the data center. Every voltage step-down is an opportunity for efficiency loss, so minimize where and how conversion takes place.
  • Using the appropriate processor architecture for the job. This is what the Moonshot effort is trying to address. Don’t waste cycles through underutilization.
  • Why waste all the power spinning disk drives that are rarely accessed? One of the valuable applications of memristor technology is to provide high-performing yet very power-efficient data storage and access. It is not available yet, but it is coming soon.
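All of the facility-level ideas above roll up into the standard efficiency metric for data centers, Power Usage Effectiveness (PUE): total facility power divided by IT equipment power, with 1.0 as the theoretical ideal. The numbers below are made-up illustrations, not measurements from any real site:

```python
# Power Usage Effectiveness (PUE): the standard data center efficiency metric.
# PUE = total facility power / IT equipment power; 1.0 is the ideal, meaning
# every watt drawn goes to computing rather than cooling and power conversion.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Illustrative figures: a legacy room vs. one using hot/cold aisles and
# free-air cooling of the kind discussed above.
legacy = pue(total_facility_kw=2000, it_equipment_kw=1000)      # 2.0
optimized = pue(total_facility_kw=1200, it_equipment_kw=1000)   # 1.2
print(legacy, optimized)
```

Hot/cold aisle containment, siting in cool climates, and containerized designs all attack the numerator of this ratio, the overhead power that never reaches a processor.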

I am sure there are a number of other areas I didn’t talk with them about. What are they?


One thought I had while writing this, a bit different from the goal of the paper but important to business, is the role of application portfolio management. Why waste energy on applications that are not actually generating value?


Network Fabric for Cloud

Today, HP launched the industry’s most complete software-defined network fabric for cloud. This network fabric is built on the HP FlexNetwork architecture, enabling business agility for clients by delivering two times greater scalability and 75 percent less complexity than current network fabrics while reducing network provisioning time from months to minutes.


This is possible by:

  • Improving IT productivity by unifying the virtual and physical fabric with new HP FlexFabric Virtual Switch 5900v software, which, in conjunction with the HP FlexFabric 5900 physical switch, delivers advanced networking functionality such as policies and quality of service to a VMware environment. Integrated Virtual Ethernet Port Aggregator (VEPA) technology provides clear separation between server and network administration to deliver operational simplicity.
  • Reducing data center footprint with the HP Virtualized Services Router (VSR), which allows services to be delivered on a virtual machine (VM), eliminating unnecessary hardware, by leveraging the industry's first carrier-class software-based Network Function Virtualization (NFV).

As organizations move to software-defined networks, some fundamental changes in approach will be required, and these products are a start down that path. A launch video offers a bit more high-level discussion and some details.


Finally, the launch of Moonshot

After talking about the technology for a long time, HP today announced a new class of server for social, mobile, cloud, and big data.



The HP Moonshot System is a leap forward in infrastructure design that addresses the speed, scale, and specialization needed for a bold new style of IT.

HP ProLiant Moonshot servers are designed and tailored for specific workloads to deliver optimum performance. The servers share management, power, cooling, networking, and storage. This architecture is key to achieving 8x efficiency at scale, enabling a 3x faster innovation cycle, and bringing thousands of cores to bear on a project. The system uses 86% less energy and 80% less space, costs 77% less, and is significantly less complex to install and maintain.


After talking with other technologists, I believe it is a start down a path that will change both how software is written and how solutions are envisioned. When I look at the initial product data sheet, I see a 4.3U chassis that can hold up to 45 server cartridges. As processing capability improves, so can the cartridges. A full rack of these could replace the computational capability of an entire data center from just a few years ago. Granted, it excels at certain types of computing needs.
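The density claim is easy to check with back-of-the-envelope math, assuming a standard 42U rack and the data-sheet figure of 45 cartridges per 4.3U chassis (the rack height is my assumption, not from the announcement):

```python
# Back-of-the-envelope Moonshot rack density, assuming a standard 42U rack.
import math

RACK_UNITS = 42            # assumed standard rack height
CHASSIS_UNITS = 4.3        # Moonshot chassis height from the data sheet
CARTRIDGES_PER_CHASSIS = 45

chassis_per_rack = math.floor(RACK_UNITS / CHASSIS_UNITS)     # 9 chassis
servers_per_rack = chassis_per_rack * CARTRIDGES_PER_CHASSIS  # 405 cartridges
print(chassis_per_rack, servers_per_rack)
```

Roughly 400 server cartridges in a single rack is what makes the comparison to an entire data center of a few years ago plausible, at least for the workloads these cartridges target.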


As the HP Pathfinder Innovation Ecosystem improves and continues to bring together leading partners, a broader set of problems can be addressed.



This means having access to the latest technology and solutions at a groundbreaking time-to-market pace measured in months rather than years. I can’t wait to see what next big thing will spring forth from this.

The value of hindsight, as disaster foresight

With all this week’s examples of disasters, Sandy, blizzards…, I was reminded of the IT calamities I’ve encountered or discussed with people who were at the sharp end of the stick.


Recovery failures take place even in organizations that plan and test their disaster recovery plans on a regular basis. It is not a lack of preparedness (in the traditional sense) that catches organizations off guard, but a lack of imagination about what can really go wrong. Some of the big problems I can recall:

  • When the Three Mile Island accident took place, EDS had a large data center downwind in Pennsylvania. What do you do when the National Guard comes in and tells you to drop everything and leave? You can’t get tapes. You can’t start jobs. You just have to leave. An experience like that makes you look at disaster plans a bit differently from then on.
  • During Katrina, there were situations where an organization’s data center was physically OK, but there was no power and none likely soon, and the off-site backup was underwater. Extreme measures had to be taken to fly someone into the data center, retrieve the latest information, and move it somewhere else for processing. A situation like this makes you think differently about geographic diversity and whether real-time replication is timely enough.
  • Let’s not forget that one of the issues that hit Fukushima was not insufficient backup cooling, but that the backup pumps were flooded by the tsunami. Sometimes it is not just one disaster you need to deal with.
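The geographic-diversity lesson above can even be made mechanical: check that your failover site is far enough from the primary to sit outside the same disaster footprint. A minimal sketch using the standard haversine great-circle formula; the sites, coordinates, and 500 km threshold are illustrative assumptions, not a recommendation:

```python
# Sketch: flag a failover site that shares a disaster footprint with the
# primary. Sites and the 500 km threshold are illustrative assumptions.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative pairing: a primary near Houston, a failover near Dallas.
distance = haversine_km(29.76, -95.37, 32.78, -96.80)
# A hurricane or regional grid failure can span hundreds of kilometers,
# so sites this close may fail together.
if distance < 500:
    print(f"Warning: sites only {distance:.0f} km apart; shared-disaster risk")
```

A distance check alone would not have caught the dam-and-fault-and-wildfire site in the story below; it is one automated test among the many that imaginative reviewers should add.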

It is the variety of personal experience, and interaction with a network of experienced practitioners, that can be useful in pointing out flaws.


Once I was part of a team brought in to look at a client’s data centers. They were very proud of their disaster recovery center, until we pointed out that the failover site was in the flood plain below a dam, near an earthquake fault, and close to where wildfires regularly break out. It would likely be most vulnerable exactly when they needed it. They’d never really thought about it quite that way before.


That’s one reason why the disaster scenarios you test in disaster planning should come from someone outside your normal organization. Shake it up, move it around, and get some imagination through diverse perspectives, rather than just counting on someone to think up a good case.

About the Author(s)
  • Steve Simske is an HP Fellow and Director in the Printing and Content Delivery Lab in Hewlett-Packard Labs, and is the Director and Chief Technologist for the HP Labs Security Printing and Imaging program.