The Next Big Thing
Posts about next generation technologies and their effect on business.

What’s the difference between SDN and NFV?

I was in a discussion the other day with someone focused on the networking services space, and they kept using the acronym NFV without really defining it. I dug in a bit, and this is what I found.

 

Network Functions Virtualization (NFV) aims to address the issue of a large and increasing variety of proprietary hardware appliances. Its approach is to leverage standard IT virtualization technology to consolidate many types of network equipment onto industry-standard, high-volume servers, switches and storage. These more standard devices can be located in data centers, network nodes or at end-user premises. NFV is applicable to any data plane packet processing and control plane function in fixed and mobile network infrastructures.
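
To make the idea concrete, here is a minimal sketch (my own illustration, not drawn from any NFV specification or vendor product) of one network function, a toy round-robin TCP load balancer, written as ordinary software in Python. The back-end addresses and ports are hypothetical placeholders; the point is simply that a function that traditionally lived in a proprietary appliance can run as a plain program on a standard, virtualized server.

```python
# A toy "virtualized network function": a round-robin TCP load balancer
# implemented as plain software rather than a dedicated hardware appliance.
# The back-end addresses below are hypothetical placeholders.
import itertools
import socket
import threading

BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080)]  # hypothetical service VMs
LISTEN_ADDR = ("0.0.0.0", 8000)

backend_cycle = itertools.cycle(BACKENDS)

def pipe(src, dst):
    """Copy bytes one way until the connection closes, then tear both ends down."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client):
    """Pick the next back end and relay traffic in both directions."""
    backend = socket.create_connection(next(backend_cycle))
    threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
    threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

def main():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LISTEN_ADDR)
    listener.listen(128)
    while True:
        client, _ = listener.accept()
        handle(client)

if __name__ == "__main__":
    main()
```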

 

I’ve mentioned Software Defined Networking (SDN) in this blog before. NFV and SDN are mutually beneficial, but they are not dependent on each other; that was one of the points of confusion I had during the initial conversation. NFV is focused on consolidating network functions and reducing hardware costs. Although these devices could be virtualized and managed using techniques like SDN, they don’t have to be.

 

The concepts of NFV are not really new. Even so, a more formalized approach with PoCs … will hopefully help accelerate the changes taking place in the communications industry, allowing for reduced operational complexity, greater automation and self-provisioning, much like what is happening in the cloud space (whether through public or private techniques) for the rest of IT.

 

Just as I was finishing up this post, I saw that Dave Larsen (of HP) put out a post about what HP is doing in both SDN and NFV. Expect to see more about this when HP releases an HP Industry Edge e-zine devoted entirely to NFV in the near future.

HP ConvergedSystem 100 for Hosted Desktops

One thing I am always looking for at a conference like HP Discover is products that might be underappreciated at first glance. The ConvergedSystem 100 may be one.

 

A while back, HP came out with Moonshot. This massive application of computing capability was significantly cooler and more efficient than other options in the marketplace. It had one big issue: commercial software vendors didn’t have a licensing model that aligned with the number of cores this box could bring to bear on a problem. So for most organizations, the choices were either to write the software themselves or to use open source.

 

Now there is a solution that takes advantage of the cartridge approach used in Moonshot to tackle a problem that many organizations have: the need for a low-cost, no-compromise PC experience. This solution (with the m700 cartridge) provides up to 6x faster graphics frames per second than similar solutions and up to 90% faster deployment (e.g., up and running in about 2 hours with Citrix XenDesktop and no SAN or virtualization layer to complicate things). It also delivers 44% better total cost of ownership while consuming 63% less power.

 

Combine that with the HP t410 All-in-One, a Power-over-Ethernet thin-client solution, and there are some real possibilities for power savings and flexibility.

HP Storage announcements at Discover

I don’t do posts about HP product announcements, but since this is the week of HP Discover, why not? There definitely are some changes in the wind for storage.

 

One of the first things out of the chute today at HP Discover was a whole new set of capabilities in the area of software-defined storage. With all the options in storage today (SSD, low end, backup…), the complexity of managing the environment is reducing organizations’ flexibility. Storage solutions are becoming complex, inefficient and rigid.

 


The family of converged storage solutions announced today should help address many of these issues by providing a single architecture: a flexible and extensible approach that embraces block, object and file storage on devices ranging from HDDs to SSDs/flash, delivering:

  • Performance acceleration – eliminating system bottlenecks
  • Efficiency optimization – extending the life and utilization of existing media
  • System resiliency – providing constant information/application access
  • Data mobility – federating across systems and sites

The HP 3PAR StoreServ solutions range from the low-end 7200 to the high-performance 7450 and the high-scaling 10800, all running from a single architecture and interface.

 

The StoreServ 7450 announced today is:

  • Accelerated: over 500,000 IOPS at less than 0.6 ms latency, thanks to a massively parallelized architecture, flash-optimized cache algorithms and QoS
  • Efficient: reduces capacity requirements by up to 50% and extends flash lifespan through multi-layered, fine-grained virtualization with hardware-accelerated data compaction
  • Bulletproof: eliminates downtime for performance-critical applications with a quad-controller design featuring persistent cache and ports, peer failover, and multi-site replication
  • Futureproof: lets organizations move data seamlessly to balance performance and cost, with a simple solution across Tier 1, midrange and flash through federated data mobility

Also announced was HP StoreOnce VSA, a software-defined approach that enables more agile and efficient remote information protection.

 

The breadth of these announcements should enable greater flexibility for organizations that maintain their own infrastructure.

Network Fabric for Cloud

Today, HP launched the industry’s most complete software-defined network fabric for cloud. This network fabric is built on the HP FlexNetwork architecture, enabling business agility for clients by delivering two times greater scalability and 75 percent less complexity than current network fabrics, while reducing network provisioning time from months to minutes.

 

This is possible by:

  • Improving IT productivity by unifying the virtual and physical fabric with the new HP FlexFabric Virtual Switch 5900v software, which, in conjunction with the HP FlexFabric 5900 physical switch, delivers advanced networking functionality such as policies and quality of service to a VMware environment. Integrated Virtual Ethernet Port Aggregator (VEPA) technology provides clear separation between server and network administration to deliver operational simplicity.
  • Reducing the data center footprint with the HP Virtualized Services Router (VSR), which allows services to be delivered on a virtual machine (VM), eliminating unnecessary hardware by leveraging the industry's first carrier-class, software-based Network Functions Virtualization (NFV).

As organizations move to software-defined networks, some fundamental changes in approach will be required, and these products are a start down that path. Here is a video with a bit more high-level discussion and some details:

 

Do you need a technical bucket list?

I wrote a post a while back about what a technologist can do to stay relevant, and at the time I thought that a list like that would be relatively transient. It turns out that, unlike buzzwords, the underlying technologies are usually here for the long haul (just ask a COBOL programmer). The half-life of the experience is likely much longer than I thought.

 

I was in a discussion today where we talked about the list of experiences a technologist needs in order to talk with some degree of authority about the next big thing in an enterprise context. Naturally, a person can’t know everything to the same depth, but there is a basic, useful level of hands-on familiarity for every strategic technologist to have.

Some of the obvious ones I’ve mentioned before were:

  • Install a public cloud-based virtual machine and use it for something (a minimal sketch follows this list)
  • Write an application for a mobile device and get it listed in an app store
  • Take an online class (or maybe a couple every year) through a tool like Coursera
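
As a concrete example of the first item, here is a minimal sketch of launching a public cloud VM programmatically. It uses AWS via the boto3 library purely for illustration; the AMI ID, key pair name and region are placeholders, and any provider's equivalent API or console would serve the exercise just as well.

```python
# A minimal sketch of spinning up a public cloud VM programmatically.
# boto3/AWS is used only as an example; the AMI ID, key pair and region
# below are placeholders, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",   # placeholder image ID
    InstanceType="t2.micro",
    KeyName="my-key-pair",    # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Launched:", instance_id)

# Wait until the instance is running, then look up its public IP address.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
desc = ec2.describe_instances(InstanceIds=[instance_id])
print(desc["Reservations"][0]["Instances"][0].get("PublicIpAddress"))

# Remember to clean up when the experiment is done:
# ec2.terminate_instances(InstanceIds=[instance_id])
```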

A couple of those items would have been as applicable 2-3 years ago as they are today, while others have changed quite radically in capability over that timeframe. I’ve done each of them at least twice for one reason or another, and each time I learned something new.

 

If I were to add a new one that I haven’t touched in a very long time, it would likely be something to do with analytics. There is a bit of a problem with this one, though, since having enough data to do something useful and interesting may be tough.
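
One way around the data problem, at least for learning purposes, is to synthesize a dataset with a known relationship and practice the mechanics on that. A minimal sketch (entirely hypothetical numbers, my illustration rather than anything from the original post) using NumPy:

```python
# Practice the mechanics of a simple analysis on synthetic data with a known
# answer: generate it, summarize it, fit a model, and check that the fit
# recovers the relationship you built in. All numbers are made up.
import numpy as np

rng = np.random.default_rng(seed=42)

# Fake "server load vs. response time" observations: response ~ 20 + 1.5 * load.
load = rng.uniform(0, 100, size=500)
response_ms = 20 + 1.5 * load + rng.normal(0, 5, size=500)

print("mean load:", round(load.mean(), 1),
      "mean response (ms):", round(response_ms.mean(), 1))

# Least-squares line; the fitted coefficients should land near 1.5 and 20.
slope, intercept = np.polyfit(load, response_ms, deg=1)
print(f"fitted: response ~ {intercept:.1f} + {slope:.2f} * load")
```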

 

I mentioned I was going to experiment with 3D printing. I now need to find something in the Internet-of-Things space as well.

 

I’ve probably looked at all these things enough to understand what they’re good for, but actually tackling a project brings that perspective to a whole other level. The hands-on experience doesn’t need to be production-ready quality, since the goal is as much about gaining exposure to the issues and ideas as it is about solving a particular problem.

 

What other areas should a technologist tackle? And how? I haven’t even mentioned anything in the networking space. Anyone who has looked under the covers of Software Defined Networking probably knows the depth of impact that changes in this space will have on the future.

 

The book Outliers talked about spending 10,000 hours on an area to become great. I wonder how tackling 400 technology domain experiences would set you up to be successful; that’s 10 a year for 40 years.
