Reality Check: Server Insights
Get HP server news, technology trends, and product information to stay up to date with what is happening in the server world.

What’s the future of data center networks? Where does client virtualization fit in?

Data centers today—and where the problems lie

 

Most data centers today have a three- or four-tier hierarchical networking structure consisting of access layer switches (at the server/network edge), aggregation switches, and core switches. Three-tier networking architectures were designed around client-server applications and single-purpose application servers. Client-server applications caused traffic to flow primarily in north/south (N/S) patterns: from a server up to the data center core, then to the campus core, where it moves out to the campus-wide network or the Internet. The vast majority of the intelligence in the network is concentrated in these large core switches.
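
To see why this hierarchy penalizes peer traffic, it helps to count the switch hops a frame crosses for each pattern. Here is a minimal sketch (a hypothetical topology model in Python, not an HP tool) comparing N/S traffic with E/W traffic between racks in a classic three-tier design:

# Minimal sketch (hypothetical topology, not an HP tool): count the switch hops
# a frame crosses in a classic three-tier design for different traffic patterns.

def hops_three_tier(src_rack: str, dst_rack: str, leaves_dc: bool = False) -> int:
    """Rough hop count for a flow in an access/aggregation/core hierarchy."""
    if leaves_dc:
        # North/south: access -> aggregation -> core, then out to campus/Internet
        return 3
    if src_rack == dst_rack:
        # Both servers hang off the same access (edge) switch
        return 1
    # East/west between racks: up through aggregation (and often core) and back down
    return 5  # access -> aggregation -> core -> aggregation -> access

print("N/S to Internet:", hops_three_tier("rack1", "-", leaves_dc=True))  # 3
print("E/W same rack:  ", hops_three_tier("rack1", "rack1"))              # 1
print("E/W cross rack: ", hops_three_tier("rack1", "rack7"))              # 5

Cross-rack E/W flows pay the full trip up and back down the hierarchy, and that is exactly the traffic pattern today's applications generate most.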

 

However, this data center network architecture is becoming problematic. Today’s applications are more distributed and oriented toward service delivery. These application changes result in:

 

  • Greater network traffic volume
  • More complex data relationships between application servers
  • More east/west (E/W) traffic between peers, such as server-to-server and virtual machine-to-virtual machine flows, rather than primarily N/S traffic flows

We suggest that you consider adopting technologies that foster E/W traffic flow and that optimize the server/network edge by distributing intelligent management capabilities to the edge rather than focusing all the management at the core.

 

Changing business applications

 

The continued growth of server virtualization (virtual machines, VMs), virtualized desktop infrastructures (VDI), cloud-computing models, federated or distributed applications, and mobile access devices is shifting networking traffic patterns toward more E/W traffic flow (Table 1). Industry sources attribute up to 80% of network traffic for these next-generation applications to E/W traffic flows.[1] 

 

We expect web applications to deliver integrated, context-specific information and services, and we expect them right away: low-latency, high-performance connections are critical. At the same time, cloud computing and service-oriented applications are introducing more stringent service-level and security demands.

 

A winning client virtualization and BladeSystem scenario

 

Client virtualization technology, such as virtual desktop infrastructure (VDI), uses a specialized type of virtual machine (VM). With VDI, a client desktop is created as a VM. However, the VDI instance is more than a simple VM. It includes the real-time compilation of the end user’s data, personal settings, and application settings with a core OS instance and a shared generic profile. The end-user applications are either installed locally as a fully packaged instance or streamed from outside the VM. Applications and user personality are injected into the core desktop VM. For example, VDI can let a user access a VM running the Microsoft Windows XP OS, leave for the evening, and come back the next workday to a Microsoft Windows 7-based VM, with all of their data, customized desktop settings, and customized application settings intact.
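
As a rough illustration of that layered assembly (the types, share path, and application names below are hypothetical examples, not an HP or hypervisor API), a desktop could be composed at logon from a shared gold image plus per-user layers:

# Illustrative sketch only (hypothetical types and paths, not an HP or VDI vendor API):
# a VDI desktop assembled at logon from a shared gold image plus per-user layers.
from dataclasses import dataclass, field

@dataclass
class DesktopVM:
    gold_image: str                  # shared, generic core OS instance
    user: str
    user_data: str = ""              # pointer to the user's data share
    personal_settings: dict = field(default_factory=dict)  # persona / profile settings
    applications: list = field(default_factory=list)       # packaged locally or streamed in

def compose_desktop(user: str, gold_image: str) -> DesktopVM:
    """Assemble the user's desktop in real time from the shared image and user layers."""
    vm = DesktopVM(gold_image=gold_image, user=user)
    vm.user_data = f"\\\\fileserver\\home\\{user}"        # hypothetical data share
    vm.personal_settings = {"wallpaper": "blue", "mapped_drives": ["H:"]}
    vm.applications = ["office-suite (streamed)", "erp-client (packaged)"]
    return vm

print(compose_desktop("jdoe", "win7-x64-gold"))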

 

A standard VDI configuration would use rack-based servers distributed across the data center with top-of-rack (ToR) switches at the network edge. Network traffic from each rack of the distributed servers (for example, Microsoft Exchange servers, Active Directory servers, user application servers, and the VDI servers) and even storage would travel to its own ToR switch before traveling to the network core and then out to the client.

 

Our VDI reference architectures on HP BladeSystem with HP Virtual Connect (VC) hardware let the majority of this network traffic remain inside the VC domain as E/W traffic that never travels out to the network core (Figure 3). A smaller, well-defined amount of traffic for the connection protocol, user data access, and management exits the VC domain (see the rough sketch after the list below). This design works to:

 

  • Optimize the E/W traffic
  • Minimize the need for expensive switch ports
  • Allow a single infrastructure administrator to manage the intra-domain traffic
  • Improve performance and reliability by using mostly cable-free internal connections between hosts and management services
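
To make the split concrete, here is a rough back-of-the-envelope sketch. The traffic classes and Gbps figures are placeholders for illustration, not measurements from our reference architectures; the point is simply that most flows can stay inside the VC domain while only connection-protocol, user data, and management traffic exits:

# Back-of-the-envelope sketch; the Gbps figures are placeholders, not measurements.
flows = {
    # traffic class: (approx bandwidth in Gbps, stays inside the VC domain?)
    "VDI broker <-> session hosts":        (2.0, True),
    "Session hosts <-> Exchange/AD/apps":  (4.0, True),
    "Session hosts <-> storage":           (3.0, True),
    "Display protocol out to clients":     (0.8, False),
    "User data access (egress)":           (0.5, False),
    "Management / monitoring":             (0.2, False),
}

internal = sum(bw for bw, inside in flows.values() if inside)
egress = sum(bw for bw, inside in flows.values() if not inside)
total = internal + egress

print(f"Stays in VC domain (E/W): {internal:.1f} Gbps ({internal/total:.0%})")
print(f"Exits to core / clients:  {egress:.1f} Gbps ({egress/total:.0%})")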

Possible solutions for optimizing E/W traffic flow

This section describes some architectures and technologies to consider when optimizing your data center structure for E/W traffic flows. These include:

  • Fostering E/W traffic flow at the physical server/network edge
  • Distributing management intelligence at the physical server/network edge rather than concentrating it at higher layers in the network
  • Flattening the L2 network by using technologies like HP Intelligent Resilient Framework (IRF)
  • Making your L2 network more efficient by implementing multi-path standards like TRILL or SPB

 

Identify traffic bottlenecks

 

It is important to identify where the E/W traffic flows are occurring in your data center: at the physical switch/server edge between physical servers, or internal to the virtualized server (VM-to-VM). There are tradeoffs you can make, depending on whether you want to optimize E/W traffic flow by providing intelligent management at the physical switch/server edge, or optimize for performance inside a physical server between multiple VMs (with possible degradation of network management visibility and control).
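
One simple way to get that picture is to classify sampled flow records (for example, from flow collectors you may already run) by whether both endpoints sit inside the data center address space. The subnets and records in this sketch are hypothetical:

# Hedged sketch: classify sampled flow records as east/west or north/south based
# on whether both endpoints sit inside the data center address space.
# The subnets and sample flows below are hypothetical.
import ipaddress
from collections import Counter

DC_NETS = [ipaddress.ip_network("10.10.0.0/16")]   # hypothetical data center ranges

def in_dc(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in DC_NETS)

def classify(src: str, dst: str) -> str:
    return "E/W" if in_dc(src) and in_dc(dst) else "N/S"

sample_flows = [
    ("10.10.1.5", "10.10.2.9"),     # server-to-server inside the data center
    ("10.10.1.5", "198.51.100.7"),  # out to a client on the campus/Internet side
    ("10.10.3.2", "10.10.3.4"),     # VM-to-VM on the same host or rack
]

print(Counter(classify(s, d) for s, d in sample_flows))  # Counter({'E/W': 2, 'N/S': 1})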

 

Where HP Virtual Connect fits in

 

One of the ways to optimize the server edge for E/W traffic flow is by implementing HP Virtual Connect technology. Virtual Connect is a set of interconnect modules and embedded software for HP BladeSystem c-Class enclosures that provides server edge and I/O virtualization. It delivers direct server-to-server connectivity within an enclosure, which is especially important for the latency-sensitive, bandwidth-intensive applications we’ve been discussing. For example, as described in the Client Virtualization section, using BladeSystem with Virtual Connect lets you design an infrastructure in which most network traffic never leaves the enclosure.

Using Flex-10 technology with FlexFabric adapters lets you consolidate multiple network connections (data and storage traffic) onto a single 10 Gb Ethernet pipe. This lets you reduce the number of physical adapters, cables, switches, and required ports at the server edge.
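
In practice this means carving one physical 10 Gb port into several smaller server-facing connections. The sketch below is just a sanity check under that assumption; the connection names and the allocation itself are hypothetical, not output from an HP configuration tool:

# Simple sanity check, not an HP configuration tool: verify that per-function
# bandwidth carved out of one 10 Gb server port stays within the physical limit.
PORT_CAPACITY_GB = 10.0

connections = {            # hypothetical allocation for one physical 10 GbE port
    "vm-production": 4.0,
    "live-migration": 2.0,
    "storage": 3.0,
    "management": 1.0,
}

allocated = sum(connections.values())
assert allocated <= PORT_CAPACITY_GB, f"Over-subscribed: {allocated} Gb > {PORT_CAPACITY_GB} Gb"
for name, bw in connections.items():
    print(f"{name:15s} {bw:4.1f} Gb  ({bw / PORT_CAPACITY_GB:.0%} of the port)")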

 

You can also use stacking links with the VC Ethernet modules to allow all server NICs in the VC domain to access any VC Ethernet uplink port. Using these module-to-module links, a single pair of uplinks can serve as the data center network connection for the entire VC domain. This reduces core switch traffic, because internal communication stays inside the Virtual Connect domain. The stacking links provide high-speed connections between the enclosures, and you can increase the server-edge bandwidth by adding more physical links to the stack.

 

One size does not fit all—even in the same data center

 

As part of a Converged Infrastructure, virtualization, cloud computing, and federated applications can bring flexibility, scalability and cost advantages to your business. They can also significantly alter how network traffic flows in your data center. Our position is to support multiple data center options rather than forcing you down a proprietary path that may limit other choices in your infrastructure.

 

Keep in mind that portions of your data center, such as greenfield cloud infrastructure deployments, may require a high-performance, intelligent edge that supports heavy E/W traffic flow. Other parts of your data center may continue to operate with the traditional three-tier architecture.

 

Learn more

 

>> HP networking blog: Simplifying data center architecture

>> HP Advanced Data Center Network Architectures Reference Guide          

>> HP Virtual Desktop Infrastructure        

>> HP Converged Infrastructure  

  

Figure 1: A typical data center structure today uses three layers: access, aggregation, and core.

 


 

The dotted line around the Blade server/Top of rack (ToR) switch indicates an optional layer, depending on whether the interconnect modules replace the ToR or add a tier.

 

Figure 2: Architectural overview of VDI


 

Figure 3. When using a VDI configuration with Virtual Connect technology, only a small amount of production (protocol) and management traffic exits the VC domain, thus optimizing the E/W traffic flow.

 


 

 

Table 1: New software applications are driving changes to networking infrastructure

 

Yesterday’s Applications                      | Applications for 2011 and beyond
----------------------------------------------|-------------------------------------------------------------------------
Single application on single-purpose server   | Multiple applications operating on VMs within a single physical server
Client-server architecture                    | Distributed computing applications (massively parallel compute clusters)
                                              | Cloud computing and new service delivery models: Platform as a service (PaaS), Software as a service (SaaS), Infrastructure as a service (IaaS), “X” as a service (XaaS)

 



[1] Reference for 80% E/W traffic flow

 

Contributor  Doug Hart,

HP Client Virtualization Engineering

Labels: vmworld2011