Guest blog written by Brad Mayes, Systems Engineer, HP Industry Standard Server Division
It’s official: the proliferation of compute devices is staggering, with compute pretty much everywhere you look these days. Things were a bit different when I was a teen. When I got my first home computer, an Atari 800, I was the only person in my neighborhood who had such a “luxury,” but today compute seemingly resides in every aspect of our lives. Think about it: your television, phone, car, motorcycle, heck, even your tennis shoes may have compute in them. Your house likely turns the heat or air conditioning on automatically as needed. Many of you may use a GPS location-tracking app on your smartphone to keep a “close eye” on your kids. Do you upload photos to your favorite social media site right after you capture them? If so, then it only makes sense that as our expectation grows for applications to follow us everywhere we go, the idea of “Cloud Computing” must grow with it.
But for an application to follow us, we will need a different compute model.
Here’s how it works today. Compute management is very hierarchical; it looks a lot like how most Ethernet networks are architected. You have a core (mainframe), a distribution layer, and an access layer. This model has worked very well for many years because there were some pretty safe assumptions you could make at the time it was created. For example:
- Workloads are predictable
- 95% of traffic is North/South
- The core is highly intelligent, so routing decisions can be made there
Unfortunately, these assumptions can no longer be made. The proliferation of compute means that my loads are no longer predictable. Instead, they are mobile and move from place to place. Different apps can go viral at a moment’s notice, and then lose popularity just as quickly. Expectations of instant-on, always-available applications mean that I need the ability to move application data closer to the end user. This creates havoc with the idea that most of my network traffic is North/South, because I now generate a large amount of East/West traffic as well.
Meanwhile, the Ethernet networking world is slowly changing. Software Defined Networking (SDN) is becoming a more popular point of discussion, with supporting products just starting to come onto the market, so you will soon see implementations of SDN become more common.
TIP: Openflow.org is a good place to go for more information on this subject!
The most basic premise behind SDN is that I can move the intelligence I need down to the edge, eliminating the need for a highly intelligent core. I like to call it a “village of idiots” (can you believe marketing is not fond of this name?!) because I have “just enough” intelligence at the edge, close to the application.
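To make that concrete, here is a minimal sketch, in Python, of the “just enough intelligence at the edge” pattern: an edge switch keeps a small local flow table and only asks a central controller on a miss. Every name here (Controller, EdgeSwitch, rule_for) is hypothetical and purely illustrative; a real SDN deployment speaks a protocol like OpenFlow rather than an API like this.

```python
# Hypothetical sketch: an edge device with only the intelligence it needs.
# None of these classes correspond to a real product or library.

class Controller:
    """Stand-in for a central SDN controller that hands out forwarding rules."""
    def rule_for(self, dst):
        # A real controller would compute a path; here we just fabricate
        # a port number from the destination address for illustration.
        return {"dst": dst, "out_port": hash(dst) % 48}

class EdgeSwitch:
    """An edge device: dumb by default, smart only where it matters."""
    def __init__(self, controller):
        self.controller = controller
        self.flow_table = {}  # dst -> cached forwarding rule

    def forward(self, packet):
        dst = packet["dst"]
        rule = self.flow_table.get(dst)
        if rule is None:
            # Flow-table miss: ask the controller once, cache the answer,
            # and handle all future packets for this flow at the edge.
            rule = self.controller.rule_for(dst)
            self.flow_table[dst] = rule
        return rule["out_port"]

switch = EdgeSwitch(Controller())
print(switch.forward({"dst": "10.0.0.7"}))  # first packet: controller consulted
print(switch.forward({"dst": "10.0.0.7"}))  # later packets: handled at the edge
```

The point of the design is that the “idiot” at the edge needs no global view; it borrows intelligence from the controller exactly once per flow and then runs on its own.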
The same basic idea needs to occur on the compute side.
Compute, or let’s say teraflops, is a commodity. What is not a commodity is the management of that compute node. Because compute is so inexpensive, and because you now need more information about the health of a compute node than you ever needed before, every compute node needs intelligence outside the production band, that is, out-of-band management. In effect, every compute node needs a compute node.
If I’m to dynamically expand or reduce resources, move a resource to its most efficient host, or maintain my guaranteed service level, I absolutely must be able to manage the compute. I need to be able to send a request to a compute node’s management node and have it tell me whether it can meet that request or not. Does it have the processing power, memory footprint, or network bandwidth I am requesting? What about its health? Of course, I want to use all of my resources in the most efficient way possible. Which servers can I turn off? Where is the most power-efficient location for my application? Where do I get the best performance? Here’s an example:
Vendor “A” just had a recall on their DRAMs. Which servers have them, and when is their scheduled downtime? Have I applied the needed patches for that application? These questions, and a thousand more, are the questions that need to be asked and answered in a dynamic environment. To make that environment work, though, it must be a computer querying a computer, not an admin querying a database. In other words, it must be fully automated, and the application needs to understand its own requirements.
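Here is a minimal sketch, in Python, of what that fully automated query might look like, assuming a hypothetical management-node API. None of these names (ManagementNode, Request, can_host) come from a real HP product; they are illustrative stand-ins for the out-of-band queries described above.

```python
# Hypothetical sketch of an application querying each node's management node.
# All classes and fields here are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Request:
    cores: int              # processing power the application needs
    memory_gb: int          # memory footprint
    bandwidth_gbps: float   # network bandwidth

@dataclass
class ManagementNode:
    """Out-of-band intelligence attached to one compute node."""
    free_cores: int
    free_memory_gb: int
    free_bandwidth_gbps: float
    healthy: bool = True    # e.g. no recalled DRAM, patches applied

    def can_host(self, req: Request) -> bool:
        """Answer the question: can this node meet the request right now?"""
        return (self.healthy
                and self.free_cores >= req.cores
                and self.free_memory_gb >= req.memory_gb
                and self.free_bandwidth_gbps >= req.bandwidth_gbps)

# The application (a computer, not an admin) polls every management node
# and picks a suitable host automatically.
nodes = [
    ManagementNode(free_cores=2,  free_memory_gb=8,   free_bandwidth_gbps=1.0),
    ManagementNode(free_cores=16, free_memory_gb=64,  free_bandwidth_gbps=10.0),
    ManagementNode(free_cores=32, free_memory_gb=128, free_bandwidth_gbps=10.0,
                   healthy=False),  # flagged: vendor DRAM recall
]
req = Request(cores=8, memory_gb=32, bandwidth_gbps=5.0)
candidates = [n for n in nodes if n.can_host(req)]
print(f"{len(candidates)} node(s) can meet the request")
```

Notice that the unhealthy node, the one with the recalled DRAM, is excluded without any human looking at a database; that is the whole point of giving every compute node its own compute node.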
So, just as in the network scenario, in the compute world we must also move the intelligence close to the application, to the edge of compute. We have to have a shared state of management that unites the villages of compute, network, and storage.
To learn more about improvements HP brings to ‘smart’ computing as well as a host of additional breakthroughs that can help you transform your data center economics, read: Four ways to achieve a self-sufficient infrastructure.