Cloud Source Blog
In This HP Cloud Source Blog, HP Expert, Christian Verstraete will examine cloud computing challenges, discuss practical approaches to cloud computing and suggest realistic solutions.

Complement The Machine with a distributed mesh cloud: welcome to the future

When the HP 9100A was introduced in 1968 it was called a “calculator.” As Bill Hewlett explained, “If we had called it a computer, it would have been rejected by our customers' computer gurus, because it didn't look like an IBM. We therefore decided to call it a calculator, and all such nonsense disappeared.”

 

The Machine, introduced last week at HP Discover, bears an unusual name for similar reasons. It’s not a server in the usual sense of the term. It’s actually something really new, and as such, deserves a new name. As Martin Fink pointed out during his keynote presentation, HPLabs does not have a Marketing Department, hence the real, down-to-earth name.

 

Listen to Martin explaining it in his own words.

 

In my last blog entry, I described “The Machine,” and explained why it is so revolutionary. But where things really start to become interesting is when you look at how The Machine can transform the Cloud, and address the future needs we have in the Big Data space. If you’re following this space, you will know there is debate as to where Internet of Things data should be processed, and what the implications of that are for the Internet. A couple of weeks ago, I actually wrote a blog entry titled, “Cloud, Fog, is the future of IT at the edge or the center?” in which I concluded that one size does not fit all. Analyzing the end-to-end architecture will be key in understanding what choice to make.

 

The Machine gives all of us a lot of flexibility in designing how to go after such an issue. Actually, combined with another research effort currently under way at HPLabs, called Distributed Mesh Cloud (DMC) (or Distributed Mesh Computing, depending upon whom you ask), The Machine allows us to choose the best approach for each problem we try to address.

 

A mesh network

A mesh network is a network topology in which each node (called a mesh node) relays data for the network. All nodes cooperate in the distribution of data across the network. Think about this: Although cloud refers to a fluffy, ever-changing object, Cloud Computing relies on physical datacenters in defined locations, interlinked with each other. And Clouds are subdivided into geographical regions and availability zones. Sure, you may not be fully aware of that, but if you happen to have two servers in different zones or geographies of a public Cloud, you will clearly see the difference on your bill.

 

What if we use the concept of a mesh network topology for the Cloud? Each Machine out there participates in the Cloud, and operates one or several functions within that mesh network. Machines on the edge collect raw data, store it and perform a first level of translation and/or analysis. In doing so, they may become aggregators, which deliver summary data to another mesh node closer to the center. That node can, in turn, aggregate summary data from multiple first-line Machines, coordinate them, and make yet more summary information available to the next Machine. And we can go on.
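To make the idea concrete, here is a minimal sketch of that hierarchy in Python. All class and field names are hypothetical, purely for illustration: edge nodes keep their raw readings locally and pass only compact summaries toward a more central aggregator.

```python
# Sketch of hierarchical edge aggregation (hypothetical names):
# edge nodes hold the raw data; only summaries travel toward the center.
from statistics import mean

class EdgeNode:
    """Collects raw readings and keeps them locally; exposes only a summary."""
    def __init__(self, name):
        self.name = name
        self.raw = []          # full readings never leave the edge

    def collect(self, reading):
        self.raw.append(reading)

    def summary(self):
        # First level of analysis: reduce raw data to a few statistics.
        return {"node": self.name, "count": len(self.raw),
                "mean": mean(self.raw), "max": max(self.raw)}

class Aggregator:
    """A more central mesh node that merges summaries from first-line nodes."""
    def __init__(self, children):
        self.children = children

    def summary(self):
        child_summaries = [c.summary() for c in self.children]
        return {"count": sum(s["count"] for s in child_summaries),
                "max": max(s["max"] for s in child_summaries),
                "children": child_summaries}

edge_a, edge_b = EdgeNode("a"), EdgeNode("b")
for r in (10, 12, 14):
    edge_a.collect(r)
for r in (20, 22):
    edge_b.collect(r)

center = Aggregator([edge_a, edge_b])
print(center.summary()["count"])   # 5 readings summarized, none shipped raw
```

An aggregator can itself sit below another aggregator, so the same two classes chain into as many tiers as the topology needs.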

 

When I need to analyze some data, I’ll typically start with a holistic view. I’ll be interested in a high-level view of what’s happening. I’ll soon identify what I’m actually interested in, and zoom into that information. The different Machines that contain the summary and detailed information will provide me with the data required for me to perform further analysis, or may actually perform that analysis on my behalf, reducing the amount of internet traffic required. And this will happen completely transparently to me.

 

Advanced Storage Technologies

Do we have examples of how such things could work? Actually yes! The latest storage technologies, such as 3PAR, include active mesh architectures, and autonomic computing capabilities. This means that multiple tiers of storage can be linked together to keep your data in the most efficient way, transparently to you. It also means that if something goes wrong somewhere, the system can heal itself. This is what you want in such a mesh Cloud, isn’t it? If one node goes down, you don’t even want to be made aware of it.

 

Wireless Mesh Networks

Now, let’s combine this with a third element, wireless mesh networks. The infrastructure is, in effect, a network of routers with no cables between them. It’s built of peer radio devices that don’t need to be cabled to a wired port like traditional WLAN access points. Mesh infrastructure carries data over large distances by splitting the distance into a series of short hops. Intermediate nodes not only boost the signal, but cooperatively pass data from point A to point B by making forwarding decisions based on their knowledge of the network, i.e. intermediate nodes perform routing.

Such an architecture may, with careful design, provide high bandwidth, spectral efficiency, and economic advantage over the coverage area. So, information hops from one node to the other. If a node disappears, the others take over.
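That hop-by-hop forwarding can be sketched in a few lines of Python. The five-node topology below is hypothetical; a breadth-first search stands in for the routing logic, and removing a node shows the remaining nodes taking over, as described above.

```python
# Sketch of multi-hop mesh routing (hypothetical topology):
# each node forwards toward the destination; failed nodes are routed around.
from collections import deque

def route(links, src, dst, down=()):
    """Breadth-first search for a hop-by-hop path, skipping failed nodes."""
    alive = {n for pair in links for n in pair} - set(down)
    nbrs = {n: set() for n in alive}
    for a, b in links:
        if a in alive and b in alive:
            nbrs[a].add(b)
            nbrs[b].add(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in nbrs[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None   # destination unreachable

links = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "E"), ("E", "D")]
print(route(links, "A", "D"))               # shortest path: ['A', 'E', 'D']
print(route(links, "A", "D", down=("E",)))  # E fails, reroute: ['A', 'B', 'C', 'D']
```

A real mesh protocol would exchange link-state or distance information between nodes rather than compute paths centrally, but the effect is the same: traffic hops node to node, and the loss of one node only breaks connectivity when no alternate path remains.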

 

I remember work done at HPLabs years ago, where they looked at this technology to warn cars of a problem on the road. The information would hop from one car to the next car down a line. Each car would take action and convey the information to the next car. If there was a big gap, the information would no longer be conveyed, but that didn’t matter, because the next car was far away.

 

Let’s now pull all this together and come up with a couple of scenarios for how this distributed mesh cloud could work.

 

Mayday, mayday, is something happening?

A Boeing 737 flying from coast to coast keeps around 31KB of data that can be analyzed at the next airport. It actually generates gigabytes of data during the flight, but only keeps a small amount. The reasoning here is that the other data is deemed normal. Only known issues are kept. The problem is, in many situations, unpredictable things happen. But because the information leading up to them was not recorded, we have difficulty understanding what happened. That’s why you have flight recorders and black boxes. Let me now take an example using The Machine.

 

Let’s assume a plane, for whatever reason, leaves its planned route. The Machine, monitoring all plane information, quickly spots the unplanned departure from the route. It continues to store its data, but because of the unusual situation, it starts communicating past and present information to a control center. It does that by using satellite internet, but also through a mesh network, storing data onboard other planes that are in the area and with which The Machine is already in contact.

 

All that information is now available at the control center, where the team can properly assess the situation. Using the multitude of communication channels available, the information is accessible in real time. If for any reason something really bad happens to the plane, all data is available and the analysis can immediately start. There is no need to try to find the black boxes.

 

My safe line in my T-shirt

It happens—unfortunately too often—that elderly people pass away because their neighbors and family do not realize something has happened. So, how could we keep them safe? Well, the nature of The Machine (its low power consumption and its storage capability) would allow us to integrate data processing into a T-shirt or other piece of clothing. Sensors could monitor vital signs constantly, share them with doctors, and identify if something goes wrong. We already have glimpses of that with current fitness trackers. However, fitness trackers need to communicate via Bluetooth and a mobile phone. Using the mesh cloud approach, communication could be guaranteed nearly everywhere, and lives could be saved.

 

And I’m not even talking about the opportunity for doctors to monitor vital statistics, and compare them with other patients anywhere in the world to allow faster diagnosis and treatment improvements.

 

Addressing the data explosion

These are just two examples. I have a bunch of others. I hope you understand why I’m getting so excited. The Machine and its environment allow us to address many of the issues that are coming down the line with the explosion of data. Such tools may allow us to understand the big picture and not be buried in petabytes of data that don’t seem to make sense. Our lives will continue to evolve. Sure, many aspects remain to be discussed, not the least of which are privacy and security. But we’re headed for a bright and interesting future, I believe.

 

What do you think? How would you use tools such as The Machine and distributed mesh computing?

 

I’m looking forward to reading your suggestions.

 
