The Next Big Thing
Posts about next-generation technologies and their effect on business.

A display’s value is in the eye of the user

 

There was a story a few weeks back that caught my eye, but I didn’t have time to blog about it. It was about 3D printing contact lenses with built-in video. The concept of having sensors and displays directly on the eye is not new, but this is the first time I’ve seen discussion of them being 3D printed.

 

This particular effort is funded by the US Air Force and could be used to display information or to sense the “state of the wearer's retina and possibly monitor pilot health without invasive implants.”

 

I can easily see these high-impact (and high-cost) applications becoming more available over time and being integrated into roles where timely access to information can make a big difference. There will need to be significant work on user interface design, though, since an on-eye display will always be in the user’s field of vision.

 

The sensing application would be useful in situations where immediate action could be the difference between life and death (for example, diabetes intervention). I have a hard time imagining its use in everyday service interactions, but I could easily be mistaken. It does make me wonder about the possibilities when integrated with cognitive computing capabilities.

 

What should be the goal of cognitive computing?

Some organizations think that cognitive computing is about getting better answers more quickly, typically using English to form the questions. There is no doubt that getting the answer to a question in natural language has tremendous appeal, but is that really enough? In a world of data abundance, it can be difficult to know the right question to ask.

 

Unfortunately, it is often the questions we never knew to ask that turn into the big potential gains or losses. One of my co-workers from HP Labs mentioned that:

“It is interesting to note that change detection is a core competency (and survival property) of the visual cortex; it responds quickly because it constantly compares visual input with memories of what the world should look like. Thus, as we build next-generation systems based on large amounts of rapidly changing data, you want the data to self-organize, recognize similarities, detect changes, and help you assess anomalies so that these may be investigated.”
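
To make that idea concrete, here is a minimal sketch (my own illustration, not anything from the HP Labs work) of the “compare input with memory” pattern in Python. An exponentially weighted baseline plays the role of the memory of what the world should look like, and readings that stray too far from it are surfaced as changes worth investigating. The alpha, threshold, and warmup parameters are arbitrary choices for the example.

```python
# A minimal sketch (illustrative only, not the HP Labs implementation) of
# change detection by comparing input against a learned "memory".
def detect_changes(stream, alpha=0.1, threshold=3.0, warmup=4):
    """Yield (index, value) for readings that deviate sharply from the baseline."""
    baseline = None
    variance = 0.0
    for i, x in enumerate(stream):
        if baseline is None:
            baseline = x  # first reading seeds the memory
            continue
        deviation = x - baseline
        if i >= warmup and abs(deviation) > threshold * max(variance ** 0.5, 1e-9):
            yield i, x  # change detected: surface it for investigation
        # update the memory of what the world should look like
        variance = (1 - alpha) * variance + alpha * deviation ** 2
        baseline = (1 - alpha) * baseline + alpha * x

readings = [10.0, 10.2, 9.9, 10.1, 10.0, 14.5, 10.1, 9.8]
print(list(detect_changes(readings)))  # -> [(5, 14.5)]
```

Run on the sample stream, it flags only the reading that breaks the established pattern – the anomaly a person would want to investigate, rather than an answer to a question anyone thought to ask.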

 

In addition to systems, we need services that enable the decision maker (human or machine) to react, respond, and investigate based on the context of the information available, so that the entire ecosystem learns and adapts. Perhaps the goal should be better questions rather than just better answers, along with more effective ways to display those questions and their answers.

 

When I talk with leaders about where the future of services is headed, this is where my thoughts tend to go, and it is going to take different techniques than organizations deploy today.

HP announces a blending of the physical and the virtual

 

Hopefully, anyone interested in 3D printing saw the two announcements by HP yesterday. They focused on a Blended Reality vision that will change how we interact with technology and the world around us.

 

The first announcement should clear up HP’s long-rumored entry into 3D printing. The multi-jet fusion approach of ‘page-wide’ printing is significantly faster than traditional extrusion-based 3D printing, and it is also much finer-grained and more accurate. I handled some of these prototype parts a while back and found them very exciting compared to any of the 3D printing efforts I’ve done myself. The potential to manipulate color, finish, and flexibility within the same part was something I found unique. HP has had a very strong materials science foundation ever since its commercial introduction of inkjet printing in the early 80s, and this approach really takes advantage of that experience.

 

The other shoe that dropped was Sprout. This link has numerous movies showing how others have used this technology in their work. I’ve seen somewhat similar techniques applied in research projects for a number of years now, but not a commercial solution that you can ‘just buy’ that integrates touch, 2D and 3D scanning, and multiple displays in such a seamless and functional way. I talked with people about this effort about a year ago, so it is great to see it become a reality – and I’m anxious to get my hands on its platform. There are some interesting perspectives out there: one view that if your work involves your hands, this may be the computer for you, and another that it is a solution looking for a problem. I can easily see its use.

 

One of the things I find most exciting about these products is that they enable a different kind of creative environment, one that functions as a springboard for greater creativity. This sort of enabling environment will be an ever-increasing part of new business value generation in the future.

 

The desktop expansion explosion

 

When I first started working with WYSIWYG GUI displays, I had a Macintosh SE with a 512x342 display.

 

While programming on the Macintosh back in 1990, I had a black-and-white monitor and a color monitor, both 640x480. One was for debugging and one for the display. I thought I was living life large.

 

Yesterday I hooked up a 28” display running 3840x2160, sitting above my laptop’s 1366x768 display – that’s quite a trend.

 

Just looking at commercially available display sizes over the last three decades, the growth appears exponential. It makes me wonder where display size will eventually end up. How many desktop pixels are enough?
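
As a rough check of that claim, here is the arithmetic using the displays mentioned above (the years are my estimates, not from the posts themselves):

```python
# Back-of-the-envelope growth check; years are estimates.
displays = {
    1987: 512 * 342,    # Macintosh SE
    1990: 640 * 480,    # VGA-class monitor
    2014: 3840 * 2160,  # 28" UHD display
}

years = sorted(displays)
first, last = years[0], years[-1]
rate = (displays[last] / displays[first]) ** (1 / (last - first)) - 1
for y in years:
    print(f"{y}: {displays[y]:>9,} pixels")
print(f"Implied growth: ~{rate:.0%} per year over {last - first} years")
```

That works out to roughly a 47-fold increase in pixels, or about 15% compound growth per year – exponential indeed.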

 

There is no doubt in my mind that (depending on what you do) the more desktop space available, the higher your productivity, since it reduces task-switching costs.

 

New vision for computing

IEEE Spectrum had an article on moving display technology closer to the eye. Whether it is virtual reality goggles or contact lens-enabled displays, it appears great effort is being applied to move displays closer than ever. The demonstration of a combined contact/glasses-based display approach shows the level of innovation underway – not that I think that particular approach will be viable in the marketplace.

 

If you combine that with speech or gesture recognition, it leads to a technological approach that could be safer and more ubiquitous than what’s been done before. Naturally, there are some people who think that these displays are risky in certain circumstances.

 

Even as access to networking and computing permeates more of our business and personal lives, the display has been one dimension holding back adoption in many domains. I can easily see a mechanic, or others whose hands are typically busy doing work, using techniques like this to reference manuals… and facilitate decisions. Who knows – if these techniques can be applied in a transparent and effective way, they could lead to the one display that is used by all the devices around us.

 

It makes me ask how applications would change if this were available. What new business solutions are possible?

About the Author(s)
  • Steve Simske is an HP Fellow and Director in the Printing and Content Delivery Lab in Hewlett-Packard Labs, and is the Director and Chief Technologist for the HP Labs Security Printing and Imaging program.
The opinions expressed above are the personal opinions of the authors, not of HP. By using this site, you accept the Terms of Use and Rules of Participation.