The Next Big Thing
Posts about next generation technologies and their effect on business.

Can an agile approach make a client interact more?

I was recently talking with a team of people who are supporting a client that seems reluctant to dedicate the time necessary to ensure that the requirements are defined properly and that the test cases actually test how the system will be used. When interactions did occur, it didn’t seem like the focus was on the value the system can deliver; instead it was on minutiae related to the design...

 

This work has been going on for a while, and although work is being done and progress is being made, there is a gnawing concern that the solution may never be accepted.

 

Rather than allowing this to continue, the team is now proposing a more agile approach. This is going to require significantly more involvement from the client and move testing and requirements validation from something that is done at the end of release development to something that is done every day.

 

I think anyone who has worked in the development space will likely feel that this arrangement is better for reducing rework… but is it really going to change the behavior of those involved? If the agile shift raises the flag about a lack of customer involvement earlier in the engagement, that will be helpful – but the behavior of the development team (and its leadership) will need to change as well. If they didn’t address the interaction problem before, having the same concern raised more often may not make a difference… What do you think?

Start thinking about HTTP 2.0 early

One of the changes on the horizon that I’ve not paid too much attention to, but that will impact the services space, is HTTP 2.0. Most organizations today are using HTTP 1.1, and since that standard dates back to 1999, it is getting rather long in the tooth.

 

Some of the areas this update tries to address are performance and security. There are efforts underway today (like Google’s SPDY) to improve on the existing HTTP, and these have only recently been supported by some of the mainstream browsers. The foundation they defined is being used, though, to move the standard forward.

 

If the standards effort progresses as defined, HTTP 2.0 will be faster, safer, and more efficient than HTTP 1.1. Most of these changes will take place behind the scenes: users will simply upgrade their browsers to get HTTP 2.0 capabilities, then wait for the servers to provide the improved functionality.

For companies, though, capabilities like server push – which enables HTTP servers to send multiple responses (in parallel) for a single client request – make significant improvements possible.

 

 

“An average page requires dozens of additional assets, such as JavaScript, CSS, and images, and references to all of these assets are embedded in the very HTML that the server is producing!”

 

So for HTTP interfaces, instead of waiting for the client to discover references to needed resources, the server could send all of them as soon as it knows they’ll be needed. Server push can eliminate entire round trips of unnecessary network latency. With user interface responsiveness being an important satisfaction criterion for users, this could be a differentiator for a service, especially if it is tuned to the network bandwidth available.

 

For businesses, there is a bit of work to do, since porting environments between HTTP servers requires a great deal of testing, even if you have not yet architected a solution around the newer functionality. Microsoft and others have put out new server software so organizations can get their feet wet now, while the standards are still solidifying.

Who is required for effective knowledge sharing activities… and other lessons learned

As I mentioned last week, I presented on the importance of understanding attention in business at the New Horizons Forum, part of the American Institute of Aeronautics and Astronautics (AIAA) conference. I put the Attention Scarcity in a Connected World presentation on SlideShare, if anyone is interested.

 

The nice thing about going to a conference outside your normal area of concentration is that it allows you to look at things differently. One thing that caught me a bit by surprise at the conference was the degree of overlap with the concepts presented in the New Horizons keynote titled "Big Bets in USAF Research and Development" by Maj. Gen. William Neil McCasland, Commander, Air Force Research Laboratory.

 

Much of his presentation was about the impact autonomous and semi-autonomous systems are having on the military and the shifts that need to take place in the implementation, validation, and testing of these systems, as well as in the processes that surround them. Granted, he was coming at the problem from a different perspective and was focused much more on the automation side than on the interaction between humans and automation, but he touched on many of the same points as my brief presentation.

 

These overlaps drove home the “perfect storm” that is taking place in automation, regardless of the industry. Many people realize that the tools are out there, but they have different perspectives on what the tools can do. These differences are actually what innovators need to look for, since in many cases they can complement an approach. Even when they are not complementary, the lessons learned may still be applicable.

 

The panel I was part of at the conference was moderated by Rupak Biswas, NASA Advanced Supercomputing division chief. After our panel, we had a long discussion about the shifting role and capabilities of automation, behavior modification, and the role of IT organizations within the enterprise.

 

One of the areas we discussed was the use and deployment of gamification within an organization, specifically related to knowledge management and the sharing of expertise. Although the IT organization definitely needs to be involved in the integration of information and its flow through knowledge management and collaboration tools, the business side needs to be responsible for the goals, metrics, rewards, and behavior changes that are required. They are the ones who will judge the success of the project.

 

Collaboration between these two groups will be required, since neither can accomplish the task effectively on its own. This may seem obvious, but some organizations view the IT team as a cost-conscious support organization and believe that core business process tasks need to be funded and attacked separately from IT efforts. That isolationist view may be a luxury that is too expensive to maintain.

Software Engineer a protected title in Texas – is this the start of something new?

In Texas, the role of an engineer is actually licensed – by law. That licensing is now extending to software engineering as well, which means the legal use of terms like “engineer” and “engineered” is protected.

 

The new NCEES Software Engineering PE exam is ready and will be offered for the first time in April 2013. Everyone practicing software engineering in Texas is encouraged to register for the exam.

There has always been some concern about the cavalier way the term engineer is used – especially in the area of software. The borders for these terms are more clear-cut in some parts of the world than in others. Having clear definitions should have a wide impact, especially in this world of XaaS and high job turnover.

Walking through a POD

I had a few minutes to kill and thought, “Is there anything I should blog about?” Then it suddenly hit me that I had walked through one of the HP compute PODs yesterday and didn’t even mention it on here.

 

The concept of compute PODs has been around for a while, but for HP they are definitely entering the mainstream. Performance optimized data centers (PODs) have some real benefits compared to building a new bricks-and-mortar data center. It sounds like HP has quite a few under construction at any one time and is cranking them out on a regular basis to locations all over the world.

 

There are a number of different sizes of containerized data centers, as well as a nearly infinite number of configurations. I walked through the smallest and most modern variation. It had been pulled into the parking lot and hooked up to an external generator; I am not sure what they did for the network connection.

 

PODs have a few major advantages:

 

1. Rapid optimized growth - New data center space that can be up and running in a fraction of the time of traditional data center build-outs

  • The HP POD can ship in as little as 6 weeks
  • Factory Express provides services for complete rack integration, testing and installation in the POD
  • The HP POD can ship directly to a customer site already fully loaded with integrated and tested racks
  • Add PODs as you need additional data center capacity saving up-front capital expenditures
  • Rapid deployment and commissioning at the customer site

2. High density - The HP POD is optimized to efficiently support high density IT deployments

  • Provides up to 700kW of power capacity
  • Delivers the equivalent of 5,000 sq ft of data center space in a 40ft/12m HP POD
  • Average 27kW per 50U rack (max rack capacity 34kW)
  • Designed to support up to five c7000 Blade Enclosures per rack

3. Flexible design - The flexible HP POD design supports a variety of different applications

  • Available in 20ft/6m and 40ft/12m sizes
  • Industry standard racks support HP and other brands of IT designed for front-to-back airflow
  • Dual path power busway design allows for either redundant or non-redundant configurations
  • Wide variety of options to allow customers to customize features
  • Weatherized design allows for installation either outside or inside a shelter

4. Energy efficiency - HP POD is designed for state of the art data center energy efficiency

  • Power Usage Effectiveness (PUE) ratio as low as 1.25
  • Allows for warmer cold aisle temperatures due to closely-coupled cooling design
  • No ducting or under floor routing required for cold air circulation
  • Optimized for high efficiency delivering 3-phase power through the HP POD and 240V within racks
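To put the PUE figure in perspective, here is a quick back-of-envelope sketch. PUE is total facility power divided by IT equipment power, so a given IT load can be scaled up to total draw directly. The IT load and the legacy-facility PUE of 2.0 below are assumptions for illustration, not HP figures:

```go
package main

import "fmt"

// PUE (Power Usage Effectiveness) = total facility power / IT equipment
// power, so the total draw for a given IT load is simply itLoadKW * pue.
func totalPowerKW(itLoadKW, pue float64) float64 {
	return itLoadKW * pue
}

func main() {
	itLoad := 400.0                     // kW of IT equipment -- hypothetical load
	pod := totalPowerKW(itLoad, 1.25)   // POD's quoted PUE
	legacy := totalPowerKW(itLoad, 2.0) // assumed legacy-facility PUE
	fmt.Printf("POD: %.0f kW, legacy: %.0f kW, overhead saved: %.0f kW\n",
		pod, legacy, legacy-pod)
}
```

Under those assumptions, the same 400kW of IT gear draws 500kW total in the POD versus 800kW in the legacy facility, with the difference going to cooling and other overhead.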
About the Author(s)
  • Steve Simske is an HP Fellow and Director in the Printing and Content Delivery Lab in Hewlett-Packard Labs, and is the Director and Chief Technologist for the HP Labs Security Printing and Imaging program.