The Next Big Thing
Posts about next generation technologies and their effect on business.

The need for automated environmental validation in IT

I was recently reading the post When disaster strikes: How IT process automation helps you recover fast, and it got me thinking about the need for automated environmental validation. Recovering fast may not be good enough if the recovery destination environment has changed.


In the software space, you can use JUnit or NUnit to codify the limits of the code and verify that it works, and breaks, as defined. This can be a very useful component of a test-first unit testing approach.
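As a rough illustration, here is a minimal JUnit 4 sketch of what I mean by codifying limits; the DiscountCalculator class is hypothetical and exists only to show the pattern of asserting both the expected result and the expected failure:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Minimal sketch of codifying a class's limits with JUnit 4.
// DiscountCalculator is a hypothetical class used only for illustration.
public class DiscountCalculatorTest {

    @Test
    public void appliesDiscountWithinTheAllowedRange() {
        DiscountCalculator calc = new DiscountCalculator();
        // "Works as defined": 10% off a 100.00 order should yield 90.00.
        assertEquals(90.00, calc.apply(100.00, 0.10), 0.001);
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsDiscountsAboveOneHundredPercent() {
        // "Breaks as defined": anything over 100% must be refused.
        new DiscountCalculator().apply(100.00, 1.50);
    }
}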


I was wondering if infrastructure automation efforts should include a similar capability, so that we can automatically test an environment to ensure its characteristics are up to snuff before and after we make a change. These tests could be run periodically, or as part of a promotion-to-production process.
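A rough sketch of what I have in mind, written in the same JUnit style; the endpoint, thresholds, and variable name below are made-up placeholders, not a recommendation for any particular product:

import org.junit.Test;
import static org.junit.Assert.assertTrue;

import java.io.File;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch of environment validation expressed as JUnit tests.
// The endpoint, thresholds, and variable names are placeholders --
// substitute the characteristics that matter in your own environment.
public class EnvironmentValidationTest {

    @Test
    public void databaseEndpointIsReachable() throws Exception {
        // Fail the check if the expected endpoint is not listening.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("db.example.internal", 5432), 2000);
        }
    }

    @Test
    public void sufficientDiskSpaceRemains() {
        long freeBytes = new File("/").getUsableSpace();
        assertTrue("Less than 10 GB free on the root volume",
                freeBytes > 10L * 1024 * 1024 * 1024);
    }

    @Test
    public void requiredConfigurationVariableIsSet() {
        assertTrue("APP_CONFIG_PATH is not set",
                System.getenv("APP_CONFIG_PATH") != null);
    }
}

The same suite could run on a schedule as a drift check, or be wired into the promotion-to-production process as a gate.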


Automating this validation would remove the human element from this tedious step. It seems like this would be a useful and possibly necessary step for cloud deployments, since those environments are dynamic and beyond the scope (and understanding) of the person who wrote the original programs. Maybe this is commonly done, but I've not talked to many who attack this issue proactively.

Agile development - is it right for you?

The other day I was talking with a team about Agile development adoption. One of the things they asked was “What’s different?” and “Is it right for us?” I sat down and jotted down this list of thoughts that came to mind. No, my response isn’t the purist perspective covering all the elements of the Agile Manifesto, but some might find it useful:


Agile focus:

  • Small cycles with fast turnaround - iterative releases w/improved time to value

  • High level of interaction with the end user/owner

  • Progress centric, where business value is measured by tested, demonstrable deliverables

  • Transparency

  • Fail early/fail often – “defects” are an opportunity for a future release

  • Priority focused, with quicker realization of business value

It requires:

  • Direct access by the agile team to business users

  • Executive sponsorship, support and expectations

  • Customer commitment/involvement – if the customer isn’t there, the work can’t get done.

  • Automated testing (you’ll be testing more often so plan to automate)

  • Projects started with the awareness and assumption that you don’t know everything about the end result

  • Defined areas that cannot be compromised (e.g., security) and that need to be baked in along the way

  • Agile readiness assessment before each new effort – all parties need to be ready

  • Organizational change management (there is behavior change of individuals, leaders, relationships)

  • Priority-driven development

What remains the same:

  • Scope (scope will change, but if it changes too much, it becomes a new project/sprint)

  • Requirements (they still need to be documented and tested, even though they are not all there at the start)

  • Leadership and project owners (they accept or reject results)

  • Amount of potential work to be done (there is always going to be more to do, focus on value)

  • Budgets – projects still need to live within financial constraints

  • Performance measurement (what and how you measure will be different)

  • Documentation (the documentation is richer, since it should be completed all along the way)

  • Architecture (everything still needs to work together)

  • Change control (prepare for continuous change)

What needs to change:

  • Incentives, compensation for the members

  • Governance processes and approaches to release and acceptance

  • Interfaces to other groups (they need to be more flexible)

  • Coaches and agile experience (for the ‘customer’ as well as the developers)

  • Continuous engagement level of business users

  • A view to develop and implement what is needed, nothing more

Although Agile has been around for well over a decade, a solid foundation for discussion still needs to be agreed upon.

Can an agile approach make a client interact more?

I was recently talking with a team of people who are supporting a client that seems reluctant to dedicate the time necessary to ensure that the requirements are defined properly and that the test cases actually test how the system will be used. When interactions did occur, it didn’t seem like the focus was on the value the system could deliver; instead it was on minutiae related to the design...


This work has been going on for a while, and although work is being done and progress is being made, there is a gnawing concern that the solution may never be accepted.


Rather than allowing this to continue, the team is now proposing a more agile approach. This is going to require significantly more involvement from the client and move testing and requirements validation from something that is done at the end of release development to something that is done every day.


I think anyone who has worked in the development space will likely feel that this arrangement is better for reducing rework… but is it really going to change the behavior of those involved? If the agile shift just raises a flag about a lack of customer involvement earlier in the interaction, that will be helpful – but the behavior of the development team (and its leadership) will need to change. If they didn’t address the interaction before, having the same concern raised more often may not make a difference… What do you think?

Start thinking about HTTP 2.0 early

One of the changes on the horizon that I’ve not paid too much attention to, but that will impact the services space, is HTTP 2.0. Most organizations today are using HTTP 1.1, and since that dates back to 1999, it is getting rather long in the tooth.


Some of the areas this update is trying to address are performance and security. There are efforts underway today (like SPDY) to improve the existing HTTP, and these are only recently being supported by some of the mainstream browsers. The foundation they defined is being used, though, to move the standard forward.


If the standards effort progresses as defined, HTTP 2.0 will be faster, safer, and more efficient than HTTP 1.1. Most of these changes will take place behind the scenes: users will simply upgrade their browsers to get HTTP 2.0 capabilities, and then wait for servers to provide the improved functionality.

For companies, though, capabilities like server push -- which enables HTTP servers to send multiple responses (in parallel) for a single client request -- make significant improvements possible.


“An average page requires dozens of additional assets, such as JavaScript, CSS, and images, and references to all of these assets are embedded in the very HTML that the server is producing!”


So for HTTP interfaces, instead of waiting for the client to discover references to needed resources, the server could send all of them immediately, as soon as it knows they’ll be needed. Server push can eliminate entire round trips of unnecessary network latency. With user interface responsiveness being an important satisfaction criterion for users, this could be a differentiator for a service, especially if it is tuned to the network bandwidth available.
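As a server-side sketch of the idea, here is what a push could look like using the PushBuilder API from the Java Servlet 4.0 specification; the asset paths are made up, and the details will vary by server, so treat it as illustrative rather than definitive:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.PushBuilder;

// Sketch of HTTP/2 server push from a Java servlet (Servlet 4.0 PushBuilder API).
// The asset paths are hypothetical; in practice you would push whatever CSS,
// JavaScript, and images the requested page is known to reference.
public class HomePageServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {

        PushBuilder pushBuilder = request.newPushBuilder();
        if (pushBuilder != null) { // null when the connection cannot accept pushes
            // Each push() queues a parallel response the client never had to request.
            pushBuilder.path("assets/site.css").push();
            pushBuilder.path("assets/site.js").push();
        }

        response.setContentType("text/html");
        response.getWriter().println(
                "<html><head>"
                + "<link rel=\"stylesheet\" href=\"assets/site.css\"/>"
                + "<script src=\"assets/site.js\"></script>"
                + "</head><body>Welcome</body></html>");
    }
}

The null check matters because push is only a hint: the client, or an intermediary, can decline it, in which case the page falls back to normal request/response behavior.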


For businesses, there is a bit of work to do, since porting environments between HTTP servers requires a great deal of testing, even if you have not yet architected a solution around the newer functionality. Microsoft and others have put out new server software so organizations can get their feet wet now, while the standards are still solidifying.

Who is required for effective knowledge sharing activities… and other lessons learned

As I mentioned last week, I presented on the importance of understanding Attention in business to the New Horizons Forum, part of the American Institute of Aeronautics and Astronautics (AIAA) conference. I put the Attention Scarcity in a Connected World presentation out on SlideShare, if anyone is interested.


The nice thing about going to a conference outside your normal area of concentration is that it allows you to look at things differently. One thing that caught me a bit by surprise at the conference was the degree of overlap with the concepts presented in the New Horizons’ keynote titled "Big Bets in USAF Research and Development" by Maj General William Neil McCasland, Commander, Air Force Research.


Much of his presentation was about the impact autonomous and semi-autonomous systems were having on the military and the shifts that need to take place in the implementation, validation, and testing of these systems, as well as in the processes that surround them. Granted, he was coming at the problem from a different perspective and was focused much more on the automation side than on the interaction between humans and automation, but he touched on many of the same points as my brief presentation.


These overlaps drove home the “perfect storm” that is taking place in automation, regardless of the industry. Many people realize that the tools are out there, and they have different perspectives on what the tools can do. These differences are actually what innovators need to look out for, since in many cases they can complement an approach. Even when they are not complementary, the lessons learned may still be applicable.


The panel I was part of at the conference was moderated by Rupak Biswas, NASA Advanced Supercomputing division chief. After our panel, we had a long discussion about the shifting role and capabilities of automation, behavior modification, and the place of IT organizations within the broader organization.


One of the areas we discussed was the use and deployment of gamification within an organization, specifically related to knowledge management and the sharing of expertise. Although the IT organization definitely needs to be involved in integrating information and its flow through knowledge management and collaboration tools, the business side needs to be responsible for the goals, metrics, rewards, and behavior changes that are required. They are the ones who will judge the success of the project.


Collaboration between these two groups will be required, since neither can accomplish the task effectively on its own. This may seem obvious, but some organizations view the IT team as a cost-conscious support organization and assume that core business-process tasks need to be funded and attacked separately from IT efforts; that isolationist view may be a luxury too expensive to maintain.
