Friday, March 03, 2006

When Software Attacks: Survival of the fittest

On the weekend I borrowed The Software Conspiracy from my local library. This was a book published back in 2000 that focused on the crisis in the shrink-wrap market: users are more interested in features than in quality, the software companies know it, and there is no incentive (financial, legal or otherwise) for the companies to change their ways.

I would argue that the pace of innovation has slowed down in the last few years, so some of those bugs are finally getting fixed. However, most computer users are now conditioned to accept poor quality software in both their work and home environments (I still get annoyed at having to reboot my set top box a few times a week; I'd be really upset if I had to reboot my cell phone).

In the book the author compares shrink-wrap software with software that powers the space shuttle:

...Unsure that a simple process could make the shuttle's computers reliable enough, NASA chose to just put redundant computer systems on the spacecraft... Instead of building a computer program and putting an identical version on five different computers, NASA gave the specifications for the shuttle software to five separate teams and had them write separate programs. Then, when the shuttle needs to do something like calculate orbital information, the separate and independent computer systems all compute the orbital information, and then vote. Hopefully all systems agree, but if they don't, the majority rules...

I don't know if NASA still uses such an approach (they use a single operating system for their probes), but could this approach be used for developing software for other markets?

Maybe. Let's have a look at how this might be done.

Why would you take this approach?

  • You absolutely need a particular application - failure is not an option. It might be needed to ensure future cash flows, it might be a legal requirement, etc.
  • Reliability is critical.

How could you manage the development?

If a "majority wins" approach is used then you will need at least 3 different implementations, and definitely an odd number of implementations. You could divide the effort amongst 3 different internal development groups (make sure they don't talk to each other!), or contract the development to 3 external companies, or a combination of both. Develop in-house the layer that arbitrates between the responses from the competing systems (this could be the UI layer of the application).
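The arbitration layer itself can be quite small. Here's a minimal Python sketch of a "majority wins" arbiter, assuming the competing implementations all expose the same interface (the function names and toy implementations below are my own, purely for illustration):

```python
from collections import Counter


def majority_vote(results):
    """Return the value a strict majority of implementations agree on.

    `results` is the list of outputs from the independent implementations.
    If no strict majority exists, we raise so the caller can fail safe
    rather than pass on an answer the systems disagree about.
    """
    value, count = Counter(results).most_common(1)[0]
    if count > len(results) // 2:
        return value
    raise RuntimeError("no majority - implementations disagree")


# Three hypothetical independent implementations of the same spec.
def impl_a(x):
    return x * 2


def impl_b(x):
    return x * 2


def impl_c(x):
    return x + x  # different code, same specified answer


answer = majority_vote([impl(21) for impl in (impl_a, impl_b, impl_c)])
print(answer)  # all three agree, so the vote passes
```

Note the odd number of voters matters here: with an even count, a 2-2 split gives no strict majority and the arbiter has to fail safe.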

What are the pros?

After a short while (production release, plus another development cycle) it should be clear which of the implementations is most suited to that environment. You can write off the failed implementations and continue to deliver to the business...

What are the cons?

  • Increased cost.
  • Overhead of running a number of similar systems in parallel.

Crazy? Maybe. The cost of such a development approach would be reason enough to kill off the idea in most markets. Still, if survival of the fittest is good enough for nature, isn't it good enough for corporates?
