Our automated test system has discovered bugs, often just before a big release, more times than I’d care to admit. It’s an invaluable tool and one recommended by most modern development methodologies. However, I’m not a fan of methodologies in general. Let me preface that with a little bit of personal history…
Back in 1994, while working for a bank in London, I was asked to come up with a specification standard that could be given to any old ‘monkey’ and would produce reliable results. Management were fed up with the disparities in code quality and productivity throughout the development teams.
In my naivety I thought it could be done. If I could take out the hard part of the problem, my thinking went, then surely the ‘easy’ part could be given to anyone with even half a brain cell. However, once I started drawing up tables and diagrams I realized that covering all the required detail would take more effort than just coding the whole thing myself. Frederick Brooks describes this distinction, between the ‘essential’ and the ‘accidental’ difficulties of the software process, in ‘No Silver Bullet’:
http://www.virtualschool.edu/mon/SoftwareEngineering/BrooksNoSilverBullet.html
Before I had to admit failure, the whole project was cancelled, but that experience has stayed with me ever since. It’s why I’m cynical of the many development methodologies that attempt to remove the ‘essential’ difficulties of software design and dumb down the art of software development.
There’s much common sense in modern methodologies, but each project and team needs to take a pragmatic approach: pick the best bits, the ones that remove as much of the ‘accidental’ difficulty as possible without stifling creativity and productivity. This Goldilocks philosophy (not too little, not too much, just enough) doesn’t always go down well because it’s intangible and changeable. Software developers intrinsically like to complete lists, and an unchecked box on a methodology checklist feels uncomfortable. The trouble is that software development is not a science that can be reduced to checklists but a highly creative art form. But that’s for another post…
Now, where was I?
Ah yes, an automated test system. Invaluable… but our system may not be quite what you’d imagine when people talk about automated testing. We don’t have test plans for every piece of code. We don’t write test cases before writing code. So what do we have?
Instead we maintain a library of search criteria that is applied against a library of test data. Search tests exist for all the common cases but, equally importantly, for the ‘edge’ cases that 90% of our customers will never hit. Whenever we add a new feature or find a bug, we add a new test to the library so that the feature or bug is checked on every future run.
When the test process is run for the first time, a ‘master’ result is produced and a developer checks that it is correct. From then on, every future run is compared against the master. The results are summarised in a single file, and we have an in-house utility to diagnose any discrepancies.
It works a treat and requires very few resources to maintain, but most importantly it tests the essence of the product: the product as the customer experiences it. It’s invaluable in our pursuit of product excellence and reliability.
Okay, I admit that not every application has such convenient and concrete input/output requirements. However, if you can find an unobtrusive way to add even the smallest amount of useful automated testing to your development cycle, I guarantee you’ll wonder why you never had it before. (Just remember to focus on the goal, not the process.)