23.8 million hits is a large number. Apparently there is no shortage of information and tools for improving software quality. Why is it, then, that we still have significant bugs in many software products (recent examples: here, here, and here)? Why are many customers still dissatisfied with the quality of software (indicators are mentioned here, here, and here)? Well, sometimes customers might be satisfied only because they have such low expectations, which in turn were caused by poor quality in the past.
So, how do we address this problem? I don't know the complete solution. I do know, though, that the longest journey starts with the first step. Here are two suggestions for what those first couple of steps might be for your organization. Introduce the following two rules:
- Continuous automated build. Make sure your software system, component, etc. is built multiple times a day. Automatically. With feedback to all developers working on the code base. And if the build doesn't pass (i.e., it is broken), lock down the source control system until the build passes again. This might be quite disruptive at the beginning. But imagine the behavioral changes this rule can cause. For one, you will notice that your people start to be much more adamant about the quality of their change sets. Who wants to be the reason for a broken build? It also triggers continuous thinking about how to improve the toolset, the code, the tests, etc. so that it becomes easier for every engineer to produce quality code. (A minimal sketch of such a build gate follows this list.)
- One automated test per bug. Let's assume you are not happy with the number of bugs in your software. What if, for every single bug your engineers fix, (at least) one automated test has to be added to the automated test suite? What if that automated test reproduces the bug, so that it fails before the fix and passes once the bug is fixed?
This rule makes most sense if each such test is added to a test suite that is run as part of the automated build (see rule 1). A sketch of such a regression test also follows below.
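To make rule 1 more concrete, here is a minimal sketch of what a build gate could look like. It assumes a hypothetical project that is built and tested via `make build` and `make test`, and it uses a simple lock file that a server-side hook could check to reject check-ins while the build is broken; substitute whatever build commands and lockdown mechanism your organization actually uses.

```python
#!/usr/bin/env python3
"""Minimal continuous-build gate (a sketch, not a finished CI system).

Assumes a hypothetical project driven by 'make build' and 'make test'.
The lock file is a marker that a server-side pre-receive hook could
check in order to reject pushes while the build is broken.
"""
import pathlib
import subprocess
import sys

LOCK_FILE = pathlib.Path("/var/ci/build.lock")  # hypothetical location


def run(step, command):
    """Run one build step and report whether it succeeded."""
    print(f"--- {step}: {' '.join(command)}")
    return subprocess.run(command).returncode == 0


def main():
    ok = run("build", ["make", "build"]) and run("test", ["make", "test"])
    if ok:
        # Build is green: remove the lock so check-ins flow again.
        LOCK_FILE.unlink(missing_ok=True)
        print("Build passed; source control unlocked.")
        return 0
    # Build is broken: create the lock and notify the team
    # (notification is omitted in this sketch).
    LOCK_FILE.touch()
    print("Build FAILED; source control locked until the build passes.")
    return 1


if __name__ == "__main__":
    sys.exit(main())
```

The important part is not the mechanics but the feedback loop: every run ends in a clear green or red state, and a red state has an immediate, visible consequence for everyone working on the code base.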
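And for rule 2, here is a sketch of what such a regression test might look like. The bug report and the function `order_total` are invented purely for illustration; in a real code base only the test would be new, it would sit next to the existing production code, and it would run as part of the automated build from rule 1.

```python
"""Regression test for a made-up bug report: order totals were wrong
when an item with quantity zero appeared in the basket. The function
below stands in for the fixed production code."""
import unittest


def order_total(prices_and_quantities):
    """Return the total price of an order.

    The original (buggy) version stopped processing the whole order as
    soon as it saw a zero quantity; the fix skips only that line item.
    """
    total = 0.0
    for price, quantity in prices_and_quantities:
        if quantity == 0:
            continue  # fix: skip the item, not the rest of the order
        total += price * quantity
    return total


class OrderTotalRegressionTest(unittest.TestCase):
    def test_zero_quantity_item_does_not_drop_rest_of_order(self):
        # Fails on the buggy version (which returned 0.0 here),
        # passes once the bug is fixed.
        items = [(10.0, 2), (5.0, 0), (3.0, 1)]
        self.assertAlmostEqual(order_total(items), 23.0)


if __name__ == "__main__":
    unittest.main()
```

Written this way, the test first documents the bug (it fails against the old code) and then guards against its reappearance forever after.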
With the above rules you start building a comprehensive set of automated tests. Some may say, "We have always been doing that." That might be correct, but my experience tells me that some organizations run their test suite only when they test the entire system, after the "development phase" is complete and just before the new version is shipped.
Also, in some cases people claim that rule 2 cannot be used with legacy systems because writing such tests is too expensive. Again, that might be correct. If so, it is an additional indicator of the system's (lack of) design quality: the system is too hard to test. Enforcing rule 2 will push the system toward refactoring or redesign for better testability. Where organizational discipline (or time) is lacking, bugs are simply fixed without an automated test and shipped after a quick manual assessment. This is far from ideal.
By adding one automated test at a time, your automated build - which now includes an automated test portion - will increase the quality of your system over time. A single test won't make a difference. But as your test suite grows, it will cover more and more of your system.
And here is another observation: when bugs are reported, which areas do they tend to be in? Typically you'll find them in frequently used areas or in areas that are particularly flaky. By adding automated tests in these areas you target exactly those spots where you get the biggest bang for the buck.
Note that the two rules can be used for both new development and legacy code bases. There is no excuse for not even trying to improve quality. It doesn't require myriads of expensive and complex tools. Simple rules like the above can help shift organizational behavior toward better (software) quality.