I just stumbled over a post by Wes Dyer on software testing. The post is a very interesting read.
While I share a lot of the concerns that he mentions and also have seen a few of them materialize in practice, I still get the sense that something is not quite right in Wes' post.
Reading a book on a subject that heavily depends on practical experience doesn't really give you the full experience. I'm sure he'd agree with this.
Overall the post comes across as a mostly theoretical discussion with little practical background, at least regarding commercial-scale or long-term application of TDD. This surprises me a bit since Wes - at least in 2005 - was a developer on Microsoft's C# compiler team.
I would love to know more about the background and context, e.g. empirical data, practical experience from commercial projects, etc.
To make a specific point: He mentions that the testing ideal is to minimize the cost of bugs. Well, that is certainly a good objective. For TDD, however, there are additional aspects that are important, e.g. finding a simpler implementation of the code through refactoring - something that only becomes feasible because of the comprehensive test suite that TDD creates in the first place.
I also think that the first diagram in Wes' post is not quite accurate. For instance, while refactoring you also run the tests to see whether or not your refactoring broke anything. So you'd go from step 5 back to step 2 or 4.
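That loop - write a failing test, make it pass, refactor with the suite as a safety net - can be sketched in miniature. This is my own illustration, not code from Wes' post; the `word_count` function and its tests are hypothetical:

```python
# A minimal sketch of the red-green-refactor cycle.
# word_count and its tests are illustrative only.

import re
import unittest

def word_count(text):
    # The implementation is kept simple. The test suite below is what
    # makes a later refactoring (e.g. swapping split() for this regex)
    # safe: re-running the tests after the change catches regressions.
    return len(re.findall(r"\S+", text))

class WordCountTest(unittest.TestCase):
    # These tests are written first and fail until word_count exists.
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_multiple_words(self):
        self.assertEqual(word_count("red green refactor"), 3)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

if __name__ == "__main__":
    # Run the whole suite after every change - including after
    # each refactoring step, which is the arrow back to step 2.
    unittest.main()
```

The point is not the function itself but the arrow back into the test run: the refactoring step is only cheap because the suite already exists.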
Looking at TDD in isolation doesn't cut it either, in my experience. TDD makes the most sense and provides the most value as one element in a system of interdependent and interrelated elements that comprise an agile development approach. So for instance there are interdependencies with refactoring, pair programming, and others. The techniques of XP are not just a laundry list of best practices. They support and strengthen each other.
I have been using XP (including TDD) in various projects of different sizes since 1999, when I was introduced to TDD/XP by Kent Beck. I am currently managing a 40+ person commercial software product project for an international audience, and one of its key elements is TDD. The results that my teams have produced in this time have been far superior to anything I have seen developed with a more "traditional" approach (this judgment is certainly limited to the projects I have sufficient information about).
Bottom line: While I like Wes' post very much since it highlights a number of good points and concerns, at the same time it lacks a fair amount of credibility because little empirical information is provided to support at least some of his statements. To a considerable degree the post reads more like a theoretical opinion lacking sufficient practical background. Again, surprising given his (past) role at Microsoft.
One of my past managers liked to put it this way: Without numbers you are just a person with another opinion.
But, hey, maybe that's exactly what his post is: An opinion. And in that sense: Yes, I like it.
Saturday, May 03, 2008