Saturday, May 24, 2008

Hiring

Hiring people is important to the success of your team and project. I strongly believe that you cannot deliver great products with a low-performing team, and that whatever you want to achieve with your team depends heavily on the quality of its people. I'd like to discuss a few of the principles I use when selecting people to join my team.

Elite Team

It is almost like a positive, reinforcing cycle. If you have an excellent team, people will learn about it and want to join it. This leads to a greater pool of applicants, which in turn gives you higher-quality candidates to choose from.

So in that sense I think an elite team, mindset, or attitude is acceptable and even desirable. The challenge, certainly, is to avoid coming anywhere near arrogance. But if you are constantly pushing your team to become even better, to outperform themselves, and if you have hired people with good character and attitude, then this is a no-brainer anyway.

Hiring Process

It is important to distinguish between assessing the quality of a candidate in general and assessing how the candidate fits with your team. I think you should focus very much on whether a person fits your team. If a candidate is a perfect engineer but isn't a fit for your team, hiring that person will not do any good. Similarly, rejecting a candidate doesn't mean that the person is a bad engineer. It merely means that the person wasn't a good fit for your team.

Given the choice between a) a candidate who is a perfect engineer but has weaknesses on the interpersonal side, and b) a candidate who is a perfect fit in terms of interpersonal skills but has some weaknesses on the technical side, I would always go for b).

Part of my hiring process is a programming task. It is a very simple task with about half a dozen stories; good solutions typically fit on two letter/A4-sized pages including tests. It is amazing how well that filter works. I have seen "senior" engineers claiming to be experts in Java, C#, and C++. But when asked about the differences between generics in Java and C# (which differ from each other as well) and templates in C++, the look on their face spoke a clear language: "I have no clue."
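
To give a flavor of the kind of difference I mean (this is only an illustration, not the actual interview task), here is a minimal Java sketch of one such difference, type erasure. In Java the type parameter is erased at compile time, whereas C# generics are reified at runtime and C++ templates are instantiated per type at compile time.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List<Integer> numbers = new ArrayList<Integer>();

        // Java generics are erased at compile time, so both lists share the
        // exact same runtime class. In C# the two would be distinct runtime
        // types, and in C++ a template would be compiled into separate code
        // for each type argument.
        System.out.println(strings.getClass() == numbers.getClass()); // prints "true"
    }
}
```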

Sure, that was an extreme case. But still: if a candidate cannot even complete a very simple programming task, it won't get any better once the task is part of a regular project.

Team Buy In

I believe this is an important aspect as well. When you hire a person, always keep in mind who you expect the new person to collaborate with most of the time.

The technique I use is basically to ask two of my team members to run the actual interviews. At the end of the hiring process those two people can then make a recommendation as to who they believe is the best fit.

This does not take away your responsibility to make the decision. But it allows your team members to be heard and to voice their opinion about the candidate.

Once your team members have voiced their opinion, and assuming you follow their advice, you will find that their buy-in with regard to the hiring decision is substantially higher. At the same time you ensure that you don't overlook something. When selecting new development workstations I ask my senior engineers to look at the specs as well. Why would I want to apply less rigor to the hiring process?

Hiring Managers

Admittedly I don't have a lot of experience in this field. Still, I believe that the above principles apply to hiring managers as well. Even if you seek advice from your team, the final decision is still yours. After all, a commercial enterprise is not a democracy.

By not asking your team members for their thoughts about the different candidates, you reduce their buy-in and potentially overlook an important detail. You are missing an opportunity to build an even stronger team.

Certainly, your department is your "ship", and you can run it any way you like. But with flat hierarchies and self-organizing, self-directed teams now the norm in the 21st century, I believe it is good practice, and pays off, to involve your team when hiring, including when hiring managers. I don't see why the principles that work for engineers shouldn't work for managers as well.

Just my two cents. As I said I don't have a lot of experience so it might well be that I'm overlooking something here.

Saturday, May 10, 2008

"Healthy Noise": Introducing Automated Performance Testing

Starting with the current release cycle I have introduced automated performance testing for my project team. Not that we didn't test performance in the past, or didn't assess performance-related items during a release cycle. But the fact that it is now part of the automated development environment creates a number of interesting collateral effects. I'd like to highlight a few.

Reduced Latency

First of all there is the value of performance testing itself. I strongly recommend not waiting until the last few weeks or days before a release; that may be too late. If you discover a performance-related issue at that point, you may be forced to take shortcuts so your product still meets its performance requirements. And indeed, in the past we did assess the performance of the product throughout the release cycle.

What is different now? In the past we had to get performance engineers to set up the tests, maintain test scripts, run the tests, analyze the results, identify root causes, and suggest solutions. Now the tests are written and maintained by the developers, integrated into the test suite, and executed immediately after integration. There is no need to wait until a time slot with a performance engineer becomes available, and the performance engineers' freed-up time is available for consulting with the developers. The feedback loop is much shorter: a few hours instead of a few days or weeks. This also helps reduce risk, since stories that may have a performance impact can be played earlier.
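
As an illustration of what such a developer-written performance test can look like (a minimal sketch only; the workload, the 200 ms budget, and the class names are made up, not our actual tests), here is a JUnit 4 style test that fails whenever the measured operation exceeds its time budget:

```java
import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.Random;

import org.junit.Test;

public class SortPerformanceTest {

    // Hypothetical time budget; real budgets come from the product's
    // performance requirements.
    private static final long BUDGET_MILLIS = 200;

    @Test
    public void sortingAMillionIntsStaysWithinBudget() {
        Random random = new Random(42);
        int[] data = new int[1000000];
        for (int i = 0; i < data.length; i++) {
            data[i] = random.nextInt();
        }

        long start = System.nanoTime();
        Arrays.sort(data); // stand-in for the operation whose performance matters
        long elapsedMillis = (System.nanoTime() - start) / 1000000;

        assertTrue("took " + elapsedMillis + " ms, budget is " + BUDGET_MILLIS + " ms",
                elapsedMillis <= BUDGET_MILLIS);
    }
}
```

Because such a test runs with the rest of the suite after every integration, a performance regression shows up within hours instead of waiting for the next scheduled session with a performance engineer.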

General Benefits of Automation

You also get the obvious benefits of automation such as repeatability, lower cost, and higher quality. Not that the performance engineers made mistakes deliberately; as humans we are not infallible, whether we like it or not. So the major drivers for the automation were cost, time, and quality.

Impact on Behavior

But then there are less obvious effects caused by the introduction of automated performance testing. For instance, I am observing that the thinking of the entire team is influenced by it. The performance aspect has been promoted and is now playing a much more important role in the considerations of the cross-functional teams. Performance is built in instead of bolted on. Should a particular design or implementation cause a performance issue, it can be dealt with immediately. Bad practices are no longer proliferated.

With our customers I am also observing an improved understanding of non-functional aspects such as performance when deciding on backlog priorities.

And although it is not the main driver, and I didn't think of this aspect at all, there is also the impact it has on the developers. I sense that improving the development environment with automated testing has a positive impact on the morale of the engineering team. We continue to improve the development environment, giving everyone an opportunity to increase velocity while improving quality at the same time.

"Healthy Noise"

Certainly this change wasn't painless. The development teams had to negotiate with their customers the right amount of performance-related work to be accommodated in the backlogs. The automated build environment had to be extended.

We may have to purchase additional licenses for the performance toolset. I had hoped I could get away with just the floating licenses we already had, but that doesn't seem to pan out. However, the bottleneck is no longer the performance engineer; now it looks more like the number of licenses. And when I compare the "price" of an engineer with the price of an additional license, the license is definitely cheaper.

Introducing automated performance testing caused some issues. But I would call this "healthy noise". All participants - customers, user experience experts, performance engineers, developers - are working in a very focused way to iron out these hiccups, and they have made a lot of progress.

Wrapping Up

Introducing a somewhat significant change like this requires adaptation by everybody. Processes, tools, etc. need to be adapted as well. The result, however, is that you move closer to a holistic approach to software engineering that considers performance engineering and testing an integral part of the process. In particular I am very pleased with how all the people involved grew as part of this process. The team and the product will be better because of it. Well done, folks!

Saturday, May 03, 2008

Re: Musings on Software Testing

I just stumbled over a post by Wes Dyer on software testing. The post is a very interesting read.

While I share a lot of the concerns that he mentions and also have seen a few of them materialize in practice, I still get the sense that something is not quite right in Wes' post.

Reading a book on a subject that heavily depends on practical experience doesn't really give you the full experience. I'm sure he'd agree with this.

Overall the post comes across as a mostly theoretical discussion with little practical background, at least with regard to commercial-scale or long-term application of TDD. This surprises me a bit since Wes - at least in 2005 - was a developer on Microsoft's C# compiler team.

I would love to know more about the background and context, e.g. empirical data, practical experience from commercial projects, etc.

To make a specific point: he mentions that the testing ideal is to minimize the cost of bugs. That is certainly a good objective. For TDD, however, there are additional aspects that matter, e.g. finding a simpler implementation of the code through refactoring, which only becomes feasible because of the comprehensive test suite that TDD creates in the first place.

I also think that the first diagram in Wes' post is not quite accurate. For instance, while refactoring you also run the tests to see whether your refactoring broke anything, so you would go from step 5 back to step 2 or 4.
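
To make that loop concrete, here is a tiny, made-up Java/JUnit example (the PriceCalculator class is purely illustrative): the test is written first and fails (red), a simple implementation makes it pass (green), and any later restructuring of the implementation is validated by re-running the same test, which is exactly the arrow back from refactoring to running the tests.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorTest {

    // Written first: this test fails until PriceCalculator behaves as specified (red).
    @Test
    public void appliesTenPercentDiscountFromOneHundredUpwards() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(90.0, calculator.total(100.0), 0.001);
        assertEquals(50.0, calculator.total(50.0), 0.001);
    }
}

// The simplest implementation that passes (green). It can later be refactored,
// e.g. by extracting a discount policy, and the test above is re-run to confirm
// that the refactoring broke nothing.
class PriceCalculator {
    double total(double amount) {
        return amount >= 100.0 ? amount * 0.9 : amount;
    }
}
```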

Looking at TDD in isolation doesn't tell the whole story either, in my experience. TDD makes the most sense and provides the most value as one element in a system of interdependent and interrelated elements that make up an agile development approach. There are, for instance, interdependencies with refactoring, pair programming, and other practices. The techniques of XP are not just a laundry list of best practices; they support and strengthen each other.

I have been using XP (including TDD) in various projects of different sizes since 1999 when I was introduced to TDD/XP by Kent Beck. I am currently managing a 40+ people commercial software product project for an international audience. One of the key elements is TDD. The results that my teams have produced in this time have been by far superior to anything I have seen developed with a more "traditional" approach (this is certainly limited to the projects I have sufficient information about).

Bottom line: while I like Wes' post very much, since it highlights a number of good points and concerns, it loses some credibility because little empirical information is provided to support at least some of his statements. To quite some degree the post reads more like a theoretical opinion lacking sufficient practical background. Again, surprising given his (past) role at Microsoft.

One of my past managers liked to put it this way: Without numbers you are just a person with another opinion.

But, hey, maybe that's exactly what his post is: An opinion. And in that sense: Yes, I like it.