Monday, December 22, 2008
A Thought About Selecting Tools
Wrong! Check what the primary reason is that your company is in business. Is it being an XYZ partner? Or is it delivering good products and services to your customers? I bet it is the latter.
And if you deliver good products and services to your customers, then using XYZ technologies or tools might be the right choice. But maybe it's not. If you believe you can choose only from XYZ's product list, then chances are you will miss out on the best opportunities to improve what you do and how you work.
For example, XYZ may just have large monolithic applications that require training for users and specialists for configuring and maintaining them. And maybe those large applications that try to be everything to everyone become so flexible that all that flexibility actually makes it hard to change and quickly adapt your processes to your environment. Do you want your tools to support your processes? Or do you want your tools to dictate your processes?
Bottom line: Don't limit yourself by being an XYZ partner and thinking that you can use only their tools!
Saturday, December 20, 2008
Getting Started
Sometimes it looks to me that the most difficult part is to make sure that the team starts to shift its mindset. This does not mean at all that everything that the team did in the past or everything they are doing today is wrong. Quite the opposite.
A mindshift helps you look at everything in a new light. For instance it might help to rerun an experiment that failed a few months ago because one of the tools wasn't up to the job. If you have a new version of the tool, this time the experiment might have a different outcome.
A mindshift also helps you move to a different approach while - maybe - continuing to operate the same way. For instance, when you are used to larger and more complex projects or tools or designs, it can be quite challenging to move in the other direction. What about smaller and simpler?
A mindshift might also require challenging some of your assumptions. What if you are assuming that you get punished if an experiment fails? Maybe that assumption was correct many years ago and you internalized it so much that you are not even aware of it. What if the assumption is incorrect?
Getting started can be very difficult and sometimes it takes a lot of courage. Yes, it is good to know all those reasons why something cannot be done. In engineering terminology this is the list of risks. But then: aren't we engineers here to make things work despite other people saying it cannot be done?
So try small experiments, maybe a couple of people for a couple of days. Maybe the experiment fails. No problem. Then you know yet another way that doesn't work and you have learned more about the challenge.
But maybe the experiment is successful. Then you have started to move. Admittedly just a tiny little step but you have moved. And you have learned as well. And maybe after you have moved you can identify the next thing already that is worth trying which was hidden or impossible before your move.
Let's look at a real-life example to illustrate my point: Let's assume you have that mixture of unmanaged C++ and .NET code and you want to move all of your code to the .NET world. Then switch on that /clr flag and see what happens. Maybe it is successful, and maybe then the code for communicating between the two worlds can go away, and maybe then you start seeing new options for what the next best move can be.
Monday, July 28, 2008
Product Managers as Team Members
So you may also need subject matter experts (SMEs) for different areas. Maybe your application is extremely sensitive to security. Then you may want to include a security expert. Or your system might need to process a lot of data, in which case you may want to include a performance expert. Or your system needs to interact with that old mainframe application, in which case you want to include a person who is sufficiently familiar with it. In all these cases you may want to add the person full time, or part time on an as-needed basis.
One of the most important people on such a team is the subject matter expert (SME) for the business side of the application. E.g. for a hotel reservation system you would want to have a person on the team who is familiar with that industry.
In some companies that domain expert is equal to the product manager. OK, I'm simplifying here a bit. But stay with me as the simplification is for illustration purposes only.
So with the above mentioned concept of the "whole team" you'd assume that this product manager is part of your team as well.
Well, just until a few weeks ago I would have said yes.
In the meantime I have come to the conclusion that I have to qualify this answer. And here is the reason.
To some degree, the product manager is part of the team in that she provides the input that is required from a product perspective for example the business side of the software.
But then, the product manager is also a customer. Do we treat customers the same way as we would treat our fellow colleagues?
Sure, it definitely would be nice and desirable if the relationship were just the same. We could have Friday afternoon drinks and have fun telling all the war stories from the week. But wait! Is this really what you want?
Here is something to think about: Your customer is the one with the money (or budget). So here is something that is different. Your customer wants to spend money only on items that make sense to her. Your customer doesn't want to hear about that database problem you encountered this week, because it might actually mean - from her perspective - that the project is at risk, and all of a sudden this anecdote, shared over a beer on Friday, takes on a life of its own.
Does this indicate a failure of the process? Does this mean you should exclude your customer from the team? I think that would go too far.
I think, based on my experience over the last 12 months, the customer (e.g. your product manager) is probably the most valuable person on the team. That means that person needs special treatment. Does that mean you should say "yes" to everything? No, it doesn't. Does it mean that the customer gets her way all the time? No, it doesn't.
So what does it mean? It means that you can still share most of the information with your product manager that you would share with your other team members as well. But it might mean that you rethink the way you present things.
Depending on how you communicate with the product manager, you shape her perception of you and your team. Say the same things, but say them in a way that considers how you may influence that perception. Don't walk on eggshells either. You still need to be self-confident, and that means for each conversation it is good if you have prepared a view. Don't go into a meeting without a view.
I have changed my model for how I look at internal customers in that I try to provide to them the same service I would provide to external customers. Although there is no written contract, a product manager (or any other business domain expert) is a customer to you, and the way you treat your customer will have a huge impact on the outcome of your projects. Work for your customer, work with your customer, come with solutions instead of problems, and ultimately make your customer happy.
Thursday, July 17, 2008
Can Agile Keep You From Being Successful?
Some authors - I don't want to give references here since I was looking at German authors - use a model that takes different thinking styles as a starting point. The knowledge-focused thinker tries to become and stay an expert on a particular subject. And a person who is an expert may even fear that someone comes along who is an even better expert. So they frantically work on becoming and staying the "best". They may start to defend their beliefs and knowledge, ultimately coming across as defensive, academic, or even arrogant. None of these perceptions will help if you want to become an agile leader.
As a leader you are still a subject matter expert. However, instead of being an expert on Crystal, Scrum, XP, or the like, you become an expert on agile leadership. You can delegate the agile practices to people in your team. After all: If you have coached your team over an extended period of time there should be more than enough methodology champions in your team anyways.
So you can let go without losing influence. You are no longer under pressure to create the perfect implementation of Scrum or XP, regardless of what "perfect implementation" means. If XP or Scrum doesn't work perfectly, it is not you who has failed. The team as such has not yet managed to adopt and adapt it sufficiently. Decouple that particular area from your person.
A question that might help you make this change towards re-inventing yourself could be: How do you define success? Is it to get pair programming rolled out throughout the organization? Or is it something different? Maybe you want to provide the best possible service to your customers. Maybe you no longer think in terms of black and white, right or wrong, works or doesn't work. Maybe you want to introduce a third category saying: it helps.
Bottom line: To make the shift towards an agile leader you might actually have to let go of trying to be an expert on an agile methodology to be successful!
Friday, June 27, 2008
Do You Need a "Quality Program"?
It depends on what you mean by program. If you mean an elaborate and detailed plan it will be highly likely that by the time you get to start executing it the conditions have changed and many of the details are no longer valid in the changed context.
And here is a related question: Is quality improvement a one-off? Again, it depends. If it means that once you have rolled out the quality improvement - you ticked off all the items on the list - you are done, then in all likelihood your organization will fall back in terms of quality again, even if it doesn't degrade immediately.
But if you understand Quality Improvement as a process, if you construct it as a set of guiding principles and values, then you are on the path towards a sustainable approach.
And here is how it might look in practice.
Option 1 might be: Create a detailed program so that you cover all aspects of what is going wrong. This may take weeks or even months just to get the plan set up. It will take some more time to roll it out. And then it will take even more time to show results.
Option 2 could be: Create a laundry list of things that may have an immediate impact. Your people know what's broken. Poll their views. Then make small, incremental changes. Observe. If it works, do more of it. All of this can be instigated, modified, or cancelled within days or weeks. Yes, you might be wrong. But with small items - all of them suggestions from your people, who are as close to the problems as anyone can be - you will get the majority right. And the few that don't work you can cancel extremely fast.
I personally would go for option 2 since it helps show results very fast and very early. Each small item that is cleaned up will uncover or emphasize other items that are broken. And this could be a bug that you fix, or it could be a small change in the process that you apply. With small incremental steps you achieve short-term results, you don't have to unwind major changes that go wrong, and you plant the seeds for a continuous improvement and learning process.
Let me give a very concrete example. And this one amazes me again and again because it is so blatantly obvious that it is surprising that there are still companies out there who don't do this. Let's assume you have a system of a significant size. This can be an in-house solution, a one-off, or a commercial off-the-shelf (COTS) product. Assume further that you have an issue with the bug level. There are just too many of them. Then here are the rules that you can put in place to address the problem, fix it, and prevent it from reappearing:
- Each bug must be reproduced by an automated test.
- The automated test is to be included in the automated (regression) test suite.
- The code is modified until all tests pass.
- A new version, e.g. a patch, can be sent out only if all tests pass.
- If the bug is fixed on a support branch, it also has to be fixed on the main development stream (potentially other streams as well, but I think that's a business decision).
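To make the first two rules concrete, here is a minimal sketch of what "reproduce the bug with an automated test" can look like. Everything here is made up for illustration: the function `parse_amount` and the bug it once had are hypothetical, and I'm using Python's standard unittest module simply because it is widely known.

```python
import unittest

def parse_amount(s):
    # Hypothetical function that once had a bug: it crashed on
    # inputs with a thousands separator. This is the fixed version.
    return float(s.replace(",", ""))

class TestBugReproduction(unittest.TestCase):
    """Regression test that was written to reproduce the (made-up) bug."""

    def test_amount_with_thousands_separator(self):
        # Before the fix this input raised ValueError. The test stays
        # in the suite forever, so the bug can never silently return.
        self.assertEqual(parse_amount("1,250.50"), 1250.50)
```

Once a test like this lives in the regression suite, the remaining rules follow naturally: the build that ships is the build where this test (and all the others) passes.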
Bugs are not the only type of lack of quality. You can equally find a lack of quality around processes, requirements, tools, etc. The approach would still be the same. Instead of a sledgehammer or a Ben-Hur-sized quality program, try a continuous stream of small incremental changes. It's slow at the beginning since there is so much to clean up. But then it will gain momentum, and when it has become a habit your organization has changed its behavior. Quality has been established as a process, as a part of the culture.
So, no, I don't think you need an elaborate program. Small, incremental changes, observation, then adaptation are extremely lightweight and can be introduced today. But you certainly need guiding principles that help your team understand that you mean business when you talk quality.
Thursday, June 26, 2008
Quality!
Just calling for it doesn't do the job. You've got to put your money where your mouth is!
So when you ask for quality improvements it might also pay off to think about W. Edwards Deming's "Red Beads Experiment" (description for example here). Why is this relevant?
In essence the experiment describes how you limit the quality any process can achieve by not allowing the employees to improve the tools and processes they use to do their work. In fact, allowing for continuous improvement of tools and processes is one of the most powerful mechanisms to improve quality. And as Deming points out, the decision to empower employees to make these improvements, or to prevent them from doing so, comes from management. So, if you ask for quality improvements and at the same time deny your team tool or process improvements, you shouldn't be surprised if the quality level doesn't go up at all, or not as quickly as you'd like.
Wednesday, June 25, 2008
Zero Defects
For instance, look at Test-Driven Development (TDD). It is a technique that, among other things, uses prevention instead of inspection to achieve high-quality code. And in all cases the objective must be zero known defects. It is not about "good enough" quality. What is "good enough" in this case? Is it 2 bugs, or 5, or 10? It doesn't really matter. "Good enough" is not specific enough when it comes to quality. Zero defects is the objective. That's specific. Is it achievable? Well, apparently sites like Flickr release new versions of their system up to every 30 minutes. Is this doable if you have low quality? Very unlikely. Without comprehensive, automated, fast (= cheap) testing Flickr would not be able to do this.
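For readers who haven't seen TDD in action, here is a tiny sketch of its rhythm, with a deliberately trivial example I made up (the leap-year rule). Each test is written first and watched to fail ("red") before just enough implementation is added to make it pass ("green"), followed by refactoring.

```python
import unittest

def leap_year(year):
    # Implementation written only after the tests below existed.
    # Divisible by 4, except centuries, except every 400th year.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestLeapYear(unittest.TestCase):
    # Each of these tests drove one increment of the implementation.
    def test_plain_year_divisible_by_four(self):
        self.assertTrue(leap_year(2008))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_every_fourth_century_is_leap(self):
        self.assertTrue(leap_year(2000))
```

The point is not the leap-year logic. The point is that at every moment the number of known defects is zero: the suite either passes, or you stop and fix it before moving on.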
Another technique that is popular with agile approaches is the reflection workshop. These workshops are basically opportunities to take a step back and think about improving the way we work. And if (major) issues are identified in the process, then the solution is not only to fix them but to prevent them from recurring.
All of these thoughts are not new. Philip B. Crosby coined the term Zero Defects several decades ago. His book with the title "Quality is Free" is an excellent read. I just finished the sequel "Quality Without Tears", which was published in 1984, long before "Agile" became a hype. When you read the book you will notice that although some terms are different, there is a huge amount of common ground.
Get Quality Without Tears: The Art of Hassle-Free Management
Monday, June 16, 2008
Recognition: How do you share Fame and Blame?
In some cultures not complaining is equivalent to praising someone's performance. The Southwest of Germany is such a place (for those who are familiar with it: "Net g'motzt isch scho g'nug g'lobt" - roughly, "no complaint is praise enough").
In other cultures rewarding has taken on almost the shape of an avalanche. New Zealand is such a place (remember all the awards, prizes, etc. that you got at school? In some schools there are more awards than students!).
In other cultures appraising each other can become a group disease. The United States sometimes appears to me like that (ever heard of the group cheering at Wal-Mart?).
So regardless of where you live, there are cultural differences in terms of how recognition is used to motivate people. And as a leader you can make the difference by sharing the fame with your team, and keeping the blame away from your team (unless they really screwed up, in which case you'll probably have a private session with them that you don't want to share outside of your team).
So is recognition about affection? I think these are two different things. People certainly want to be liked. And if they can choose they will work in a place where they and their work is appreciated.
Recognition, in my view, is not about expressing affection. Expecting to be recognized for a good performance or result is not seeking to be liked. I think people can even "survive" for a while if they are not given recognition explicitly. If that's your style, so be it. Maybe you express your recognition in other ways, e.g. by giving your people a day off, or by giving them stock options, or by asking them to take on tasks that are critical and important for the company and come with more responsibilities.
There are many different ways to express recognition without having to express affection. Some of them are even free. Or how expensive is it to say "Thank you"? How expensive is it to mention people who over years showed persistent high performance despite all obstacles, hung on to the project and the team, and in the end made it happen?
All of this depends a lot on your style. Expressing recognition is important to keep your team motivated. It is certainly not the only means and it certainly is not sufficient. And if you use low-cost tools like saying "Thank you!" you must ensure that you really mean it. If you don't have credibility with your team, just don't.
An entirely different question is what happens if you give recognition, e.g. by company-wide announcement, but don't include the people who had critical roles in the success. If you include only the ones you interact with most frequently, or if you diminish other people's contributions, don't be surprised if people aren't as motivated next time round. Again, this has nothing to do with the need to be liked. It's about ensuring that recognition goes to the right people. And it's not about what you intended to say. It is about what you actually did say.
So the questions are: How do you share fame and blame? How do you give recognition to your team and the people on your team?
Friday, June 06, 2008
Update on Bureaucracy and Constraints
Indeed I was able to find an option to reduce some overhead in my area. By using shared files instead of duplicated information I was able to reduce some bureaucratic overhead with reporting.
And here is another example: The PMO (Project Management Office) in our organization decided to define a new charter for itself in order to clarify its role and responsibilities. We started with a draft of about 6 or 8 pages. We ended up with a net amount of about 1 page when we finished refactoring. That way we reduced waste.
And the biggest fun was from my perspective that the entire group participated and contributed to simplifying the document. It's now really lean and mean!
I'm sure we will be able to do the same thing for other items. We have the opportunity to become a really lean organization. It seems important to me to always be on the lookout for opportunities to reduce waste, to try to find simpler solutions, to remove barriers.
If you build too many fences you will be surrounded by sheep. I think that in the age of internet communities like Facebook or Twitter, people are looking for flatter hierarchies, more self-organization, and more freedom. Everything else being equal, people will choose to work for a company that comes close to the ideology underlying these new online communities.
Managers who continue to use a management approach that might have worked 10 years ago will eventually lose out. Instead of administrative management, we need inspiring leadership that unleashes the creativity of the people in a learning organization. And in my view one important ingredient is collaborative decision making.
Thursday, June 05, 2008
Constraints and Bureaucracy: Do you slow down your organization?
Agreed.
But then think of this: Your team interacts with lots of different teams and individuals in order to work on projects. In addition there are cross-cutting concerns such as administrative things (e.g. access to the building), HR (e.g. papers you need to hand in), Accounting (e.g. your last expense report), and so forth.
From each department's perspective they impose only a small item (or two or three). And each item is just a very small thing by itself. And from that department's perspective each of the items is required and reasonable so they can do an excellent job.
The problems start when you pile up all the items that come from different departments. Then progress in your organization may slow down significantly and may even come to a grinding halt.
So what to do? I have friends who have simply given up on some of these items. If they are given a spreadsheet to fill in they may decide to just throw in some (almost) random data (except if it is financial data). (Sorry, but I won't reveal names here to protect the individuals.)
Let's take an example: If you are asked to assess your team members and fill in 50 or 60 items in a skill matrix, and given a team size of 15 or 20, we are talking about between 750 and 1,200 items to fill in. You probably need to think about each item for at least 10 seconds (this is a wild guess). But maybe you want to do a good job and do justice to the people in your team. Or maybe you want a realistic and true picture so you can be better at managing your project. So you spend 30 seconds on each item (too much? too little?). Then just this "simple" exercise amounts to 2 to 10 solid hours of your working time spent on a single spreadsheet! Doesn't sound like much? Well, don't forget those other spreadsheets, reports, presentations, etc. that are waiting in your inbox!
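The back-of-the-envelope arithmetic above can be sketched in a few lines (all the numbers are the guesses from the text, nothing more):

```python
# Cost of one "simple" skill-matrix request, using the
# guessed numbers from the example above.
skills_per_person = (50, 60)   # items in the matrix
team_size = (15, 20)           # people to assess
seconds_per_item = (10, 30)    # time spent thinking per item

items_low = skills_per_person[0] * team_size[0]    # 750 items
items_high = skills_per_person[1] * team_size[1]   # 1,200 items

hours_low = items_low * seconds_per_item[0] / 3600    # ~2.1 hours
hours_high = items_high * seconds_per_item[1] / 3600  # 10.0 hours

print(f"{items_low}-{items_high} items, "
      f"{hours_low:.1f}-{hours_high:.1f} hours")
```

Two to ten hours for one spreadsheet, before any of the other requests in your inbox.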
So whoever thinks they are at the low end of bureaucracy, please think again! From the perspective of a person who is affected by your actions the world may look entirely different, and your "small" request may be just the tipping point that keeps those people from being successful. You may achieve quite the opposite of what you really wanted.
My recommendation would therefore be to consider for all your actions: How could you do less and still get a good enough outcome? (Or Google this: TSTTCPW. It applies to non-software engineering items, too.)
BTW: I will do the same myself and see whether I can't find an item where I might be causing avoidable overhead myself. I'm probably as guilty of this as anyone else....
Saturday, May 24, 2008
Hiring
Elite Team
It is almost like a positive, reinforcing cycle. If you have an excellent team, people will learn about it, and then they want to join that team. This leads to a greater pool of applicants, which in turn provides you with higher-quality candidates to select from.
So in that sense an elite team, thinking, or attitude is acceptable and even desirable, I think. The challenge is certainly to avoid coming anywhere near arrogance. But if you are pushing your team all the time to become even better - to outperform themselves - and if you have hired people with good character and attitude, then this is a no-brainer anyway.
Hiring Process
It is important to distinguish between trying to assess the quality of a candidate in general, and assessing how the candidate fits with your team. I think you should focus very much on whether a person fits your team. If a candidate is a perfect engineer but isn't a fit for your team, hiring that person will not do any good. Similarly rejecting a candidate doesn't mean that the person is a bad engineer. It merely says that the person wasn't a good fit for your team.
If given the choice of a) a candidate that is perfect as an engineer but has weaknesses on the interpersonal side, and b) a candidate that is a perfect fit regarding interpersonal skills but has some weaknesses on the technical side, I would always go for b).
Part of my hiring process is a programming task. This is a very simple task with about half a dozen stories. Good solutions typically fit on two letter/A4-sized pages including tests. It is amazing how well that filter works. I have seen "senior" engineers claiming to be experts in Java, C#, and C++. But when asked about the differences between generics (in Java and C# - different in both cases) and templates in C++, I looked into a face that spoke a clear language: "I have no clue."
Sure that was an extreme case. But still. If a candidate cannot even do a very simple programming task it won't be much better if the task is part of a regular project.
Team Buy In
I believe this is an important aspect as well. When you hire a person always keep in mind who you expect the new person to collaborate with most of the time.
The technique I use is to basically ask two of my team members to run the actual interviews. At the end of the hiring process those two people then can make a recommendation who they believe is the best fit.
This does not take away your responsibility to make the decision. But it allows your team members to be heard and to voice their opinion about the candidate.
Once your team members have voiced their opinion, and assuming you follow their advice, you will find that their buy-in with regards to a hiring decision is substantially higher. At the same time you ensure that you don't overlook something. When selecting new development workstations I ask my senior engineers to look at the specs as well. Why would I want a lower standard for the hiring process?
Hiring Managers
Admittedly I don't have a lot of experience in this field. Still I believe that the above principles apply to hiring managers as well. Even if you seek advice from your team the final decision is still with you. After all a commercial enterprise is not a democracy.
By not asking your team members for their thoughts about different candidates you basically reduce the buy-in, and potentially you might even overlook an important detail. You are missing an opportunity for building an even stronger team.
Certainly, your department is your "ship". You can run it any way you like. But I believe that with flat hierarchies and self-organizing teams, and given that we are now in the 21st century and people are more self-directed, it definitely is a good practice and pays off to involve your team when hiring, including hiring managers. I don't see why the principles that work for engineers shouldn't work for managers as well.
Just my two cents. As I said I don't have a lot of experience so it might well be that I'm overlooking something here.
Saturday, May 10, 2008
"Healthy Noise": Introducing Automated Performance Testing
Reduced Latency
First of all, there is the fact of performance testing itself. I strongly recommend not waiting until you are in the last few weeks or days before a release. That may be too late. If you discover a performance-related issue then, you may be forced to take shortcuts so your product meets performance requirements. And indeed, in the past we already assessed the performance of the product throughout a release cycle.
What is different then? In the past we had to get performance engineers to set up the test, maintain test scripts, run the tests, analyze the results, identify root causes, and suggest solutions. Now the tests are written and maintained by the developers, integrated into the test suite, and executed immediately after integration. No need to wait until a time slot with a performance engineer becomes available. The performance engineers' time is freed up, and in that time they are available to consult for the developers. The feedback loop is much shorter: a few hours instead of a few days or weeks. And this also helps reduce risk, since stories that may have a performance impact can be played earlier.
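To give a flavor of what a developer-written performance test can look like, here is a minimal sketch. It is not the actual setup from our product: the function under test, the repeat count, and the 0.5-second budget are all invented for illustration. The idea is simply that a performance expectation becomes an ordinary assertion in the suite.

```python
import time

def measure(fn, repeats=5):
    """Run fn several times and return the best wall-clock time.

    Taking the minimum over several runs reduces noise from other
    processes sharing the machine.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

def critical_operation():
    # Stand-in for the real code path that has a performance budget.
    sum(i * i for i in range(10_000))

def test_critical_operation_within_budget():
    # The build fails as soon as a change blows the (made-up) budget,
    # giving the short feedback loop described above.
    assert measure(critical_operation) < 0.5

test_critical_operation_within_budget()
```

Because the test runs with every integration, a performance regression surfaces within hours of the change that caused it, instead of weeks later in a dedicated performance-testing phase.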
General Benefits of Automation
And you also get the obvious benefits of automation such as repeatability, lower costs, and higher quality. Not that the performance engineers made mistakes consciously. As humans we are not fail-safe, whether we like it or not. So the major drivers for the automation were: cost, time, quality.
Impact on Behavior
But then there are less obvious effects caused by the introduction of automated performance testing. For instance, I am observing that the thinking of the entire team is influenced by it. The performance aspect has been promoted and is now playing a much more important role in the considerations of the cross-functional teams. Performance is built in instead of bolted on. Should a particular design or implementation cause a performance issue, it can be dealt with immediately. Bad practices are no longer proliferated.
With our customers I am also observing an improved understanding of considering non-functional aspects such as performance engineering when deciding about backlog priorities.
And although it is not the main driver, and I didn't think of this aspect at all, there is also the impact it has on the developers. I sense that improving the development environment with automated testing has a positive impact on the morale of the engineering team. We continue improving the development environment, providing all people an opportunity to improve velocity while improving quality at the same time.
"Healthy Noise"
Certainly this change wasn't painless. The development teams had to negotiate with their customers the right amount of performance-related work that needed to be accommodated in the backlogs. The automated build environment had to be extended.
We may have to purchase additional licenses for the performance toolset. I hoped I could get away with just the floating licenses we already had, but that doesn't seem to pan out! However, the bottleneck is no longer the performance engineer. Now it looks more like the number of licenses. When I compare the "price" of an engineer with the price of an additional license, it becomes apparent that the license is definitely cheaper.
Introducing automated performance testing caused some issues. But I would call this "healthy noise". All participants - customers, user experience experts, performance engineers, developers - are working with great focus to iron out these hiccups, and they have made a lot of progress.
Wrapping Up
Introducing a somewhat significant change like this requires adaptation by everybody. Processes, tools, etc. need to be adapted as well. The result, however, is that you are moving closer to a holistic approach to software engineering that also considers performance engineering and testing as an integral part of the process. In particular I am very pleased how all the people involved grew as part of this process. The team and the product will be better because of this. Well done, folks!
Saturday, May 03, 2008
Re: Musings on Software Testing
While I share a lot of the concerns that he mentions and also have seen a few of them materialize in practice, I still get the sense that something is not quite right in Wes' post.
Reading a book on a subject that heavily depends on practical experience doesn't really give you the full experience. I'm sure he'd agree with this.
Overall the post comes across as a mostly theoretical discussion with little practical background, at least on the commercial scale or long-term application of TDD. This surprises me a bit since Wes - at least in 2005 - was a developer on Microsoft's C# compiler team.
I would love to know more about the background and context, e.g. empirical data, practical experience from commercial projects, etc.
To make a specific point: He mentions that the testing ideal is to minimize the cost of bugs. Well, that is certainly a good objective. For TDD, however, there are additional aspects that are important, e.g. trying to find a simpler implementation of the code through refactoring that becomes only available because of the comprehensive test suite that TDD creates in the first place.
I also think that the first diagram in Wes' post is not quite accurate. For instance while refactoring you also run tests to see whether or not your refactoring broke any tests. So you'd go from step 5 to step 2 or 4.
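The red-green-refactor loop under discussion can be sketched in a few lines. The function below is a made-up example, not taken from Wes' post:

```python
# Hypothetical function under test. In TDD the tests below are written
# first (red); this simplest implementation then makes them pass (green).
def word_count(text):
    return len(text.split())

def test_empty_string_has_no_words():
    assert word_count("") == 0

def test_counts_whitespace_separated_words():
    assert word_count("red green refactor") == 3

# Refactor step: any cleanup of word_count is immediately followed by
# re-running these tests, which is why the refactoring step loops back
# to "run tests" rather than ahead to writing the next test.
test_empty_string_has_no_words()
test_counts_whitespace_separated_words()
```

The point of the last comment is exactly the diagram correction above: refactoring is never the final step, it always feeds back into running the suite.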
Looking at TDD in isolation doesn't cut it either in my experience. TDD makes the most sense and provides the most value as one element in a system of interdependent and interrelated elements that comprise an agile development approach. There are interdependencies with refactoring, pair programming, and others. The techniques of XP are not just a laundry list of best practices; they support and strengthen each other.
I have been using XP (including TDD) in various projects of different sizes since 1999 when I was introduced to TDD/XP by Kent Beck. I am currently managing a 40+ people commercial software product project for an international audience. One of the key elements is TDD. The results that my teams have produced in this time have been by far superior to anything I have seen developed with a more "traditional" approach (this is certainly limited to the projects I have sufficient information about).
Bottom line: While I like Wes' post very much since it highlights a number of good points and concerns, it loses some credibility because little empirical information is provided to support his statements. The post reads more like a theoretical opinion lacking sufficient practical background. Again, surprising given his (past) role at Microsoft.
One of my past managers liked to put it this way: Without numbers you are just a person with another opinion.
But, hey, maybe that's exactly what his post is: An opinion. And in that sense: Yes, I like it.
Friday, April 25, 2008
Mandating Velocity
You are certainly interested in any improvement in speed. So you are considering saying: what if we planned for 4 per week? Or maybe you are even considering mandating a velocity of 4. Would that be a smart idea?
Let's look back at the "good old days". A manager describes a piece of work to one of his engineers and asks for the estimated effort required to do the job. The engineer comes back and says that he can build it in 28 days. The manager thinks about it and then says: I think you can build it in 25 days. You will get 25 days to build it. I expect it by ... (fill in a date).
What's wrong with this? For one, the engineer is not stupid: next time he will add what he thinks will be removed from his estimate (3 days), plus some more (maybe 2 days) in case the cuts are bigger next time. So the initial estimate might be 32 days. The manager is not stupid either. Next time he might think that the estimates are "inflated" anyway, that "sand-bagging" happens anyway, and that surely people are playing ping-pong or darts anyway. So it's safe to reduce the estimates by 10% or even more. Just to get to the "real" numbers.
All of this leads to a situation where nobody works with the real numbers anymore. Every single estimate becomes unreliable, and as a consequence every project plan built on them is flaky and unreliable from the outset. The engineer and - when the approach is used on a larger scale - the entire team is set up for failure. (Unless you don't have a problem with "whipping" them a little harder and expecting them to work crazy hours, which may not be a problem for you if you don't have a family and don't know how it feels to miss important events in the lives of your kids.)
There is certainly a chance that the engineer may finish the work in 25 instead of 28 days. But at what price? If it really takes that long to do the job properly, and if the engineer is already working at the maximum sustainable pace, then what is left? Quality. The only choice the engineer is left with is degrading quality. This could mean that code is no longer reviewed. It could mean that the design is no longer reviewed. It could mean that some error handling code is left out (good chances it won't get caught in system testing). It could mean that fewer tests are written, or no tests at all.
How is this related to the initial story around a velocity of 3.56 versus 4? Well, even if you don't challenge the estimates, mandating an increased velocity is in essence the same thing. You are saying that people are not working hard enough, are in cruise mode, etc.
Eventually the velocity will increase. The numbers will go up; they will even reach 5, 6, or any number that you choose. Just by osmosis the team will learn about your "technique" and it will adapt. The estimate for every single item will go up. If an item was estimated at 2 it will then be 3 (or even 20!). Then the velocity will increase not only to 4 but to a breathtaking 35! Just imagine!
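The arithmetic behind this inflation is trivial. The numbers below are made up to match the example; the team finishes exactly the same stories either way:

```python
# Made-up numbers: the team finishes the same four stories per week
# either way; only the point labels on the stories change.
stories_done_per_week = 4

honest_points_per_story = 0.89      # yields roughly the original 3.56
inflated_points_per_story = 8.75    # after the team adapts to the mandate

honest_velocity = stories_done_per_week * honest_points_per_story
inflated_velocity = stories_done_per_week * inflated_points_per_story

print(honest_velocity)    # about 3.56 "points" per week
print(inflated_velocity)  # 35.0 "points" per week, zero extra work done
```

The mandated number is reached on paper while actual throughput stays exactly where it was.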
But seriously. If you are interested in working with the true numbers so that you have realistic and reliable plans that you can base important decisions on then you shouldn't mandate velocity.
A different question is certainly what happens when a project is very critical to the company. If you have treated your people with openness, honesty, and fairness, I'm sure they will understand if, as an exception, extraordinary measures need to be taken. However, if those are used in every other release cycle, they become the normal mode of operation and hence unsustainable.
It all boils down to how you select your "supplier". Are you interested in getting only the cheapest one, no matter what? Or are you interested in a reliable supplier that continuously reduces cost, increases output, and improves quality? As a customer as well as a manager this is your choice. (My take on this: Sacrificing quality has never ever paid off.)
Wednesday, April 16, 2008
A reply from MindJet (makers of MindManager)
I already heard back from them. A person from their sales department provided me not only with the download information - OK, everybody can do that - but also with details for an extended trial.
So here are the positive remarks:
- Fast response. It was less than 24 hours after my enquiry.
- Good enough response. I got the information that I was hoping for.
So, I guess, I'll give it a try, and then I'll take it from there.
Tuesday, April 15, 2008
Are your stakeholders on the same page?
This doesn't have to be the case as I just had to find out.
Over several months I worked on the assumption that the product specialists, whom I have on my team as proxy customers and backlog owners, were on the same page with their manager, the product manager. I think all of them thought they were on the same page, too.
Only two weeks ago it turned out that the product manager had an entirely different understanding of the scope that would be available for an internal release than what at least one of the product specialists thought.
The result was a "sticker shock" for the product manager. With reason!
So what can we learn from this? There is certainly no guarantee that communication would have prevented this from happening. Remember, all participants acted in good faith. Still, more frequent and more focused communication with your key stakeholders will reduce the likelihood of bad surprises.
So in my case I have arranged for weekly catch-ups with the particular product manager (agile value: adapt). The intention is to keep each other informed as much as possible about developments that may have an impact on each other's work.
Again, this highlights the superiority of direct communication (agile value!) over comprehensive documentation. Of course, regular written progress reports were provided during the same timeframe. And everybody thought everything was alright. Well, it wasn't. A key stakeholder wasn't on the same page.
Are your key stakeholders all on the same page?
More on MindManager
Well, at least this time they don't try to sell me on all those performance improvements.
Still, why should I pay for an upgrade if I don't know whether the performance issues are fixed?
They claim that 91% of those customers who upgraded to MM7 are "satisfied" or "very satisfied" after they upgraded. What they didn't mention is how many of the customers on versions prior to MM7 are dissatisfied with the software and don't intend to upgrade any time soon.
Also, under the headline "MindManager Pro 7 by the numbers" they claim that there are over one million licenses of MindManager around the world. They didn't say whether this is just MM7 or whether this figure includes prior versions. I assume the latter.
So overall, I do understand the intention. They would like to get people off old versions onto the current one, and that way they can collect the upgrade fees.
I sent an email to their sales department asking for a 90-day trial license key. I upgraded to 5 and then to 6, each time hoping that the performance issues were resolved. Each time I paid money first, then I got a license. This time I think it would be more than fair to do it the other way round: I get the license first - time-limited to 90 days - and if their product turns out to be as fantastic as they claim - which it is without doubt! (sarcasm!) - then I'll be happy to pay for the upgrade.
So let's see what they come back with. At the moment my confidence level regarding MindManager isn't very high.
Tuesday, April 01, 2008
Vodafone Vodem on Vista
Vodafone is offering their "Vodem" as one of the options for wireless data connectivity to the internet. In that sense they provide a tool for more agility. And that's the only reason I include my comment in this blog.
Officially the Vodem is supported on Vista, but in practice there are a few issues.
- It doesn't seem to like some screen savers. I can deal with this by trying different ones.
- It doesn't like AnyDVD, which is ok, too, as I can simply switch it off temporarily.
- It doesn't like Hibernate. I don't like this one particularly as I don't want to reboot the machine each time I want to connect to the internet via the Vodem.
- Sometimes it just stops working.
In any of these cases it still claims to be connected to the network but when you do an nslookup it can't find any server. Then you have to reconnect it, and restart "Vodafone Mobile Connect Lite" (Lite on what? Quality?)
Overall I have the impression that the quality of the drivers and firmware needs some serious improvement. With my previous data card I had similar issues under Windows XP and it didn't really improve in the long run.
Then I turned the Vodem around and read Huawei and "Assembled in China". I wonder whether the people at Huawei and/or Vodafone are proud of this product.
Bottom line: Vodafone's Vodem works to some degree on Vista, but it is very flaky and they still have a long way to go until the reliability is where it should be. So if you are considering it for your team, you may want to trial it for quite some time before deciding on a full roll-out.
Friday, March 28, 2008
I want you to do more!
One thing that has stayed the same during all this time: a customer asking for more. "I want you to do more." And what they mean is: "I want you to increase productivity." (And for the record: a customer could equally be an internal stakeholder, e.g. a product manager.)
The interesting question is, though, how do you measure productivity? The trouble is, if you don't know how your customer measures productivity then how can you improve it? I have approximately 20 or 30 books on metrics in software engineering. Many of them contain attempts to measure the productivity of a software engineer. I am not aware of any metric that really works. If you know of such a metric: Please, I am begging you! Tell me!
An exercise: Let's assume you count lines of code (LOC).... Ok, ok, I got it. We want maintainable code so less is probably better.
Let's try "number of screens". This is more difficult. Do you want to have as much information on one screen as possible? Maybe the screens are used by professionals every day 200 or 300 days a year. In that case, again, less is more!
Let's try "number of stories". Ok, that's an easy one, isn't it? A few years ago a manager asked me how we could motivate the team to deliver more stories (For the less familiar: story = a small piece of work). I thought for a moment and then I said: "Just tell them!" Then he thought for a bit and understood that people would just make the stories smaller and smaller and so we would - on paper - see improvements all the time. In other words "inflation".
Let's try "defined scope": I have this statement of work that also includes "the scope". Let's assume it contains the requirement that one screen needs a table. For that table you want to be able to reorder the rows because the sequence matters (think of prioritizing items, for instance). One way to implement that could be that each row gets a unique identifier (or can be selected), and then there are two buttons, "Up" and "Down". Functionality complete! - Well, hang on, wait a second. This is - at least for web applications - what we had about 10 years ago. Today we need AJAX, and hence at the very least you want the rows to be draggable with the mouse. - Whatever your answer is - whether the fancy version is in scope or not - the difference between these two versions is about 1:10 in terms of effort.
So where does this take us? It's about scope. And it is about making sure you have the right scope in place. So wouldn't it make sense to precisely define the scope? Absolutely! Have you ever tried to define scope in a "waterproof" way? If you were successful, please let me know!
Yet another approach is: just mandate more work to complete. Well, if you don't have a family, how can you possibly imagine that there are people out there who do have a family and who care about it? How do you know what it feels like to have to call your partner to say that you won't be home for dinner? That you won't be home for the weekend? What if your kids start asking who you are? How would you feel? If you are a human being, you know exactly what I'm talking about.
I think all of this is not very constructive. Trying to specify scope better is definitely a good thing. Assuming that the scope is completely defined at the beginning of a project - regardless of effort - is in my view a ridiculous assumption. As the project progresses you will discover new items. All IT projects, all software projects, are so complex that we have to assume that new scope will be discovered. Do we know how much ahead of time? No, we don't. We can try to "guess" as best as possible, but there is no guarantee that we will be correct.
So it basically boils down to ensuring that all stakeholders are involved and that they always get a sufficient and complete picture of where the project is going. If the remaining capacity is not sufficient for the remaining scope, then this needs to be addressed as early as possible, so change management has to kick in.
Creating a blame culture and starting to point fingers when coming under pressure won't make any issue go away. A unilateral mandate or a dictatorial approach will destroy any team. Requesting respect for one's own role while at the same time ignoring another person's role is destructive and ignorant. It reminds me of the days of the cold war. The Soviet approach was: what's mine is mine, and now let's negotiate about yours. We all know that this approach didn't survive.
Increasing and continuously improving communication and collaboration is the required forward-looking approach. Contributing and constructively participating in a proper cross-functional team is how good solutions are created. (And whether or not you use an agile approach doesn't matter at all!)
Saturday, February 23, 2008
No More Iterations?
What would the development approach look like? Wayne Allen describes in his post what he tried with his team.
What would be the benefits?
Well, you might not need to think again about how to assign engineers to teams. Just create your teams once and then give each of them a board with a backlog that has a small number of slots. Once the team has finished one story you can add another one.
You can drop the iteration planning sessions, thus saving valuable engineering time.
By measuring how many stories each team can process per time unit (think nano-projects?) you might get to the point where you have a metric that you can use to measure throughput.
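A sketch of what such a throughput measurement could look like, with made-up completion dates standing in for what a real board or tracker would record:

```python
from collections import Counter
from datetime import date

# Hypothetical completion dates pulled from the team's board.
completed = [
    date(2008, 2, 4), date(2008, 2, 5), date(2008, 2, 5),
    date(2008, 2, 12), date(2008, 2, 13), date(2008, 2, 14),
]

# Throughput: stories finished per ISO calendar week.
per_week = Counter(d.isocalendar()[1] for d in completed)
for week, count in sorted(per_week.items()):
    print("week %d: %d stories" % (week, count))
```

Because there are no iterations, the time unit is arbitrary: the same data can be sliced per week, per fortnight, or per month depending on what the stakeholders want to see.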
I haven't tried it yet, but it looks like something I would want to take a closer look at.
Variations on Pair Programming
In this post I'd like to discuss variations on the topic that teams I work with have tried.
For instance, assume you have a large number of new people on the team. Everything is new to them, system design, technology, tools, and process. Where to start? To the newcomer the environment might be an insurmountable road block.
One option is to leave the newcomer on a story until the story is finished and swap the experienced person between stories. The benefit of this approach is that the newbie can stay focused on just what is needed to complete the story. The drawback is that the experienced person has to learn almost from scratch when swapping from one story to the next. Remember that the newbie isn't yet used to giving a 10-minute run-down of what has happened so far.
Another option is to leave the experienced person on the story and swap the newbies around. Benefit: the experienced person becomes the knowledge carrier and ensures consistency in completing the story. The drawback is that the newbie has to start all over again, which might add to the frustration.
A compromise might be to select stories in such a way that newbies can work as a pair by themselves on a simple story. They might be slower but they will make progress. If they get stuck they can get the coach or another experienced developer involved. The pair of newbies can stay on the story until completion.
Pairs that consist of experienced people can continue to work at full speed except when they need to mentor a pair of newbies.
This is by far not a complete list. I wanted to show that there are variations on pair programming that are worth trying. Over time you will observe whether an approach works or whether you want to discuss with your team how to adapt.
Among my teams, there is not one approach that is used by all of them. Different teams use different approaches towards pair programming and inducting newbies. But that is ok as long as the underlying agile principles are followed.
Tuesday, February 12, 2008
Supplyside Agility
But there are many different factors to play with.
For instance, if you have created an RFP there are still a lot of different ways in which you can work with your supplier. You might decide that for a simple (software) component you specify the interface and a number of tests. These might be functional tests only, or they might also include non-functional tests such as performance or load tests. Then you invite different suppliers to bid on that work, and based on criteria such as cost, speed, reliability, etc. you commission the work with a particular supplier.
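A minimal sketch of that first option, with an invented toy component (the interface, names, and converter are illustrations, not a real RFP): the acceptance tests ship with the interface, and any supplier's delivery must pass them before the work is accepted.

```python
import abc

# Invented component interface that would be handed to bidding suppliers.
class RateConverter(abc.ABC):
    @abc.abstractmethod
    def convert(self, amount, rate):
        """Return amount converted at the given rate."""

# Functional acceptance tests shipped alongside the interface; a
# supplier's implementation is plugged in and must pass before
# the work is commissioned as complete.
def run_acceptance_tests(impl):
    assert impl.convert(0.0, 1.5) == 0.0, "zero amount must stay zero"
    assert abs(impl.convert(10.0, 1.5) - 15.0) < 1e-9, "rate must be applied"
    return True

# A trivial reference implementation standing in for a supplier's delivery.
class SimpleConverter(RateConverter):
    def convert(self, amount, rate):
        return amount * rate

print(run_acceptance_tests(SimpleConverter()))
```

The design choice here is that the tests, not a prose specification, define "done" - which is what makes it possible to compare bids from different suppliers on equal terms.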
A different option for the same type of work could be that you slice your system or product in such a way that a development partner becomes responsible for an entire functional area. This responsibility would include not only the implementation but also the testing, the user interface design, and the performance engineering. The development partner might also be required to acquire sufficient domain knowledge for that functional area.
Regardless of which option you choose, the more options you have, the better you can tailor the setup to your needs. In other words, you adapt the working relationship according to the type of work, the type of supplier, and possibly other factors. Some work you might even consider putting on a site like RentACoder or similar. Other, more complex work you would do in-house, either to preserve the capability or to protect your intellectual property.
The more options you have at your disposal, the more adaptable you are, the more agile you will be, and the more likely your project is to be successful. Agility is not only about JUnit or XP or Scrum. Agile principles can and should be applied to all other areas of project management as well, including the supply side.
Saturday, February 09, 2008
Agile and Free Flow of Information
I like his post very much. However, ... in my post I'd like to discuss one particular aspect that I think needs some further discussion.
Rodney writes: "...the relationship between being fast or Agile and how..." - I think that being fast is not the same as being agile. Agile is about adaptability while being fast is about efficiency. Agile is about the ability to adapt in order to keep the business fit in a fast-changing environment. Learning about changing environments and learning from past experiences requires the storage, retrieval, and distribution of information.
A lot of companies use paper or its electronic equivalent for storing and transmitting information. While this has its benefits for long-term storage - with proper backups it lasts forever - it also has its drawbacks.
For instance, if I need a piece of information and I can easily recall it from memory, then this is the fastest way to obtain it. If I have a colleague sitting next to me whom I can ask and who has the information available, then that's probably the next best solution. If I have to sit in front of a computer and enter a query, I might get pages and pages of search results, even if I am fortunate enough to have a specialized search engine or the body of knowledge is stored in a good intra-corporate wiki. So this type of search tends to be very slow.
So depending on where information is stored and what kind of access path I have, information is easier or harder to obtain.
But there is at least one more aspect. When two or more people communicate, they can choose different media to exchange information. After all, whatever one person says to another is nothing more than sending a piece of information, even if the content might be very simple. "How are you?" might carry the information that I care about the other person. Saying "How are you?" face to face has a different effect than saying the same words on the phone, writing the same sentence in a hand-written letter, or writing it in an email.
When I work with people from my team or people from other teams, e.g. stakeholders, suppliers, customers, I am always on the lookout for differences in understanding between people. When I spot such a difference I try to organize a face-to-face meeting of those people. I try to get them into one room and make them talk to each other.
I also share with my team as much information as I can. This certainly has its limitations where the privacy of individuals or the confidentiality of company information comes into play, but it is an important factor in establishing trust.
Here is another way to describe my approach: in order to make my team more agile I try to "grease" the free flow of information. Only if the important information gets to every team member as quickly as possible can the team react in a timely fashion.
The same applies to reporting. I create reports on a weekly basis for the projects I'm responsible for. The stakeholders are informed at the earliest possible time of opportunities and challenges. Giving enough heads-up time with accurate information is probably one of the best things you can do to support your manager and other stakeholders.
So in that sense a lot of the information we circulate in my teams becomes tacit knowledge in the brains of the people.
As that is not always the best way to store information, we also employ a wiki for information that we believe is valuable for long-term storage. And there are other media that we use as well, such as spreadsheets, pin-boards, white-boards, etc.
So to become an agile organization it is important to use the most appropriate medium and channel for distributing and sharing information. It is interesting to see that an organization that is very adaptable - that is, agile - is at the same time also very lean and as a consequence very efficient and fast. In contrast, a fast organization might be highly efficient and might be able to process service requests, product manufacturing, or software development tasks extremely fast. But if the environment changes, that very same organization might have tremendous difficulties adapting to the new conditions.
There is (at least) one more way of looking at this: Being fast is about optimizing towards efficiency while being agile is about optimizing towards adaptability. Both can be competing objectives at times.
In summary: I believe that an agile organization is very likely to be fast. But a fast organization is not necessarily agile. The other thing I learned from practice: free flow of information improves an organization's adaptability.
Wednesday, February 06, 2008
Monitoring and Tracking Projects
The template includes graphs for the actual velocity and a burn down graph. If you have questions and/or suggestions for improving the template please use the contact details at the Agile Utilities web site.
Oh, and here is the most important link, the link to Agile Tracker. And of course this template is free.
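For the curious, the data behind a burn-down graph like the one in the template is simple to compute. The numbers below are made up for illustration:

```python
# Made-up release data: 80 story points total, points completed per week.
total_points = 80
completed_per_week = [6, 9, 7, 8, 10]

# The burn-down series: remaining scope after each week. Plotting this
# against the weeks gives the classic downward-sloping burn-down line.
remaining = [total_points]
for done in completed_per_week:
    remaining.append(remaining[-1] - done)

print(remaining)  # [80, 74, 65, 58, 50, 40]
```

If the line flattens out or climbs (scope added faster than it burns down), that is the early warning signal such a template is meant to give you.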
Monday, February 04, 2008
Software Quality and Tools
23.8 million hits is a large number. Apparently there is no shortage of information and tools for improving software quality. Why is it that we still have significant bugs in many software products (recent examples: here, here, and here)? Why are many customers still dissatisfied with the quality of software (indicators are mentioned here, here, and here)? Well, sometimes customers might be satisfied because they have such low expectations, which in turn were caused by poor quality in the past.
So, how to address this problem? I don't know the complete solution. I do know, though, that the longest journey starts with the first step. And here are two suggestions for what those first couple of steps might be for your organization. Introduce the following two rules:
- Continuous automated build. Make sure your software system, component, etc. is built multiple times a day. Automatically. With feedback to all developers working on the code base. And if the build doesn't pass (it's broken), lock down the source control system until the build passes again. This might be quite disruptive at the beginning. But imagine what behavioral changes this rule might cause. For one, you will notice that your people start to be much more adamant about the quality of their change sets. Who wants to be the reason for a broken build? And it also triggers continuous thinking about how to improve the toolset, the code, the tests, etc. so that it becomes easier for every engineer to produce quality code.
- One automated test per bug. Let's assume you are not happy with the number of bugs in your software. What if for every single bug your engineers fix, (at least) one automated test has to be added to the automated test suite? What if that automated test reproduces the bug and, once the bug is fixed, the test passes?
This rule makes most sense if each automated test is added to a test suite that is run as part of the automated build (see rule 1).
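What such a bug-reproducing test might look like, as a sketch with an invented bug and function (not a real bug database entry):

```python
# Invented example: "bug #1234" reported that an empty invoice raised
# an error instead of totalling zero. The fixed function and the
# regression test that first reproduced the bug:
def invoice_total(line_items):
    # The pre-fix version assumed at least one line item; sum([]) == 0
    # handles the empty case that triggered the bug report.
    return round(sum(line_items), 2)

def test_bug_1234_empty_invoice_totals_zero():
    # Written first to reproduce the bug (it failed); now it guards
    # against regression in every automated build.
    assert invoice_total([]) == 0

def test_totals_are_rounded_to_cents():
    assert invoice_total([0.1, 0.2]) == 0.3

test_bug_1234_empty_invoice_totals_zero()
test_totals_are_rounded_to_cents()
```

Naming the test after the bug it reproduces keeps the link between the bug report and the safety net visible for years.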
With the above rules you start building a comprehensive set of automated tests. Some may say, we have always been doing that. That might be correct, but my experience tells me that some organizations simply run the suite of tests only when they test the entire system, after the "development phase" is complete and just before the new version is shipped.
Also, in some cases people claim that rule 2 cannot be used with legacy systems because writing such tests is too expensive. Again, that might be correct. If so, it is an additional indicator of (a lack of) design quality: the system is too hard to test. Enforcing rule 2 will help drive a refactoring or redesign of the system towards better testability. Too often - for lack of organizational discipline (or time) - bugs are simply fixed without writing an automated test and then shipped after a quick manual assessment. This is far from ideal.
By adding one automated test at a time to your automated build - now including an automated test portion - your system will increase in quality over time. A single test won't make a difference. But as your test suite increases in size it will cover more and more of your system.
And here is another observation: when bugs are reported, which areas do they tend to be in? Typically you'll find them in frequently used areas or in areas that are particularly flaky. By adding automated tests in these areas you actually target the places where you get the biggest bang for the buck.
Note that the two rules can be used for both new development and legacy code bases. There is no excuse for not even trying to improve quality. It doesn't require myriads of expensive and complex tools. Simple rules like the above can help improve organizational behavior towards better (software) quality.
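As a closing sketch, here is one way the first rule's lock-down could be enforced: a pre-commit hook that checks a status file the build server is assumed to write. The file path, the "passed"/"broken" format, and the hook mechanism are all assumptions for illustration; real CI and SCM setups differ.

```python
import os
import tempfile

# Assumed contract: the build server writes "passed" or "broken" into a
# status file after every build; the SCM pre-commit hook reads it and
# rejects commits while the build is red.
def commit_allowed(status_file):
    try:
        with open(status_file) as f:
            status = f.read().strip()
    except OSError:
        return True  # unknown status: fail open rather than block all work
    return status != "broken"

# Simulate a broken build, then a fixed one.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("broken")
print(commit_allowed(path))  # False: commits stay locked

with open(path, "w") as f:
    f.write("passed")
print(commit_allowed(path))  # True: commits flow again
os.remove(path)
```

The interesting part is not the dozen lines of code but the behavioral effect described above: once commits are physically blocked by a red build, fixing the build becomes everyone's first priority.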
Sunday, January 27, 2008
Setbacks
One likely consequence is that people may no longer be as familiar with how your team works. Or, if they are on your team for the first time, they don't know at all how your team works.
At the beginning of this month quite a few new people joined my team for the next release cycle. They are all good engineers, keen to deliver good quality. Yet this week it happened that, although the continuous build was broken, a few more commits were made to the code base.
This was definitely not their fault, and it was also resolved very quickly. The team leads handled it well by simply talking to the people and explaining why it is important to stop committing more change sets while the build is broken. Apparently some of them came from a background where this was not encouraged as much as it is in my team.
In my view there is more than one lesson to learn from this:
- If there are new people - either from other teams or new hires - you have to expect setbacks, even if you conduct a proper induction for the new team members.
- By working in a co-located team, and by using pair programming as a default, such setbacks do not go unnoticed for very long.
- If you have helped your team to become self-organized, you don't need to step in. The team will take care of such issues themselves. They will find good approaches to resolve such issues immediately.
So, don't be afraid of such setbacks. It pays off to have the courage - one of the agile values - to delegate decisions to the team. They become much more self-sufficient, and setbacks are handled much faster and without your intervention.
I think my team did a great job here. Although they were not "hoping" that something like this would happen, they behaved exactly as expected. They simply helped the new team members to learn and that way to become even better team members.
Wednesday, January 23, 2008
A Nice Story On Estimates
Manager: "I have this piece of work. How long does it take you to complete this?"
Engineer: "It'll take me about 2 months."
Manager: "That's great but let's push the envelope. I think you can do it in 1.5 months."
This is overruling the estimate, and in my experience a big no-no. It is ok - and depending on the circumstances even helpful - to ask guiding questions. E.g. you might ask: "What if we implemented this using [xyz]?" or "Did you consider [abc]?"
Guiding questions help avoid over-engineering or misunderstandings in the first place.
Today I experienced a different dialog. The manager in the following scene was me:
Manager: "I have this piece of work. How long do you think it will take to complete this?"
Engineer: "It will take about 2 months."
Manager: "Ok, so to be on the safe side, should we say it takes 3 months?"
Engineer: "It will take about 2 months."
And then he went deep into the details to explain to me what tasks would be required for the job.
Bottom line: I observed something extremely positive here. The engineer insisted on his estimate, and I think he was right. He was the person closest to the problem and with the best expertise to make the call. How could I dare to question this if I didn't have better arguments?
Ben, who was the engineer in this particular case, showed me that the above rule also works the other way round: don't change an estimate your best engineers give you just because it doesn't match your expectations - and this applies to both decreasing and increasing it. Thank you, Ben! I shouldn't have questioned your judgement in the first place.
Thursday, January 17, 2008
Impact and Mitigation of Staff Turnover
Example: For one release cycle a project team signed up for a backlog of 70 stories. Two weeks into the release cycle (12 weeks in total) the team got together and identified a few stories that were too large to fit into one iteration (1 week). As a consequence the backlog grew to 80 stories. The team asked the customer to re-prioritize the stories to identify the low-priority ones that would have to be moved to a future release. Not surprisingly, the customer wasn't happy about this, as it was perceived as "falling behind schedule".
On closer investigation it turned out that the initial backlog contained stories that were too big for acceptance. These stories should have been split or re-scoped before the release cycle started. The person who would have spotted this had left the company just a few months earlier. The other team members weren't sufficiently aware of the impact large stories can have. The team learned their lesson, and it is unlikely that the same mistake will be repeated.
The important lesson is: the consequences of a staff member's departure become apparent only as time passes, and sometimes much, much later.
How could the mistake from the example be avoided? I think that it is important to re-emphasize the rules of the game as often as possible. You might sound like a "broken record". But consider the consequences if you don't repeat "the tune"...
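Beyond repeating the rules, part of this particular rule can also be checked mechanically. Here is an illustrative sketch that scans a backlog for stories too large to fit into a single iteration, so they can be split or re-scoped before the release cycle starts. The story format and the use of points as a size unit are assumptions for the example, not anything prescribed by the team in the story.

```python
def oversized_stories(backlog, iteration_capacity):
    """Return the stories that cannot fit into a single iteration.

    Each story is assumed to be a dict with a "points" estimate -
    a hypothetical format chosen just for this sketch.
    """
    return [story for story in backlog if story["points"] > iteration_capacity]

# Example backlog with made-up stories and estimates.
backlog = [
    {"name": "login form", "points": 3},
    {"name": "reporting engine", "points": 13},  # too big for one iteration
    {"name": "password reset", "points": 5},
]

too_big = oversized_stories(backlog, iteration_capacity=8)
```

A check like this won't replace the departed expert's judgement, but it makes the "stories must fit into an iteration" rule explicit and repeatable.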
The example also demonstrates that when a team member leaves, some knowledge and skills disappear as well. What are the options to mitigate the impact?
Agile methodologies provide quite a few techniques and tools that you can use. In particular, all techniques and tools that foster communication, collaboration, and learning are suited to reduce or mitigate the impact to a large degree.
Release planning involving a good cross-section of different roles - developers, testers, customers, performance experts, technical writers, etc. - ensures that all relevant aspects are considered. Techniques like Agile Auction (aka Planning Poker (TM)) help to detect uncertainties or the need for further discussion.
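The way Planning Poker detects uncertainty can be sketched in a few lines: if the highest and lowest cards in a round are far apart on the scale, someone probably knows something the others don't, and the team talks before estimating again. The card deck and the spread threshold below are assumptions for illustration.

```python
def needs_discussion(estimates, spread_threshold=2):
    """Flag a Planning Poker round whose estimates diverge too widely.

    The team discusses when the highest and lowest cards are more than
    `spread_threshold` steps apart on the card scale (an assumed rule).
    """
    scale = [1, 2, 3, 5, 8, 13, 21]  # typical Fibonacci-like card deck
    positions = [scale.index(e) for e in estimates]
    return max(positions) - min(positions) > spread_threshold

# Everyone close together: no hidden disagreement.
round_one = needs_discussion([3, 5, 5, 8])
# One 1 and one 13: surface the disagreement before averaging it away.
round_two = needs_discussion([1, 5, 5, 13])
```

The point of the technique is exactly this divergence signal - the numbers matter less than the conversation they trigger.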
Frequent reviews help to identify over- or under-engineering. Do you really need all these bells and whistles? Are the features implemented in a really compelling way?
Techniques and tools like Pair Programming, wikis, and Show-and-Tells help foster knowledge transfer and the acquisition of skills.
Keep it simple and try tons of different tools and techniques. Don't give up just because one tool or technique didn't work; if it didn't work, try something else. Learn from your observations. And always challenge your team, for instance by asking questions like:
- Given this unsatisfying result, can you think of a way to prevent a similar situation in the future?
- Given the suggested approach, can we think of an even better way of doing this?
Monday, January 07, 2008
Reasons for Staff Turnover
People leave for different reasons. In some cases it might actually be the best option for both the person and the team if the person leaves. Sometimes people simply prefer a different working style that is not compatible with that of the team.
A few years ago I had a team member who preferred to work alone and on very complex items. He saw these tasks as unique opportunities and challenges. He was an excellent engineer. The solutions worked, and when you took a closer look they turned out to be excellent solutions. Almost beautiful. However, he was very introverted and wasn't able to fully leverage his potential for the rest of the team. In addition, the code he created, despite being excellent from an engineering perspective, was too complicated for the average engineer. The solution was complex, elegant, and still hard to understand, as some of the concepts were too abstract for some people. In the end he decided to move on to a different company. It turned out to be a good development: he was better off because he moved into an environment more suitable for his working style, and the team was better off because the proliferation of highly abstract code was avoided.
And even more years ago, I had a team member who wasn't able to adopt a new programming paradigm. It was when we introduced object-oriented programming, in the early 90s. The engineer tried for many months to get up to speed. Finally he and his manager had to realize that he wouldn't be able to make the required mindshift. We then decided to try to find him a different role. Fortunately we found a solution that was beneficial for both the team and the engineer: instead of simply developing software, he took on a role that contained elements of marketing and presales support. We preserved his experience for the company and at the same time opened up a development path for the struggling engineer.
Fortunately, when people leave, most of them leave for good reasons. For instance, people may choose to move to a different country. Here in New Zealand it is part of the lifestyle to go on an overseas experience (OE). Over several years I saw people leaving for Europe, the US, Canada, Singapore, and Australia.
Another reason to leave might be that people would like to pursue a career that is not available within the current company. Sure, you would probably prefer to keep people within the company, but sometimes it is simply not possible.
Compensation might be a reason as well. My personal preference is that people don't leave just because of the compensation package. And similarly, I don't want people to stay just because of the compensation package. I guess I am trying to say that you should be very careful with the compensation package: you don't want just the geniuses (see the example above), but on the other hand you don't want to pay just bananas and peanuts either (otherwise you shouldn't be surprised if you are surrounded by monkeys). Compensation is a hygiene factor. You have to keep an eye on it.
In my next post I'll explore the impact of staff turnover a little more, and how to mitigate it using agile principles and techniques. I wish you a happy and successful New Year!