Saturday, February 23, 2008
No More Iterations?
What would the development approach look like? Wayne Allen describes in his post what he tried with his team.
What would be the benefits?
Well, you might no longer need to think about how to assign engineers to teams. Just create your teams once and then give each of them a board with a backlog that has a small number of slots. Once the team has finished one story you can add another one.
You can drop the iteration planning sessions, saving valuable engineering time.
By measuring how many stories each team can process per time unit (think nano-projects?) you might get to the point where you have a metric you can use to measure throughput.
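To make the throughput idea concrete, here is a minimal sketch of how such a metric could be computed. The completion dates and the weekly period are made up for illustration, not a prescription:

```python
from datetime import date

# Hypothetical completion dates for stories finished by one team
# (invented for illustration).
completed = [
    date(2008, 2, 1), date(2008, 2, 5), date(2008, 2, 6),
    date(2008, 2, 12), date(2008, 2, 15), date(2008, 2, 20),
]

def throughput(completion_dates, period_days=7):
    """Average number of stories finished per period (default: per week)."""
    if not completion_dates:
        return 0.0
    span_days = (max(completion_dates) - min(completion_dates)).days + 1
    return len(completion_dates) / (span_days / period_days)

weekly = throughput(completed)  # roughly 2.1 stories per week here
```

Tracked over time, a number like this could serve as the through-put metric for each team's board.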
I haven't tried it yet, but it looks like something I would want to take a closer look at.
Variations on Pair Programming
In this post I'd like to discuss variations on the topic that teams I work with have tried.
For instance, assume you have a large number of new people on the team. Everything is new to them: system design, technology, tools, and process. Where to start? To the newcomer the environment might be an insurmountable roadblock.
One option is to leave the newcomer on a story until the story is finished, and swap the experienced person between stories. The benefit of this approach is that the newbie can stay focused on just what is needed to complete the story. The drawback is that the experienced person has to start almost from scratch when swapping from one story to the next. Remember that the newbie isn't yet used to giving a 10-minute run-down of what has happened so far.
Another option is to leave the experienced person on the story and swap the newbies around. Benefit: the experienced person becomes the knowledge carrier and ensures consistency in completing the story. The drawback is that the newbie has to start all over again, which might add to the frustration.
A compromise might be to select stories in such a way that newbies can work as a pair by themselves on a simple story. They might be slower but they will make progress. If they get stuck they can get the coach or another experienced developer involved. The pair of newbies can stay on the story until completion.
Pairs that consist of experienced people can continue to work at full speed except when they need to mentor a pair of newbies.
This is by no means a complete list. I wanted to show that there are variations on pair programming that are worth trying. Over time you will observe whether an approach works or whether you want to discuss with your team how to adapt.
Among my teams, there is not one approach that is used by all of them. Different teams use different approaches towards pair programming and inducting newbies. But that is ok as long as the underlying agile principles are followed.
Tuesday, February 12, 2008
Supplyside Agility
But there are many different factors to play with.
For instance, if you have created an RFP there are still many different ways to work with your supplier. You might decide that for a simple (software) component you specify the interface and a number of tests. These might be functional tests only, or they might also include non-functional tests such as performance or load tests. Then you invite different suppliers to bid on that work, and based on criteria such as cost, speed, and reliability you commission the work with a particular supplier.
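As a sketch of what "specify the interface and a number of tests" could look like, here is a hypothetical component specification. The `RateConverter` interface, its method, and the performance budget are all invented for illustration:

```python
import abc
import time

# Hypothetical interface specification handed to bidding suppliers.
class RateConverter(abc.ABC):
    @abc.abstractmethod
    def convert(self, amount: float, rate: float) -> float:
        """Convert an amount using the given exchange rate."""

# Functional test every supplier's implementation must pass.
def functional_test(impl: RateConverter) -> None:
    assert impl.convert(100.0, 1.5) == 150.0
    assert impl.convert(0.0, 1.5) == 0.0

# Non-functional test: a rough performance bound.
def performance_test(impl: RateConverter, budget_seconds: float = 1.0) -> None:
    start = time.perf_counter()
    for _ in range(100_000):
        impl.convert(100.0, 1.5)
    assert time.perf_counter() - start < budget_seconds

# A trivial reference implementation a supplier might deliver.
class SimpleConverter(RateConverter):
    def convert(self, amount: float, rate: float) -> float:
        return amount * rate
```

The buyer would hand the abstract class and the tests to the bidding suppliers; any implementation that passes both is a candidate, and the commercial criteria decide between them.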
A different option for the same type of work could be that you slice your system or product in such a way that a development partner becomes responsible for an entire functional area. This responsibility would include not only the implementation but also the testing, the user interface design, and the performance engineering. The development partner might also be required to acquire sufficient domain knowledge for that functional area.
Regardless of which option you choose, the more options you have the better you can tailor the setup to your needs. In other words, you adapt the working relationship according to the type of work, the type of supplier, and possibly other factors. Some work you might even consider putting on a site like RentACoder or similar. Other, more complex work you would do in-house, either to preserve the capability or to protect your intellectual property.
The more options you have at your disposal, the more adaptable you are, the more agile you will be, and the more likely your project is to succeed. Agility is not only about JUnit or XP or Scrum. Agile principles can and should be applied to all other areas of project management as well, including the supply side.
Saturday, February 09, 2008
Agile and Free Flow of Information
I like his post very much. However, in this post I'd like to discuss one particular aspect that I think deserves further discussion.
Rodney writes: "...the relationship between being fast or Agile and how..." - I think that being fast is not the same as being agile. Agile is about adaptability, while being fast is about efficiency. Agile is about the ability to adapt to keep the business fit in a fast-changing environment. Learning about changing environments and learning from past experiences requires the storage, retrieval, and distribution of information.
A lot of companies use paper or its electronic equivalent for storing and transmitting information. While this has its benefits for long-term storage (with proper backups it lasts forever), it also has its drawbacks.
For instance, if I need a piece of information and I can easily recall it from memory, that is the fastest way to obtain it. If a colleague sitting next to me has the information available, asking him is probably the next best solution. If I have to sit in front of a computer and enter a query, I might get pages and pages of search results, even if I am fortunate enough to have a specialized search engine or the body of knowledge is stored in a good intra-corporate wiki. So this type of search tends to be very slow.
So depending on where information is stored and what kind of access path I have, information is easier or harder to obtain.
But there is at least one more aspect. When two or more people communicate they can choose different media to exchange information. After all, whatever one person says to another is nothing more than sending a small piece of information, even if the content is very simple. "How are you?" might carry the information that I care about the other person. Saying "How are you?" face to face has a different effect than saying the same words on the phone, writing the same sentence in a handwritten letter, or writing it in an email.
When I work with people from my team or people from other teams, e.g. stakeholders, suppliers, or customers, I am always on the lookout for differences in understanding between people. When I spot such a difference I try to organize a face-to-face meeting of those people. I try to get them into one room and make them talk to each other.
I also share with my team as much information as I can. This certainly has its limitations where the privacy of individuals or the confidentiality of company information comes into play, but it is an important factor for establishing trust.
Here is another way to describe my approach: In order to make my team more agile I try to "grease" the free flow of information. Only if the important information gets to every team member as quickly as possible can the team react in a timely fashion.
The same applies to reporting. I create reports on a weekly basis for the projects I'm responsible for. The stakeholders are informed at the earliest possible time of opportunities and challenges. Giving enough heads-up time with accurate information is probably one of the best things you can do to support your manager and other stakeholders.
So in that sense a lot of the information we circulate in my teams becomes tacit knowledge in the brains of the people.
As that is not always the best way to store information, we also employ a wiki for information that we believe is valuable enough for long-term storage. And there are other media that we use as well, such as spreadsheets, pin-boards, white-boards, etc.
So to become an agile organization it is important to use the most appropriate medium and channel for distributing and sharing information. As a side effect it is interesting to see that an organization that is very adaptable - that is, agile - is at the same time also very lean and as a consequence very efficient and fast. In contrast, a fast organization might be highly efficient and might be able to process service requests, product manufacturing, or software development tasks extremely fast. But if the environment changes, that very same organization might have tremendous difficulty adapting to the new conditions.
There is (at least) one more way of looking at this: Being fast is about optimizing towards efficiency while being agile is about optimizing towards adaptability. Both can be competing objectives at times.
In summary: I believe that an agile organization is very likely to be fast, but a fast organization is not necessarily agile. The other thing I have learned from practice: free flow of information improves an organization's adaptability.
Wednesday, February 06, 2008
Monitoring and Tracking Projects
The template includes a graph for the actual velocity and a burn-down graph. If you have questions and/or suggestions for improving the template, please use the contact details at the Agile Utilities web site.
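For illustration, the data behind such velocity and burn-down graphs could be derived along these lines. The numbers are invented, and this is my own sketch, not the actual template's logic:

```python
# Hypothetical iteration data: total scope and story points completed per day.
total_points = 40
completed_per_day = [5, 3, 0, 6, 4, 2, 7, 5]

def burn_down(total, daily_completed):
    """Remaining points after each day: the series a burn-down graph plots."""
    remaining = [total]
    for done in daily_completed:
        remaining.append(remaining[-1] - done)
    return remaining

def velocity(daily_completed, days_per_iteration):
    """Points completed per iteration, assuming fixed-length iterations."""
    chunks = [daily_completed[i:i + days_per_iteration]
              for i in range(0, len(daily_completed), days_per_iteration)]
    return [sum(chunk) for chunk in chunks]
```

Plotting `burn_down(...)` over the days gives the burn-down line; `velocity(...)` gives the bar per iteration.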
Oh, and here is the most important link, the link to Agile Tracker. And of course this template is free.
Monday, February 04, 2008
Software Quality and Tools
23.8 million hits is a large number. Apparently there is no shortage of information and tools for improving software quality. Why is it, then, that we still have significant bugs in many software products (recent examples: here, here, and here)? Why are many customers still dissatisfied with the quality of software (indicators are mentioned here, here, and here)? Well, sometimes customers might be satisfied only because they have such low expectations, which in turn were caused by poor quality in the past.
So, how to address this problem? I don't know the complete solution. I do know, though, that the longest journey starts with the first step. And here are two suggestions for what those first couple of steps might be for your organization. Introduce the following two rules:
- Continuous automated build. Make sure your software system, component, etc. is built multiple times a day. Automatically. With feedback to all developers working on the code base. And if the build doesn't pass (it's broken), lock down the source control system until the build passes again. This might be quite disruptive at the beginning. But imagine what behavioral changes this rule might cause. For one, you will notice that your people start to be much more adamant about the quality of their change sets. Who wants to be the reason for a broken build? It also triggers continuous thinking about how to improve the toolset, the code, the tests, etc. so that it becomes easier for every engineer to produce quality code.
- One automated test per bug. Let's assume you are not happy with the number of bugs in your software. What if for every single bug your engineers fix, (at least) one automated test has to be added to the automated test suite? What if that automated test reproduces the bug and passes once the bug is fixed?
This rule makes most sense if each automated test is added to a test suite that is run as part of the automated build (see rule 1).
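Here is a minimal sketch of what rule 2 might look like in practice, using Python's unittest. The bug number and the `parse_price` function are invented for illustration:

```python
import unittest

def parse_price(text: str) -> float:
    """Parse a price string such as '1,250.50'.
    Bug #4711 (hypothetical): thousands separators used to raise ValueError;
    the fix strips them before converting."""
    return float(text.replace(",", ""))

class Bug4711RegressionTest(unittest.TestCase):
    """Added when fixing bug #4711. It fails on the old code, passes on the
    fixed code, and from then on runs with the whole suite in every
    automated build (rule 1)."""

    def test_price_with_thousands_separator(self):
        self.assertEqual(parse_price("1,250.50"), 1250.50)

    def test_plain_price_still_works(self):
        self.assertEqual(parse_price("99.95"), 99.95)
```

The test is named after the bug it reproduces, so when it breaks again later, everyone immediately knows which behavior regressed.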
With the above rules you start building a comprehensive set of automated tests. Some may say, "We have always been doing that." That might be correct, but my experience tells me that some organizations run the suite of tests only when they test the entire system, after the "development phase" is complete and just before the new version is shipped.
Also, in some cases people claim that rule 2 cannot be used with legacy systems because writing such tests is too expensive. Again, that might be correct. If that is the case, it is an additional indicator for the (lack of) design quality: the system is too hard to test. Enforcing rule 2 will help drive refactoring or redesigning the system towards better testability. Also, lacking organizational discipline (or time), bugs are sometimes simply fixed without writing an automated test and shipped after a quick manual assessment. This is far from ideal.
By adding one automated test at a time to your automated build - now including an automated test portion - your system will increase in quality over time. A single test won't make a difference. But as your test suite increases in size it will cover more and more of your system.
And here is another observation: when bugs are reported, which areas do they tend to be in? Typically you'll find them in frequently used areas or in areas that are particularly flaky. By adding automated tests in these areas you target exactly those areas where you get the biggest bang for the buck.
Note that the two rules can be used for both new development and legacy code bases. There is no excuse for not even trying to improve the quality. It doesn't require myriads of expensive and complex tools. Simple rules like the above can help improve organizational behavior towards better (software) quality.
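As a closing illustration, the lock-down behavior from rule 1 can be modeled in a few lines. The `BuildGate` class and its hooks are my invention for this sketch, not the API of any particular build or source control tool:

```python
# Minimal model of the "lock the repository while the build is broken" rule.
class BuildGate:
    def __init__(self):
        self.broken = False

    def record_build(self, passed: bool) -> None:
        """Called by the continuous build after every run."""
        self.broken = not passed

    def commit_allowed(self, is_build_fix: bool = False) -> bool:
        """Source control hook: reject ordinary commits while the build is
        red, so the only change that can land is one that fixes the build."""
        return is_build_fix or not self.broken
```

In a real setup the hook would live in the source control system and the status would come from the build server, but the behavioral rule is exactly this simple.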