If you work in a company, you will most likely go through a reorganization at some point in your career. In some companies it almost seems to be a habit to roll out the next reorg before the current one is complete.
I have often wondered whether frequent reorgs are a good thing or a bad thing. Until a few years ago I thought that, because people are affected, reorgs should be kept to a minimum. After all, there is always someone who loses and someone who wins in a reorg, isn't there?
My team has taught me better. The current project is now almost 20 months old, and I can't recall all the changes we have made to our internal team structure. Many different factors caused us to change it. Let me highlight just a few of them.
Scope: While the overall project objective didn't change much, the plan for achieving it changed a lot. Starting with a huge legacy code base (over 3 million LOC), we thought that using tools to move to a pure Java implementation was a good idea. So we organized the team in several smaller groups, each of which attacked the problem from a slightly different angle. For instance, one group focused on how the code base could be restructured, while a different group looked into converting screens from the old implementation to the new one. In doing this we learned that automatic conversion wouldn't work for us, for various reasons. So we decided on a new plan, and luckily we had management support for rewriting the application from scratch. Not an easy thing to do for a small company! As a consequence we changed the structure. I can't recall any major issues on the people side. The team understood that we had to adapt how we worked.
Team size: In August 2006 our little company was acquired by First Data Corporation. After the initial issues that typically follow an acquisition were resolved, it was decided to increase the size of our team, and we started hiring. At that point the team consisted of two groups, each of which worked mostly independently, but we rotated engineers once in a while to keep the architecture and design of the system simple, and also to promote reuse across the groups. With the increasing team size we realized that we had to change again, and therefore we regrouped. Now some of the groups are almost too small (just one or two pairs). However, we currently have 16 openings (see First Data Utilities' website), so the groups will eventually have a good size again. It's too early to tell whether the new setup will work for us.
Team maturity: Initially we had three co-located subject matter experts. Over time, as the team matured and the interaction between the subject matter experts and the software engineers intensified, we realized that the subject matter experts were becoming overloaded, effectively a bottleneck. So we added two more, and we plan to add another one or two over the next 12 months.
These were only three examples of what led us to adapt over time. In essence, we reorganized as necessary, and we are much more flexible than the companies I worked with previously. So agile teams seem to be different in that sense too: they continuously search for better ways to develop software. This team has proved that it is capable of adapting not only architecture, design, toolsets, process, and technology. It has also developed the capability of continuously assessing the team structure, trying something new, discarding what didn't work, and keeping what did.
In closing I'd like to encourage you, the agile leader, to try this with your team as well. It's not easy initially, but it works and pays off!
Saturday, December 02, 2006
Tuesday, October 24, 2006
Are We Using The Wrong Tools?
Frequently I'm asked why software development can't be faster, cheaper, and so on. Today I found some information on the internet that really made me wonder whether we are using the proper tools.
"The new features of the ... development environment allows developers ... to ... automatically generating 100 percent of their applications."
(Source: Oracle Investor Relations)
Now I wonder why we have software engineers in the first place....
Thursday, September 21, 2006
What Does Working In An Agile Team Feel Like?
Sometimes people ask me what the difference is between working in a team that uses agile methodologies and one that uses traditional approaches. Well, that's hard to explain, but maybe the following helps a bit.
One team I work with collects quotes from its members on the team wiki. I'd like to share some of these quotes with you, as they reflect the culture and the mutual trust among the people working in that team. To protect the individuals, I have left out their names.
- "I would be very disappointed if [fill in a name] was that smart." (engineer)
- "I haven't done any work for quite a while." (engineer)
- "I have no idea, I'm just the story bitch." (on-site customer)
- "Think of your best pairing experience." (consultant/coach)
- "It's not really broken... There are different shades of broken." (engineer)
- "All services are topless." (engineer regarding Service-Oriented Architecture)
- "If you're ok to work by yourself then I will pair with you." (engineer)
- "It's not a random failure, it's a semi-consistent failure." (engineer)
- "I can think all I like, it still isn't going to get us anywhere." (on-site customer)
Sunday, August 20, 2006
Working With Suppliers
Working with suppliers can be daunting. In one project however, we applied agile principles and achieved very good results.
Initially we discussed a traditional contract, but in the course of the talks it turned out that all participants wanted something more flexible. So we put only the basics into the contract, e.g. rates, time frame, and responsibilities. With regard to scope, we described the subject only roughly and left the rest open. In principle the contract was time-and-materials based, so admittedly it required some basic trust.
Architecturally we were in the fortunate position that what the supplier was to implement could be "isolated" in a separate module with a clearly defined interface. So we created the interface definition, which was ultimately a source file, plus a set of automated tests against those interfaces. The supplier then knew exactly what was needed: if all tests passed, he knew he was finished.
If we found that something didn't work as expected, we augmented the test suite and sent it back to the supplier. He would then come back with a cost estimate, and we would decide whether we wanted the change at that price. In some cases we negotiated, e.g. by asking what it would take to reduce the price by slightly modifying the requirements (automated tests and/or interfaces).
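To give a flavour of the approach, here is a minimal sketch of such an interface-plus-tests contract. The interface, the domain, and all names and values are invented for illustration; the real interface was of course part of our product:

```java
// Hypothetical contract: the interface we would hand to the supplier.
interface TariffCalculator {
    // Returns the charge in cents for the given consumption in kWh.
    long chargeInCents(int kilowattHours);
}

// A stand-in for the supplier's implementation.
class SimpleTariffCalculator implements TariffCalculator {
    public long chargeInCents(int kilowattHours) {
        // Flat rate of 20 cents per kWh -- purely illustrative.
        return 20L * kilowattHours;
    }
}

public class ContractTest {
    // The automated acceptance tests shipped together with the interface.
    // When all of them pass, the supplier knows the module is done.
    public static void run(TariffCalculator calculator) {
        check(calculator.chargeInCents(0) == 0, "zero consumption costs nothing");
        check(calculator.chargeInCents(100) == 2000, "100 kWh at 20 cents each");
    }

    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    public static void main(String[] args) {
        run(new SimpleTariffCalculator());
        System.out.println("All contract tests passed.");
    }
}
```

Augmenting the test suite, as described above, simply means adding more checks to `run`; the supplier's definition of "done" grows with it.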
Collaboration was improved by sending one of our engineers to the supplier once in a while, and vice versa. This, too, improved trust and communication, and working together became much smoother.
Overall, we worked with that supplier for more than two years, and the results were excellent. Trust improved substantially and was a required ingredient for the success. In this particular instance, working with a supplier was almost like working with a different department in the same company.
Saturday, August 19, 2006
Dealing With Uncertainty in Stories
Extreme Programming uses stories to define the scope of a system. Sometimes teams have difficulties estimating the size of stories with satisfying accuracy. Different approaches can address this issue, e.g. making stories even smaller.
Another approach is to look for the elements in a story that represent the risk. These elements can then be addressed in a time-boxed spike (or experiment). The remainder thus becomes a low-risk story.
Risky elements may be new technologies that need to be tried first, or a cool new user interface widget. In the latter case you might implement a simple (maybe even un-cool) version of the interface first, and separately play the spike story for the cool widget. That way you deliver value while at the same time pushing the envelope in a time-boxed fashion. If the outcome of the spike is good, you can play a third story to improve the user interface.
With this approach I have seen different teams address risk and at the same time improve their estimation skills. There are certainly more techniques; if you know of another simple way to deal with this, I would be interested to hear from you.
Labels:
adapt,
estimation,
Extreme Programming,
risk,
spike,
story,
uncertainty,
XP
Thursday, August 17, 2006
Cards Or Planning Tools?
In one of the newsgroups I monitor there was recently a discussion on Project Planning and Tracking Tools. I'd like to add another experience to this.
In a project I started coaching earlier last year, I initially introduced index cards for stories. People looked at this as something very "unprofessional". Paper and pen? How could that possibly work?
Well, after a while the team decided to introduce XPlanner. After yet another while, some customers on the team found that they were not exactly getting what they were asking for. As a consequence they started to put more details into the tool. Sometimes there were a lot of details, down to how to lay out widgets on a screen and which colors to use. Screenshots of mock-ups were eventually added.
Yet another while later the team discovered that this didn't really help. Now the stories were very big and therefore hard to fit into a short iteration. And the bigger they were, the higher the likelihood that they were underestimated. An extreme case was a story that was estimated at 47 units but took 228 units to implement in the end. It actually consisted of a sequence of four stories.
It was clear to them that they had a problem, so they have now reverted to using index cards. Writing or even drawing on an index card limits the amount of information you can put on it. The team decided that the best and most important information on a card is the business value that the completed story is meant to provide.
Having fewer details on the cards themselves has triggered more and better conversations and negotiations between the customer and the engineers. More options are discussed, and it is much easier to move stories around. Estimation is better, and the team has gained more flexibility with regard to how to implement a story.
Bottom line: you might have an excellent justification for introducing a software-based planning and tracking tool. But sometimes it is better to just use simpler tools. If your team is co-located you certainly have more options, but even for distributed teams it is worthwhile to look for alternatives. Simple tools may lead to simpler and better solutions.
Labels:
Agile Tools,
planning,
simplicity,
tools,
tracking
Monday, August 14, 2006
Question from Java Certification Test
I found this question at www.javaprepare.com.
Which of these is a correct fragment within the web-app element of the deployment descriptor? Select the one correct answer.
1. <exception><exception-type>mypackage.MyException</exception-type><location>/error.jsp</location></exception>
2. <error-page><exception-type>mypackage.MyException</exception-type><location>/error.jsp</location></error-page>
3. <error-page><exception>mypackage.MyException</exception-type><location>/error.jsp</location></error-page>
4. <error-page><exception-type>mypackage.MyException</exception-type></error-page>
5. <error-page><servlet-name>myservlet</servlet-name><exception-type>mypackage.MyException</exception-type></error-page>
6. <exception><servlet-name>myservlet</servlet-name><exception-type>mypackage.MyException</exception-type></exception>
(The correct answer is option 2.) Does this test the ability to write syntactically correct XML, or does it test whether someone understands how to specify an error page in the deployment descriptor (web.xml) of a Java web application?
I think this confirms my concern that someone who is "certified" is not necessarily an expert on the subject in which he or she has been certified.
My suggestion to an agile leader is therefore: ignore the certifications and focus on the real output of your candidates. Use auditions, for example design sessions or pair programming, to assess their real skills.
I don't believe that certifications will help you determine whether a candidate has excellent social skills. And I also don't believe that this kind of question helps to determine whether a candidate is able to find novel solutions.
At best the candidate can repeat previously learned canned answers. Is this what you are looking for?
Tuesday, August 08, 2006
Quote From Today's Scrum
I thought you'd like the following quote from a daily Scrum I attended today:
"If you click the add or delete button in a table, and nothing happens, something is wrong." (Author to remain unknown)
Monday, August 07, 2006
Two Hour Technical Task Too Much?
Our recruitment process includes a programming task, which we give candidates as preparation for the first one-hour interview. The task is typically very simple, and it is possible to complete it within roughly two hours. The intention is to assess whether a candidate is a) able to solve a simple task independently, b) able to demonstrate knowledge of how to ensure quality (e.g. automated tests), and c) willing to invest that time to join a team of highly skilled people.
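For illustration, a task of roughly that size might look like the following. The task itself and all names are invented; it is not the task we actually use:

```java
// Hypothetical candidate task: parse "HH:MM" strings and add minutes,
// wrapping around midnight. Small enough for roughly two hours,
// including the automated tests we would expect the candidate to write.
public class ClockTime {
    private final int minutesSinceMidnight;

    public ClockTime(String hhmm) {
        String[] parts = hhmm.split(":");
        int hours = Integer.parseInt(parts[0]);
        int minutes = Integer.parseInt(parts[1]);
        if (hours < 0 || hours > 23 || minutes < 0 || minutes > 59) {
            throw new IllegalArgumentException("not a valid time: " + hhmm);
        }
        minutesSinceMidnight = hours * 60 + minutes;
    }

    private ClockTime(int total) {
        minutesSinceMidnight = total;
    }

    public ClockTime plusMinutes(int delta) {
        // Double modulo keeps the result in [0, 1440) even for negative deltas.
        int total = ((minutesSinceMidnight + delta) % 1440 + 1440) % 1440;
        return new ClockTime(total);
    }

    @Override
    public String toString() {
        return String.format("%02d:%02d",
                minutesSinceMidnight / 60, minutesSinceMidnight % 60);
    }

    // The candidate's tests, here as plain checks in main.
    public static void main(String[] args) {
        check(new ClockTime("23:30").plusMinutes(45).toString().equals("00:15"));
        check(new ClockTime("00:00").plusMinutes(-1).toString().equals("23:59"));
        System.out.println("All tests passed.");
    }

    static void check(boolean condition) {
        if (!condition) throw new AssertionError("test failed");
    }
}
```

What interests us in the result is less the algorithm than the edge cases the candidate thinks of (wrap-around, invalid input) and whether tests are included at all.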
In over 100 applications in recent months we have never had a complaint from a candidate. Instead we see good results with regard to the performance of the successful candidates.
However, there was one case where a candidate sent an almost "rude" response after receiving the programming task:
"IMHO only a fool will spend 2 hours sitting for a technical test without the employer having gone through some trouble to have a pre-selection round first."And later he added
"I happen to be not only adept on the technical front, but also socially observant and capable of making (mostly correct) strategic decision."Do we expect too much? I know of several other companies who use technical tasks for engineering positions as well, for example ThoughtWorks.
In this particular case we decided not to invite the candidate, based on his unwillingness to comply with some simple rules that are part of our company's policy.
I wonder who else uses technical tasks as part of the recruitment process. It would be great if you were willing to share your experiences.
Labels:
people management,
principles,
quality,
recruiting
Saturday, August 05, 2006
Customers Writing Executable Specifications
About two years ago I worked with a team which had to live with (or suffer from?) requirements specifications consisting of hundreds of pages of prose. This kind of document is hard to read, understand, and maintain. If you want to change it and your customer is external, you have to follow a change management process, which typically includes drafting, exchanging, and approving even more documents.
Ideally, requirements would be written in a way that makes it easy for the development team to verify whether the system satisfies them. So why not make requirements executable? That way your team members can run them as often as needed, and they can stop immediately once the system passes those tests.
The way forward is therefore executable specifications, or story tests. The most prominent thought leader on this is probably Rick Mugridge, on whose website you can also find further information.
One tool to create and maintain such customer tests is FitNesse. It is available for many different languages.
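To give a flavour of the idea: in FitNesse the customer writes a table of inputs and expected outputs, and a thin fixture class connects that table to the production code. The sketch below mimics that style in plain Java so it runs standalone; the discount rule and all names are invented:

```java
public class DiscountSpec {
    // Production code under test: a hypothetical discount rule.
    static long discountedPriceInCents(long priceInCents, int quantity) {
        // 10 percent off for orders of 10 items or more -- purely illustrative.
        return quantity >= 10 ? priceInCents * 90 / 100 : priceInCents;
    }

    public static void main(String[] args) {
        // The "customer table": each row is price, quantity, expected price,
        // much like a row in a FitNesse test table.
        long[][] rows = {
            // price, quantity, expected
            { 1000,  1, 1000 },
            { 1000, 10,  900 },
            { 2000, 15, 1800 },
        };
        for (long[] row : rows) {
            long actual = discountedPriceInCents(row[0], (int) row[1]);
            if (actual != row[2]) {
                throw new AssertionError("row " + java.util.Arrays.toString(row)
                        + " produced " + actual);
            }
        }
        System.out.println("All specification rows passed.");
    }
}
```

The point is that the customer owns the rows, the team owns the glue, and both can run the specification at any time to see whether the system does what was agreed.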
The benefits are very compelling. You get a much tighter link between your customer (who might also be a product manager) and your development team. Executable specifications are a means of improving communication. You also get a tool that reduces the gap between what your customer wants the system to do and what the system really does.
From my experience, it's worth playing with this concept. It might turn out to be an excellent addition to your toolbox for agile project management.
Saturday, July 29, 2006
APLN
The Agile Project Leadership Network is a non-profit organization which connects leaders with an interest in agile techniques. I just joined it, as I happened to be at the leadership summit, which took place during the Agile 2006 conference in Minneapolis. I'm still amazed by the high caliber of the people I met. It is a lot of fun to exchange experiences. Despite having facilitated the transition to agile methodologies of multiple R&D organizations of 10 to 200 engineers, there was so much (and I guess there is much more) that I learned during this conference and the leadership summit. Thank you to all the other attendees! You are great people!