Lightweight Experiments for Process Improvement

[This post is a recap of the second talk I gave at XP2010. This was the big one, the experience report talk, one of 15 experience reports published at XP2010. You can download the full paper (pdf) from this website or from XP2010.org.]

Process improvement is important for nearly all teams, but it can sometimes be difficult for a team to know what is working, what isn't, and which techniques or methods to try when attempting to improve. Performing a scientific experiment is one way to help overcome these problems, but as academic research has shown us, while experimentation can yield interesting results, running an experiment is time consuming, expensive, and requires some serious thinking and control to pull off. From a practitioner's standpoint, that makes experimentation a non-starter.

Of course, that’s only if you run experiments like an academic.


Back Story

Just over a year ago, my MSE studio team at Carnegie Mellon had a problem. We had decided to use Extreme Programming for the construction phase of our project, but some team members had doubts about pair programming. We had also decided to use some kind of peer review, having already seen the many benefits of inspection when reviewing other artifacts. The dispute arose over whether pair programming would give similar enough results. On top of that, not all team members had experience with pair programming, but everyone on the team knew and enjoyed solo programming.

The number one concern was whether pair programming would allow us to meet our very strict deadline. We had just over three months to complete the construction phase of the project. According to our threshold of success, this meant implementing all "must have" requirements with a minimum level of quality. Did we really have time to waste by having two people working on the same code at the same time? Wouldn't working independently and inspecting code on an as-needed basis allow us to get more work done faster?

At the time it just so happened that I was taking a reading class with Mary Shaw, and in that class we discussed some research findings that might help settle the debate. Research from Laurie Williams, Ward Cunningham, Barry Boehm, and many others showed that pair programming requires more effort (although never double the effort) but is faster than programming alone (pdf). Pair programming also produces code of about the same quality as coding alone with inspection (pdf). Of course, the research might not apply to us, since Square Root was closer to a professional team working on a large project with a real client than to undergrads working on short-term toy projects.

After an iteration in which some teammates used pair programming and others refused, we decided to try an experiment to see which practice actually worked better. The original idea was that we might be able to validate some of the research, but we decided instead that it was more important just to resolve our own internal conflict and figure out which practice worked better for us.

Conducting a Lightweight Experiment


With the scientific method as our guide we planned and executed a lightweight experiment that pitted programming alone against pair programming. The results were amazing (and you can find the raw data in our project archive). Along the way we used a set of novel techniques that I think can be useful for other lightweight experiments. There's more background in the experience report, so I'm only putting the meaty stuff in this post.

Focus narrowly on a single question - The key to keeping an experiment light is to tackle only one thing at a time. In this case we focused on comparing and contrasting a single technique, pair programming, rather than multiple techniques or an entire process (such as XP vs. TSP).

Divide work, not teams - If I were comparing pair programming to programming alone in an academic setting, I would put together two teams of about the same experience and have each build its own version of the same software, one team using pair programming, the other programming alone. In a business setting this is a complete waste, and few companies can afford to have two teams duplicating effort. By dividing work instead of teams you may lose some control over variables in the experiment, but in most cases isolating more variables doesn't add any further clarity when answering a narrowly focused question. To divide work successfully you need some way of estimating work units for the division. We used use case points, as shown in the figure depicting our modified planning game.
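To make the division concrete, here's a minimal sketch of one way you might split an iteration's features into two roughly equal halves by use case points, one half for pairs and one for individuals. The function name and the backlog items below are hypothetical illustrations, not our actual planning game or data.

```python
# A minimal sketch (not our actual tooling) of splitting an iteration's
# features into two roughly equal halves by use case points, so one half
# can be built in pairs and the other solo.

def split_by_points(use_cases):
    """use_cases: list of (name, points) tuples. Returns (paired, solo)."""
    paired, solo = [], []
    paired_points = solo_points = 0
    # Greedy balance: biggest items first, each goes into the lighter bucket.
    for name, points in sorted(use_cases, key=lambda uc: uc[1], reverse=True):
        if paired_points <= solo_points:
            paired.append(name)
            paired_points += points
        else:
            solo.append(name)
            solo_points += points
    return paired, solo

# Hypothetical iteration backlog with use case point estimates.
backlog = [("Import records", 8), ("Search", 5), ("Export report", 5),
           ("Audit log", 3), ("Login", 2)]
paired, solo = split_by_points(backlog)
print("Pair programming:", paired)
print("Solo + inspection:", solo)
```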



Continue making releases - Since we still needed a comparison, rather than dividing into teams and duplicating effort we divided the features released each iteration: roughly half were built by pairs and half by individuals. At the time this was also a risk reduction decision, so that if pair programming completely failed we'd still have something to ship at the end of the iteration. Explicitly managing risks is the only way to know whether the lightweight experiment might cause problems for making releases. We also had a strictly defined cut-off for stopping the experiment if it ever prevented us from shipping to our client.

Use the data you have - In almost all cases we were able to get the data we needed to evaluate our hypothesis from our existing process. When we couldn't, we only had to make minor modifications to our data collection practices, for example adding a check box to our SharePoint server to indicate whether a task was done by a pair or an individual.

One of the more interesting things we did was to create a "tally sheet" for collecting pair programming issue detection statistics in real time, as the issues were discovered. Given the near instantaneous code-inspect-fix cycle when programming in pairs, this was the only way to collect similar data for comparing pair programming to inspection.
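For flavor, here's a minimal sketch of what that kind of real-time tally could look like in code. The class name and the issue categories are hypothetical, not the ones from our sheet; the point is simply that a tick gets recorded the moment the pair catches something.

```python
# A sketch of a real-time issue tally, analogous to the tally sheet:
# each time the navigator catches something, add a tick right away.
# The categories here are hypothetical examples.
from collections import Counter
from datetime import datetime

class TallySheet:
    def __init__(self):
        self.ticks = []  # (timestamp, category) pairs

    def tick(self, category):
        """Record one issue the moment the pair catches it."""
        self.ticks.append((datetime.now(), category))

    def totals(self):
        return Counter(category for _, category in self.ticks)

# During a pairing session:
sheet = TallySheet()
sheet.tick("defect")       # logic error caught before it was ever committed
sheet.tick("readability")  # naming improvement suggested by the navigator
print(sheet.totals())
```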



Statistical significance is overrated - The whole point of running a lightweight experiment is to collect just enough data to help you make a better decision or validate your gut feeling. This technique is not meant for uncovering universal truths or proving something to the rest of the world. In exchange for keeping the experiment light, the results will only apply to your team. Over the course of an iteration or two, 4-6 weeks, you'll only get enough data to start seeing trends. In our case the results were not statistically significant under individual t-tests, but that didn't matter. The most important thing is that we had data that could be used for comparison, data that everyone felt good about and that helped us gain clarity into what we did and how well it worked.
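For anyone curious what such a check looks like, here's a sketch of the kind of two-sample t-test one might run with SciPy. The numbers are entirely made up for illustration, not our data; with only an iteration or two of tasks, samples this small rarely clear the conventional p < 0.05 bar, and that's fine.

```python
# A sketch of a t-test comparison using SciPy and entirely made-up
# effort numbers (hypothetical hours per use case point, per task).
from scipy import stats

paired_hours = [4.1, 3.8, 4.5, 3.9, 4.2]
solo_hours   = [4.6, 5.2, 3.7, 5.9, 4.8]

t_stat, p_value = stats.ttest_ind(paired_hours, solo_hours)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 doesn't mean the data is useless; it just means
# the trend isn't "significant" in the academic sense.
```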

Retrospectives get immediate value - The whole reason the experiment is light is to reduce cost and shorten the lag time to providing value to the team. Just to give you a little perspective: it took us six weeks to run the experiment, and we had enough data and casual observations to make a decision during the retrospective where the analyzed data was shared. That event occurred in early August of 2009. This experience report required almost nine full months of gestation from the paper proposal to the talk I gave at the conference. The gestation period on "universal truth" research can be even longer. We, as practitioners, don't have to wait for those universal truths to be born to get value from research. By running your own quick and dirty, lightweight experiments, you can get results in a timely fashion that you know will apply to your team, because your team was the subject of the experiment. It's all about closing the gap between research and practice and getting the information you need now instead of waiting for academic research to catch up.

Overall Conclusions


For the Square Root team it turned out that pair programming was faster, cheaper, and produced code with more predictable, albeit slightly worse, quality. The more important lesson is that we discovered a technique, lightweight experimentation, for learning other interesting things about our team and about software engineering in general.

My paper and this blog post were all about trying to describe the technique, using our experiment as an example. I think it would be awesome if teams around the world conducted lightweight experiments on a variety of topics. If enough folks share what they learn, we might start to see trends emerge across teams that could lead to universal truths, validate research, or at least discover some great rules of thumb.

What else might make for a great experiment? Anything you’ve got a question about on your team!
  • Which is the clearer way to write requirements: user stories or use cases?
  • Which estimation technique, X or Y, is more accurate?
  • Can we skip unit testing if we use inspection (looking at quality, knowledge sharing)?
  • Is UML a better design notation than the one we made up as a team?
  • What else...?
If you do a lightweight experiment, let me know! Share what you learn in a blog post or whitepaper so others can benefit. Even if the specific results only apply to your team and the way you've executed your project, your experiences help form a baseline, a sort of shared understanding of how software development and these practices actually work. And there's so much about software engineering that we have yet to learn.

Acknowledgements


This paper was my first experience report and it was an awesome journey. Naturally a lot of folks helped me along the way, and I would like to take a moment to make sure they know that I appreciate their influence and support. The Square Root team: Marco Len, Yi-Ru Liao, Abin Shahab, and especially my fellow experiment co-champion Sneader Sequeira, for having the guts to go along with this idea in the first place. Some of the faculty at Carnegie Mellon: Dave Root and John Robert (my studio mentors) for bringing up the idea of writing a paper, and Jonathan Aldrich for helping review my proposal. Artem Marchenko was my XP2010 paper shepherd after the proposal was accepted, and the quality of each draft only improved because of his input. A group of my fellow employees at Net Health Systems sat through an early draft of the presentation and shared valuable feedback for improving it. And finally I thank my wife, Marie, who was with me from start to finish and read more drafts and sat through more practice talks than anyone else. She's probably as much an expert on this subject by now as I am.

A Final Aside


I wrote the initial draft of this paper as my final reflection paper for my Master of Software Engineering degree (pdf). That draft has a very different tone, approach, conclusion, and direction than what I eventually published for XP2010. That's partly because there was no hard page limit and partly because I had a lot more time to think about what was really important when writing for XP2010. There's some additional information in the reflection paper, mostly in the lessons learned, that might prove interesting. You should check out my Square Root teammates' reflection papers as well; they are all interesting and well written.
