Tracking Bugs Better
Most software processes are light in two areas: quality assurance and process improvement. Most processes prescribe specific techniques for ensuring the production of quality code. XP, for example, advocates unit testing with TDD, continuous integration with smoke tests, pair programming, and acceptance tests written by the customer (using something like FIT so you can run automated regressions). Process improvement in XP is accomplished in an around-the-campfire, Kumbaya-singing, get-in-touch-with-your-feelings brainstorming session. Assumptions abound, and there is no systematic way of ensuring that either testing or process improvement is handled adequately. As we know, assumptions are never good enough.
The biggest assumption in XP (and indeed most software processes) is about bug tracking. Common sense dictates that you will create some kind of bug database. Hopefully it will at least be some kind of third-party bug tracker such as Bugzilla or Bug Genie. Excel will work in a pinch but quickly becomes unsuitable for teams larger than one developer. But how does the bug tracking actually work? What bugs get reported? Will you record issues from inspections in your bug tracker, or only "true" bugs? What is the process for fixing a bug? What is the process for closing a bug? Who has access to the bug tracker? What information is required in your bug database and what information is optional? How do you determine defect priorities? Or the severity of bugs?
The majority of software processes provide answers for almost none of these questions. You are largely on your own to make up whatever you think makes the most sense for your development environment based on the best practices for your software process and your understanding of "good" software quality assurance practices.
No matter what quality process you follow, you will need a defect control philosophy. Once again, in the absence of guidance I turn to the Team Software Process, one of the few processes to define what it means to track defects. In the Team Software Process, defects are treated as blight, a horrific mistake injected through the ineptitude of a developer. To remove these blights, the TSP relies on a series of filters in the form of code reviews, code inspections, unit tests, function tests, and so on. XP has a similar, though somewhat less rigorous, set of filters in the form of pair programming, TDD, continuous integration, and acceptance tests. Each filter is meant to remove more and more defects, until finally "all" defects are removed from the system by the time the software has passed through every filter. Generally each filter is intended to remove different types of defects, though a later filter can catch defects that escaped an earlier one.
Just as water passing through layers of sand and rock will remove debris, so too will code passing through layers of unit tests and inspections sift out injected defects.
Defect Data
With these ideas in mind, bug tracking has three basic goals:
- Record defects so they can be analyzed and fixed.
- Identify the means by which defects are injected.
- Identify the means by which defects are removed.
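To make these goals concrete, here is one way a single defect record could be structured if you kept the data yourself. This is a minimal sketch of my own, not part of the TSP or any particular bug tracker; the field names are illustrative, and the type and reason values come from the tables below.

```python
from dataclasses import dataclass

@dataclass
class DefectRecord:
    """One defect, covering all three goals: fix it, learn how it got in, learn how it got out."""
    summary: str          # what is wrong and what is needed to fix it
    defect_type: str      # how it was injected (one of the types in the first table below)
    reason: str           # why it was injected (one of the reasons in the second table below)
    detected_during: str  # which activity (filter) caught it, e.g. "unit testing"
    module: str = ""      # optional: where in the system it lives

# Example entry
bug = DefectRecord(
    summary="render() called with arguments in the wrong order",
    defect_type="Interface",
    reason="Oversight",
    detected_during="integration",
    module="reporting",
)
```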
Since many parts of a software process are dedicated to filtering out the defects we’ve injected, understanding how defects are injected is essential to preventing similar defects from being seen in the future. The idea is that we want to learn from our mistakes. To achieve this, record the type of defect and the reason it was injected. The TSP gives us a good starting point for each of these, shown in the tables below. You should feel free to modify the types and reasons so they make sense for you, your team, and your project.
The defect type characterizes what kind of defect is injected and captures the essence of what is needed to fix the defect.
Defect Type | Description |
---|---|
Documentation | Problems with documentation, documents, comments, or messages |
Syntax/Static | This is usually a compile error. These days it is most applicable to dynamically interpreted languages such as JavaScript or Python, since in compiled languages the compiler catches these essentially for free. |
Build/Package | Errors due to incompatible versions or problems with packages (e.g. Java). |
Assignment | Incorrectly assigning a variable or method, for example an incorrect expression or object assignment, calling the wrong method, or missing an assignment or method call. |
Interface | These are design problems, for example class interface issues or function parameter issues (e.g. order, type, or missing parameters). |
Checking | Problems arising from incorrectly handling errors. For example, an if-statement or loop invariant does not work as expected. |
Data | Defects involving data representations within the software. |
Function | Algorithmic or functional defects, usually involves more than a few lines of code. |
System | Issues that result from outside the software, for example hardware timing issues or network problems. |
Environment | Problems with the development environment, such as compilers, frameworks, or support systems. |
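If you want the type field constrained to this list rather than left as free text, it maps naturally to an enumeration. A minimal sketch in Python, using the names from the table above:

```python
from enum import Enum

class DefectType(Enum):
    DOCUMENTATION = "Documentation"
    SYNTAX_STATIC = "Syntax/Static"
    BUILD_PACKAGE = "Build/Package"
    ASSIGNMENT = "Assignment"
    INTERFACE = "Interface"
    CHECKING = "Checking"
    DATA = "Data"
    FUNCTION = "Function"
    SYSTEM = "System"
    ENVIRONMENT = "Environment"
```

Add or remove entries as the types evolve for your team; the value is that everyone picks from the same short menu.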
The defect's reason characterizes why the defect was injected.
Reason | Description |
---|---|
Education | You didn’t really know how to accomplish something. |
Communication | You were misinformed through either documentation or personal communications. |
Oversight | You forgot to do something that you knew needed to be done. |
Transcription | You understood what to do but you simply made a mistake. (The Personal Software Process advocates writing code down, reviewing it, and transcribing it to the computer before compiling. This is a bit of a throwback and I'm not sure it really makes sense these days. You might interpret this more loosely as problems in translating architecture or design into implementation.) |
Process | The process you are using led you astray by encouraging you to make a mistake. |
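The reasons work the same way. Again just a sketch, with a comment per entry summarizing the table above:

```python
from enum import Enum

class DefectReason(Enum):
    EDUCATION = "Education"          # didn't know how to do it
    COMMUNICATION = "Communication"  # was misinformed
    OVERSIGHT = "Oversight"          # forgot something that needed doing
    TRANSCRIPTION = "Transcription"  # knew what to do, simply made a mistake
    PROCESS = "Process"              # the process encouraged the mistake
```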
Assigning the reason can be a little tricky. I have found it is best if the person who injected the bug assigns the reason. On small teams (or if you’re following the PSP), this is generally pretty easy. It’s not about rubbing their nose in the problem - well, actually it is. OK, it’s not about embarrassing or punishing the person but about creating an opportunity to learn from our mistakes. If you understand why a defect was injected, it’s possible to prevent a similar defect from occurring again. For example, if there seems to be a rash of education-related defects in a particular module, perhaps some training is in order.
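Spotting a rash like that is a simple grouping exercise over your defect records. A minimal sketch, assuming each defect is a dict with "module" and "reason" fields (both names are mine, not from any particular tracker):

```python
from collections import Counter

def reason_hotspots(defects, threshold=3):
    """Count defects per (module, reason) pair and flag any pair at or above the threshold."""
    counts = Counter((d["module"], d["reason"]) for d in defects)
    return [(module, reason, n) for (module, reason), n in counts.items() if n >= threshold]

# e.g. a result of [("billing", "Education", 5)] suggests the billing module
# could use some training time
```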
Ideally we’d also like to know when the defect was injected, as in at what phase of development. It is possible to capture this information with additional analysis, but I’ve found the return to be rather small. Basically, you’ll learn what we’ve known all along: the longer a defect is in the system, the harder it is to get out, and the most expensive defects are injected during the earlier phases of development (e.g. design defects are costly). Rather than track when defects were injected, I think it makes more sense to track when they are detected. The point is to gain an understanding of how well the quality process filters out defects. To accomplish this, simply write down what you were doing when you found the bug. If you’re using XP, the list might include designing, writing new code (in a pair), writing new code (alone), refactoring, unit testing, integration, and acceptance testing. With this information you should be able to determine how effectively each activity filters defects and, over time, whether the quality process is having issues. For example, I would expect interface defects to be detected during integration. If they are being detected earlier, say during unit testing, or later, say during acceptance testing, then my continuous integration and smoke test suite might not be as robust as it should be.
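The same records support that filter check: cross-tabulate defect type against the activity that detected it and see whether, say, Interface defects really are surfacing at integration. A sketch under the same assumed record format as above:

```python
from collections import Counter

def detection_matrix(defects):
    """Tally defects by (type, detecting activity) so you can see which filter catches what."""
    return Counter((d["type"], d["detected_during"]) for d in defects)

def where_detected(defects, defect_type):
    """For one defect type, show how its detections are spread across activities."""
    return {activity: n
            for (t, activity), n in detection_matrix(defects).items()
            if t == defect_type}

# If where_detected(defects, "Interface") shows most hits under "acceptance testing"
# rather than "integration", the continuous integration and smoke tests may be leaking.
```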
Better Bug Tracking
The strategies I’ve outlined here are a little more sophisticated than your average bug tracker, but they add a lot of punch for very little effort. Tracking defect type, reason injected, and phase detected allows you to get a better handle not only on how defects are being injected into the software, but also on how they are being detected. Both chunks of information are necessary for understanding how defects are making their way into the system and how your process is helping you ferret them out.