Choosing a Software Design Strategy
I was reading an article from the Joel on Software archives and was struck by this quote from The Project Aardvark Spec:
"I can't tell you how strongly I believe in Big Design Up Front, which the proponents of Extreme Programming consider anathema. I have consistently saved time and made better products by using BDUF and I'm proud to use it, no matter what the XP fanatics claim. They're just wrong on this point and I can't be any clearer than that."

When thinking about design there are two extremes, pivoting around how much up-front design work needs to be done before you start writing code. Proponents of up-front design point out, as Joel does in the quote above, that making changes in the design is a lot cheaper in a 20-page design document than in a partially or fully implemented system. With a solid plan you stand a better chance of not painting yourself into a corner and can avoid having to make expensive design changes in the code later. Opponents of BDUF worry greatly about analysis paralysis, which can severely delay a team's ability to provide working software to the client. Codified in this thinking are the assumptions that clients don't really know what they want until they see it and that only by getting working software into the hands of users can we understand whether a proposed solution is good enough. To these folks, writing a design document is wasted effort because the clients don't yet know what they want and because they believe to their core that things are going to change anyway.
Both of these perspectives are correct. Both of these perspectives are also wrong. Since everyone loses when thinking in extremes and ultimatums, let's figure out what is really going on.
Choosing a Software Design Approach: Theory
Building software falls into an interesting class of problems known as wicked problems. This basically means that there are many possible solutions and that no solution is really "right" or "wrong," only "better" or "worse" given your current understanding of the problem. Further, each problem is essentially unique, there isn't an explicit stopping rule, and as the designer you will be held liable for the consequences of your solution.
With this in mind, when thinking about software design there are two variables that will determine how you go about designing a solution:
- How well do you understand the problem domain?
- How well do you understand the solution space?
The solution space is a multi-dimensional landscape filled with nearly limitless possibilities. As the designer, it is your job to figure out how to navigate this space in search of a solution to a problem. Your current understanding of the problem limits your ability to see solutions, and often the fitness of a solution can only be understood in reference to another possible solution. I imagine the solution space as a mountain range, and my job as the designer is to find the "best" mountain for my client. Of course, some mountains might block my view of others, which means I may have to climb a peak just to see a better solution. This implies that my understanding of the problem dictates my understanding of potential solutions, and that my understanding of the solution will yield further insights into the problem. This is one reason why it is often helpful to get working software in front of users quickly.
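To make the mountain-range idea a little more concrete, here is a toy hill-climbing sketch in Python (entirely my own illustration; the landscape and the numbers are invented). A greedy search that can only see its immediate neighborhood happily summits the first peak it finds and never learns that a taller one exists:

    # Toy solution landscape: two "mountains", one taller than the other.
    def height(x):
        # Peak of 10 at x=3, peak of 25 at x=40 -- purely made-up numbers.
        return max(10 - abs(x - 3), 25 - abs(x - 40))

    def climb(x):
        """Greedy hill climbing: move only to a neighboring point that looks
        better from where we stand; we never see past the current slope."""
        while True:
            best = max((x - 1, x, x + 1), key=height)
            if best == x:
                return x
            x = best

    print(climb(0))   # -> 3: stuck on the small peak, blind to the big one
    print(climb(30))  # -> 40: the "better" mountain, visible only from over here

Prototypes, experiments, and early releases are how we move our starting point around the landscape so that the taller peaks come into view.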
When designing something we have a natural tendency to prefer making rational design decisions where, after careful examination of all available information, we choose the best or optimal solution. Making a rational choice requires that we know not only everything about the problem domain but also everything about every possible solution to the problem. Both business constraints (limits on time or money) and the shortcomings of the human brain limit our ability to seek this knowledge, creating a boundary within which a solution must be found. And it is in this world of bounded rationality that we must find a solution to our customer's problem. Searching for a solution, then, is not so much a matter of optimization (finding the "absolute best") as it is a matter of satisficing: finding the best solution with the information currently available. To complicate matters further, an individual's understanding of the problem domain and solution space is relative to that individual's experience and capabilities, the size of the problem, and the complexity of the solution.
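Here is the same point in miniature, again as a hypothetical Python sketch (the candidate designs, the fitness function, and the "good enough" bar are all stand-ins; in real life every evaluation is a prototype, a review, or an analysis that costs time and money):

    import itertools

    def fitness(design):
        """Stand-in for an expensive evaluation of a candidate design."""
        return sum(design)

    def optimize(candidates):
        """Rational optimization: evaluate every candidate and pick the best.
        Only feasible when the whole space is small and fully knowable."""
        return max(candidates, key=fitness)

    def satisfice(candidates, good_enough):
        """Satisficing: take the first candidate that clears the bar, given
        the information (and budget) available right now."""
        for design in candidates:
            if fitness(design) >= good_enough:
                return design
        return None  # nothing cleared the bar; lower it or go learn more

    # A small, fully enumerable slice of a "design space".
    designs = list(itertools.product(range(3), repeat=4))

    print(optimize(designs))                  # (2, 2, 2, 2), after evaluating all 81 candidates
    print(satisfice(designs, good_enough=6))  # (0, 2, 2, 2), the first acceptable one found

Bounded rationality says that for real projects the space is never this small, so the exhaustive optimize is off the table and some form of satisfice is what we actually do, whether we admit it or not.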
When designing software it's natural to want to move as far up the knowledge curve as possible, to get as close as we can to making a rational, optimal design. This tendency stems from our instinctual preference to avoid loss [1]. In other words, designers assume that an optimal solution exists and so are predisposed to reject less-than-optimal solutions.
Choosing a Software Design Approach: Application
In the software design world there are four basic types of design strategies:
- Planned Design: All design is completed before beginning implementation. Often referred to as Big Design Up Front by detractors and associated with waterfall lifecycles. The essential assumption is that you can fully design a system before beginning construction.
- Minimally Planned Design: Some design is completed before beginning implementation. Sometimes referred to as Little Design Up Front. In using this strategy you acknowledge that some change is likely to occur but still want to avoid painting yourself into the obvious corners.
- Evolutionary Design: The design of the system grows as the system is implemented, but growth is deliberate and controlled by experienced engineers and proven engineering practices such as reference designs and refactoring. At least some requirements are usually specified at the beginning, but the specification is not expected to be exhaustive.
- Emergent Design: The design of the system is allowed to occur organically as the system is implemented without specific intentions.
The design strategies appropriate for your team to choose are determined by the amount of risk remaining in the bounded knowledge graph. The less risk there is, the less change you anticipate (or at least the more confident you are that you can deal with change reasonably) and the more options you have in choosing a design strategy. Choosing a design strategy is then a matter of deciding the degree to which your team would like to plan, within the strategies determined to be appropriate. The amount of planning will depend on your team's preferences and experience, or can be driven by the customer's business needs. From a theory perspective, the amount of planning appropriate to the project is directly proportional to how much you know about the problem domain and the solution space. In other words, the less you know about a problem and the fuzzier the solution seems to you, the less capable you will be of making planned designs. As designers, we run into problems in our designs when we are wrong about what we think we know about the problem domain or solution space. Unfortunately, as eternal optimists, software engineers get these things wrong often.
Don't feel comfortable with the design strategies appropriate for your project? Remember that the appropriateness of a design strategy is based on your current understanding of the solution space and problem domain. Therefore, to make more design strategies available to your team you must learn more about the solution space and problem domain. I find that examining the project's risks is the best way to determine what you think you know about the problem domain and solution space. Understanding the engineering risks your project faces also serves as a blueprint for moving along the knowledge curve. For example, if your team feels, based on the identified and prioritized risks, that they don't know enough about the problem, you can increase your knowledge about the problem domain. By the same token, if the team feels there is a lot of risk concerning the solution space, you can explore different designs, run experiments, or create prototypes for your users to try. An alternative to looking at risk is to use a design process such as the Architecture Centric Design Methodology (ACDM). ACDM is a staged design process that encourages teams to explore the design space through experiments, addressing issues in proposed designs. ACDM strongly focuses on quality attributes and will nudge your team toward a planned design strategy, but the process can be adjusted to use a different design strategy, if that is something your team strongly desires, by adjusting the go/no-go criteria between stages.
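When I say "identified and prioritized risks" I mean something as lightweight as the sketch below (the risks and numbers are hypothetical; ranking by probability times impact is a common convention, not something ACDM prescribes):

    # A minimal risk list: what might hurt us, whether it lives in the problem
    # domain or the solution space, and what we would do to learn more.
    risks = [
        {"risk": "Unclear reporting requirements", "area": "problem domain",
         "probability": 0.7, "impact": 8, "next_step": "interview the users"},
        {"risk": "Unproven sync protocol", "area": "solution space",
         "probability": 0.4, "impact": 9, "next_step": "build a throwaway prototype"},
        {"risk": "Team is new to the web framework", "area": "solution space",
         "probability": 0.5, "impact": 4, "next_step": "run a short experiment"},
    ]

    # Highest exposure first; working the top of the list is how you move up
    # the knowledge curve.
    for r in sorted(risks, key=lambda r: r["probability"] * r["impact"], reverse=True):
        exposure = r["probability"] * r["impact"]
        print(f'{exposure:4.1f}  [{r["area"]}]  {r["risk"]}  ->  {r["next_step"]}')

Working a list like this, rather than simply designing harder, is what actually buys the knowledge that makes more planned strategies viable.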
Remember too that you learn more about the problem and the solution as you implement the system. So by the end of the project there will be no risk left: you'll know everything about the system that you can, and you'll have all the information necessary for a great (and practically useless) planned design.
The Project Aardvark Design Approach
Let's apply this information to Project Aardvark and figure out why Joel is such a strong supporter of BDUF at Fog Creek Software. Project Aardvark, like most other software systems developed at Fog Creek, is a developer tool. This means that the folks developing the system are also going to be at least one of the groups of end users. We can assume, based on Fog Creek's hiring strategies, that the developers are pretty smart. Further, Aardvark falls into a class of problems that has been well studied, meaning there are examples available for study, and some folks at Fog Creek may even have implemented similar (but not identical) systems or subsystems.
The implication is that the Project Aardvark problem domain is well understood and the solution space doesn't need much exploration, because Joel doesn't perceive much risk in the solution. Since Joel has a firm grasp on the problem domain and feels confident about the solution space, he doesn't anticipate much change in the design throughout development. Based on the information Joel has available about the project, relative to his knowledge of and experience with the problem domain and solution space, a planned design strategy is appropriate for this project. Joel could use other approaches further down the application curve too, but since he has relevant experience with planned design, the design probably won't be that big (in fact it was less than 20 pages), and the Fog Creek team is used to following planned designs, a planned strategy is probably a good fit for Project Aardvark.
One last note. Bounded rationality implies that the planned approach to design may not always be possible. There is a limit, based on the size of the problem domain, the solution space, and the business constraints (especially the time and resources available), that may prohibit you from effectively using Big Design Up Front. In other words, it may not always be possible to get high enough on the knowledge curve for a planned approach to make sense. Opponents of BDUF will tell you that most software falls into this category. In the case of Project Aardvark, the system was relatively small and Joel ensured that he had plenty of resources and time for exploring the solution space. It is critically important to understand where the boundaries lie that might prevent you from designing rationally. Since we prefer rational optimization over making decisions with less information, failing to recognize these boundaries will result in analysis paralysis.
By understanding some of the theory behind design decision making, it's easier to know where in the spectrum of possible design strategies your project belongs. Examining a project's engineering risks is a relatively simple and repeatable way to determine where you are on the knowledge curve for a project. From there it's possible to move your team up the knowledge curve, as necessary, to reach a point where you feel comfortable with the risks your project faces. You move along this curve by addressing the project's risks through prototyping, research, experiments, evaluating proposed designs, and interacting with the customer. You could also use a specific design process such as ACDM. It is important to recognize that by moving up the knowledge curve you are necessarily moving your team closer to being able to use a planned design strategy. Your location on the knowledge curve will determine which design strategies you can use, but you still need to decide which makes the most sense for your team and the project. Finally, these curves are relative, and the scale is going to change from project to project and team to team. I think the principles behind the curves are more important than the curves themselves, and you shouldn't take the curves literally as mathematical functions.
Further Reading
- The Design of Design by Gordon Glegg - Primarily focuses on traditional design and manufacturing, but the lessons are universal. A brief but excellent read.
- The Sciences of the Artificial by Herbert Simon - Explains the concepts of bounded rationality, satisficing, and using utility functions for design optimization. This is a very heady book but super interesting. I recommend discussing the concepts with someone knowledgeable on the subject while reading the book. I had to read some chapters of this book multiple times before I even started to understand the implications of some of the concepts; Herbert Simon was just that ridiculously smart.
- The Design of Design: Essays from a Computer Scientist by Fred Brooks - See my review for more information on this great book.
- Just Enough Software Architecture: A Risk Driven Approach by George Fairbanks - Forthcoming book which presents the Risk Driven Design approach from a software architecture perspective. Includes an excellent overview of software architecture concepts. A must read for any software developer.
- Architecting Software Intensive Systems: A Practitioner's Guide by Anthony Lattanze - Presents the Architecture Centric Design Methodology (ACDM) and lays a strong foundation for the concepts and principles necessary for planned software design through software architecture.
End Notes
[1] This also explains why humans behave the way we do in free market economies, gambling, and other similar situations. I am attempting to apply information I learned watching this TED Talk on monkey economics and irrational decision making by Laurie Santos.