
Agile Planning Principles

In this section, we cover some basic principles of agile planning and describe how these principles are applied to ICONIX Process. These principles are summed up in a top 10 list at the end of this chapter.

Three Types of Plan

When you’re planning an Agile ICONIX project, three levels of planning and estimating suggest themselves. The first of these, the initial scoping plan, is basically about getting an overall feel of where your project is going (this is easy when undertaking a car journey, but it’s somewhat more difficult when developing software). In an ICONIX context, it involves listing requirements, identifying—but not elaborating on—potential use cases, and coming up with some broad estimates for these (see Chapter 5).

The second level of planning, the release plan, is about prioritizing and scoping. Following the principle the customer dictates priority, the customer works with the project team to determine what will be implemented and in what order, allocating use cases to releases according to business priority and available time and effort. The mapplet project release plan appears later in this chapter.
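The allocation just described can be pictured as a simple packing exercise: sort use cases by the customer's business priority, then fill each release up to the team's available effort. The sketch below is purely illustrative—the `UseCase` fields, the `plan_releases` helper, and the capacity figure are our own assumptions, not part of ICONIX Process.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    priority: int    # lower number = higher business priority (set by the customer)
    estimate: float  # broad estimate in ideal days, from the initial scoping plan

def plan_releases(use_cases, capacity_per_release):
    """Allocate use cases to releases in customer-priority order,
    packing each release up to the team's available effort."""
    releases, current, remaining = [], [], capacity_per_release
    for uc in sorted(use_cases, key=lambda u: u.priority):
        if uc.estimate > remaining and current:
            releases.append(current)          # this release is full; start the next
            current, remaining = [], capacity_per_release
        current.append(uc)
        remaining -= uc.estimate
    if current:
        releases.append(current)
    return releases
```

In practice the allocation is negotiated, not computed—the customer may well split or defer a use case rather than let a mechanical rule decide—but the sketch captures the inputs the release plan depends on: priorities, estimates, and capacity.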

At this point in the planning process, it’s important to realize that many projects are doomed to failure by the imposition of completely unrealistic timescales and deadlines. There is always pressure to get software written quickly—that’s life—but many projects are thrown into an immediate state of panic from which they never recover. When the pressure mounts, and the deadline is fixed, apply the planning principle negotiate with scope. We’ll discuss issues surrounding the ideal duration of a release, and the degree to which we should undertake up-front analysis and design, later in this chapter.

The third and final level of planning is the detailed release plan. This is a detailed plan for the next release, and it will contain a list of tasks (based on how to implement the chosen set of use cases) and who is responsible for them. Adopting the principle the plan follows reality, later detailed release plans will take into account feedback on estimates, feedback from the customer, potential design improvements, new requirements, and bug fixes. Our colleague Chuck Suscheck, who has been involved in a number of multiple-iteration agile projects, offers some useful guidelines on how to plan releases in detail in the upcoming sidebar titled “Elastic Functionality in Iterations.”

Feedback Is Vital

Feedback is vital in mitigating the many potential risks that software developments face: risks of misunderstandings in functional requirements, risks of an architecture being fundamentally flawed, risks of an unacceptable UI, risks of analysis and design models being wrong, risks that the team doesn’t understand the chosen technology, risks that demanding nonfunctional requirements aren’t met, risks the system won’t integrate properly, and so on.

To reduce risk, we must get feedback—and as early as possible. The way we get feedback is to create a working version of the system at regular intervals—per release in terms of the earlier planning discussion. By doing this, we ensure we know what risks the project is really facing and are able to take appropriate mitigating action. At the same time, incremental development forces the team through the full project life cycle at regular intervals, thus offering the team the opportunity to learn from its mistakes before it’s too late.

Three Types of Release

At this point, you may be beginning to wonder about the overheads associated with all these software releases. It’s important to understand that not all releases are full production releases. Following the principle of three types of release, a release increment may culminate in

  • An internal release, seen only by the development team, which is used to mitigate team- and technology-related risks

  • A customer-visible investigative release (a prototype in some circumstances) given to the customer to play with, which is used to mitigate usability- and functionality-related risks

  • A customer-visible production release, which is intended to be used “in anger”

While a production release may have higher overheads associated with it (full system and acceptance testing, production of user manuals and user training, etc.), the overheads associated with other types of release can be kept to a minimum. It’s not that there isn’t some cost involved—as you’ll see later—it’s just that we consider this cost to be acceptable given the benefits.

Rotting Software

More traditional (nonagile) software development life cycles generally end with a large and somewhat undefined phase called “maintenance” (which developers often strive to avoid like the plague). Maintenance projects tend to focus on two types of work: fixing bugs and implementing new requirements. These bear more than a passing resemblance to some of the activities identified during detailed release planning.

One of the major problems associated with software maintenance is design decay. Software design is fundamentally an optimization problem based on a defined set of functional and nonfunctional requirements. During design, we try to come up with the software structure that best implements these, making a multitude of trade-offs along the way. The “best” design solution is dependent on the requirements we’re trying to implement. If the requirements change, the best solution also changes. During maintenance, slowly but surely, software rot often starts to set in, until eventually the software ends up so brittle that it becomes impossible to change without major additional costs being incurred.

Coming back to Agile ICONIX development, new requirements may have to be dealt with on a per-release basis. These may have been identified in the initial scoping plan, but deferred for later consideration due to their instability or low business priority, or perhaps they’ve only just been thought of. By adopting an iterative approach, we’re running the risk of software rot setting in at a far earlier stage, unless we adopt some practices to stop this from happening.

We stop software rot by following the principle plan to refactor when necessary. Refactoring is a development technique in which the structure of a software system is changed without changing its functionality.[9.] We use refactoring to minimize software rot as new requirements come up, following these steps:

  1. Get a good understanding of the new requirement and the existing design.

  2. Analyze the design and work out what changes need to be made to accommodate both the old and the new requirements.

  3. Without adding the new functionality, implement the design changes and undertake (hopefully automated) regression testing to make sure the system hasn’t been inadvertently broken.

  4. Implement and test the new requirement.
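Steps 3 and 4 are worth seeing in miniature. In this sketch (an invented billing example, not from the book), the design change—extracting the hard-wired discount rule—is made first and regression-checked against the old behavior; only then is the new requirement added as a second rule.

```python
# Before: the discount logic is hard-wired into the billing function.
def invoice_total_v1(amount):
    if amount > 100:
        return amount * 0.9   # flat 10% bulk discount
    return amount

# Step 3: restructure WITHOUT adding new functionality -- extract the
# discount rule so that further rules can be plugged in later.
def bulk_discount(amount):
    return amount * 0.9 if amount > 100 else amount

def invoice_total(amount, discount_rule=bulk_discount):
    return discount_rule(amount)

# Regression test: the refactored code must reproduce the old behavior.
for amount in (50, 100, 150):
    assert invoice_total(amount) == invoice_total_v1(amount)

# Step 4: only now implement (and test) the new requirement as a new rule.
def loyalty_discount(amount):
    return amount * 0.85
```

The key point is the ordering: because the structural change is verified in isolation, any regression failure can only be caused by the refactoring itself, not by the new feature.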

Things get a little more hairy when populated databases are involved, and while a detailed discussion of these issues is beyond the scope of this chapter, this web page contains some help on the matter: www.agiledata.org/essays/databaseRefactoring.html.

No Free Lunches

Although this topic isn’t discussed much by the more vociferous proponents of agile software development, it’s perhaps apparent from our discussion of the need for refactoring that agile development comes with some costs of its own.

Some years ago, Barry Boehm published a famous paper in which he demonstrated that the cost of software change increased by an order of magnitude during each stage of the development life cycle (requirements, analysis, design, integration, testing, and live implementation). Proponents of methodologies such as XP claim that this “cost curve” has been flattened over the years by improved software development techniques. Although it is true to some degree that good design practices can and should be used to reduce the cost of change, there is very clearly a cost associated with refactoring: if we’d designed in the functionality up front, we wouldn’t have to refactor our code at all!

Having said that, some refactoring is almost always necessary. Even if you did all analysis and design up front, you’d still need to refactor as the inevitable change requests appeared, and refactoring as a technique clearly mitigates the risk of accepting change—something we try to do during Agile ICONIX developments. So we’re faced with a trade-off expressed in the planning principle trade-off the costs and benefits of incremental development.

All Changes Are Not Equal

Following on from this, it’s all the more important to understand that some changes will have a high cost associated with them. These changes most likely relate to implementing nonfunctional requirements late in the day, such as the need for concurrent multiuser access, security, auditing, and so forth.

The costs associated with such changes are high because they cut right across the software, unlike pure functional changes that are likely to be localized. Changing a flat-file, single-user system into a multiuser relational database system is no trivial task, and doing so will affect the majority of components in the system. The more of a system that has been written, the greater the cost of these changes will be, which leads us to our next principle: consider high-impact design decisions during early iterations. Note that as the customer dictates priority, you will likely need to discuss trade-offs with him. And remember, try to get it right the first time. Don’t just assume that because you can refactor later you don’t have to try to get it right early in the project.
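One common way to act on this principle is to make the high-impact decision early by placing an explicit boundary around it, so that a later swap (flat file to multiuser database, say) stays localized. The sketch below is a hypothetical illustration of that idea—the `OrderStore` interface and `FlatFileStore` names are ours, not part of ICONIX Process.

```python
from abc import ABC, abstractmethod

class OrderStore(ABC):
    """Storage boundary decided in an early iteration: the rest of the
    system depends only on this interface, so replacing the flat-file
    implementation with a multiuser database is a localized change."""
    @abstractmethod
    def save(self, order_id: str, data: dict) -> None: ...
    @abstractmethod
    def load(self, order_id: str) -> dict: ...

class FlatFileStore(OrderStore):
    """Single-user implementation; an in-memory dict stands in for a file."""
    def __init__(self):
        self._records = {}
    def save(self, order_id, data):
        self._records[order_id] = dict(data)
    def load(self, order_id):
        return dict(self._records[order_id])

# A later DatabaseStore(OrderStore) can replace FlatFileStore without
# touching the components that program against the interface.
```

Deciding on the boundary early doesn’t mean building the database early; it means the cross-cutting change, if it comes, lands behind one seam instead of across the majority of components.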

Does Size Matter?

During the earlier discussion, we mentioned that the ideal release size is still the subject of some debate. Agile proponents suggest release durations of a couple of weeks to a couple of months, with a preference for the shorter timescale. Closely related to this are the issues of just how much design we should do up-front and how to balance the cost and benefits of incremental development.

To blur the picture further, we could undertake, say, a month of pure up-front analysis and design while simultaneously doing some architectural investigation, and deliver our working software in two weekly releases thereafter. But the questions still remain: just how much up-front design should we do, and how long should our release increments be?

There are, unfortunately, no simple, formulaic answers to these questions. We have to consider the particular risks a project faces, the stability and business importance of a given set of requirements, external constraints like the ideal time to market, and so on.

However, pulling together some of the issues discussed in this chapter, we recommend the following process:

  1. Undertake a project risk analysis, and ensure that early releases are targeted at getting feedback on these risks, noting that these releases are likely to culminate in either internal or investigative releases of the system. Determine which risks you really need to mitigate using incremental delivery and which you’re prepared to “take a punt” on.

  2. Ensure you have a good understanding of the high-impact decisions your project faces, and try to schedule work on these aspects of the system during the early releases.

  3. Consider, with your customer, which requirements are the most stable and business critical, and try to schedule the first production release to accommodate these. Try to avoid dealing with unstable requirements early on.

  4. Try, in parallel with the activities in steps 1 and 2, to do enough up-front analysis and design of stable requirements to reduce the cost of refactoring to a minimum. If this starts to look like it’s going to take too long (say, over a month), schedule a number of analysis and design activities as the project progresses.

  5. Try to keep the time to each production release to a minimum, and if possible undertake a production release at least every 3 months.

  6. Try to keep release duration as short as possible (2–6 weeks), but not so short as to deliver nothing of concrete value. (Note that not every release increment has to culminate in a production release.) Where possible, keep consistent release duration to establish a project rhythm.

So on a low-technology-risk, single-user Windows application with about ten use cases, two experienced developers, and a customer who is bright on ideas but bad on detail, we might do a week or so of up-front analysis and design, followed by 2-week releases each delivering, say, two or three use cases’ worth of functionality for the customer to review, with a full production release every couple of months.

On a higher-technology-risk, large project, with a dedicated and capable customer, a stable set of requirements, and a team of bright but inexperienced developers, we might undertake two technology verification releases of about 2 weeks. In these releases, we’d also round-trip the whole development process, delivering some functionality at the end of each to make sure the team understands just how its analysis feeds into the design and then into the code. As we became confident that the team members understood this, we could undertake some larger chunks (say 1 month) of analysis and design to reduce the need for refactoring, then deliver the system in functional chunks every month, targeting a production release for every 3 months.

Tip 

Always finish a release increment with a formal review to see what was implemented and how the last increment could have been improved. The postmortem aspect of projects is often missed because of time constraints, but without a review (even a short one), it is awfully hard to improve your process.

[9.]Martin Fowler, Refactoring: Improving the Design of Existing Code (New York: Addison-Wesley, 1999).
