Agile Software Development Team Empowerment: A Holistic View

Over the past six decades, software development has evolved from what was thought to be an engineering discipline complete with rigorous methods and processes into much more of a crowd-sourced social engagement involving skilled craftsmen and iterative collaboration. Executives and managers have had to adjust their thinking from a “command and control” style, where every aspect of software development was tightly timed and controlled, to a “servant leader” style, where the goal is to empower the team to make decisions and to remove obstacles to their success. I have written previously about these “softer” aspects of software development. In this article, I will pull the previous concepts together into a holistic view of Agile software development team empowerment. My intention is to create a framework that can lead to more productive teams that produce high-quality code stemming from business focus, trust, and collaboration.

Before I begin, there are a few assumptions I make as precursors for any software shop, as follows:

Assumptions

Good Talent. Any high-performing team needs to be staffed with talented people. Most well-functioning Agile teams will police their own ranks and either help to raise the performance of mediocre members or identify those that need extra help and support. This self-policing comes from a strong sense of team and focus on the development goals. If you don’t hire the best talent and then allow Agile team dynamics to work their magic, you may be forced to resort to command and control means to reach your goals. Resorting to command and control methods with Agile teams usually produces the undesirable outcomes of demoralized teams and high turnover.

Best Intentions. Most software development staff, given half the chance, will strive to understand and align with the business goals. In cases where they don’t, the cause can usually be traced to a failure of the culture to support employees’ line of sight to the business. We must assume from the start that software development staff, management, and executives all work from a common set of good intentions. It is when we attribute mistakes to bad intentions that software projects spiral out of control in a frenzy of finger-pointing.

Supporting SDLC and Tooling. Agile software development is adaptable to many industry scenarios. Teams need an established software development lifecycle which sets expectations for how Agile is applied and governed; Agile works equally well at any point along the spectrum from loosely to highly regulated industries. Teams also need appropriate tooling to support their use of Agile, and such tools are abundant. Open source shops might use tools from Atlassian or HP, while Microsoft shops might use Team Foundation Server. The tools are mostly fungible across target technology stacks, but it is important to use a toolset that enforces SDLC quality gates and automates manual tasks and tracking.

With assumptions out of the way, let’s drill down on the steps toward the empowerment of software development teams.

Organizing the Teams for Success by Keeping Tensions Healthy

In the three-part series, “Designing an Accountable Software Development Organization,” I discuss the principles of organizing a software development shop for success. The very nature of software development includes the tension between project requirements, the ability to translate those requirements into high-quality working code, and the governance to ensure that it’s done in an orderly and timely fashion. It is impossible to avoid these tensions, but it is possible to design an organization that acknowledges and harnesses healthy tension while preventing it from becoming destructive. Put another way, the tensions between requirements, technical delivery, and governance are necessary to producing great software. It is entirely up to managers and executives whether those tensions remain healthy or turn toxic. Software developers will flock to healthy shops and flee toxic ones.

Supporting Decision-making by Maintaining Line-of-sight to the Business

All too often, the business keeps its software development staff in the dark about key business strategies, goals, successes, and failures. At the same time, we expect our software development staff to make key decisions about software requirements and their implementation to drive business success. To use an old 1950s B science fiction movie quip, “This does not compute.” In the article, “The Technology Executive and the Software Craftsman,” I discuss treating software development staff less as vendors who simply produce a product and more as partners who share a common desire for favorable business outcomes. The goal is to create a line-of-sight from the business to software development to enable better decision-making at the closest point of impact. This turns software staff from simple doers into thinker/doers who constantly weigh the needs of the business in their decision-making process. This idea is best summarized by L. David Marquet in Greatness. Creating line-of-sight and empowering decision-making at the closest point of impact will not merely provide incremental gains in team effectiveness; it will increase effectiveness multi-fold.

Aligning Intentions and Setting Expectations Between Management and Software Development Teams

In addition to the disconnect between business goals and software development, there can also be a disconnect between the senior executive team and software development staff. Absent that connection, software developers sometimes fail to understand how and why executive decisions are made. In the article, “The Technologist’s Guide to the C-Suite,” I discuss the various roles at the executive table as well as what each role is concerned about and listens for. My intention is to foster an understanding on the part of the software development staff so that they will be more effective at synthesizing requirements that come from senior executives and evangelizing solutions back to them. The end result is better collaboration that ultimately engenders trust.

Identifying Risks, Removing Obstacles, and Continuously Improving

Once we have a well-organized and well-informed software development team, we move on to the nitty-gritty of developing software. As mentioned in the introduction, software development management is transitioning from a traditional “command and control” style of tight control to a “servant leadership” style of team empowerment and removal of obstacles. With a well-functioning Agile software development team, the obstacles to success frequently originate outside the software development effort itself and manifest as inefficiencies in the development of the software. The obstacles may be related to requirements instability, churn on architectural design, unmanaged tensions between groups, or any number of other issues. In the article, “Better, Faster, Cheaper: Picking Three,” I discuss a model for managing risks to the timely delivery of software in Agile projects. The purpose of the delivery risk model is to identify obstacles to team efficiency as early as possible and then work to mitigate those risks. When the delivery risk model is applied judiciously, the end result is higher quality code, more timely delivery, and less time spent fixing defects later, i.e., better, faster, and cheaper. The delivery risk model is an essential part of any servant leader’s toolkit. Without it, risks accumulate and snowball, precluding the opportunity to head off problems while they are manageable. Further, the delivery risk model is a gateway to continuous improvement efforts where teams learn from and correct mistakes rather than get punished for making them.

Before concluding, there are some management caveats, as follows:

Management Traps

Sprint Micromanagement. Agile software development is by definition a highly iterative and self-correcting process. This works when teams are permitted to make mistakes, be honest and transparent with themselves and their management about issues, and work to correct those mistakes. A “command and control” mindset often drives managers and executives to micromanage Agile teams at the sprint level by holding teams strictly to burn down goals and punishing them when they do not meet those goals. This is a grave mistake since it drives Agile teams to artificially pad estimates, limit transparency, and hide mistakes. Rather, I recommend setting interim delivery goals spaced throughout the project and having executives and managers hold teams accountable for those deliverables. The team should then be allowed to experiment and adjust their own processes within the sprints to improve velocity and quality without fear of punishment. Obviously, the teams need to deliver the agreed-upon functionality for the interim deliverables and may need to push harder to do so. But if the team is permitted to control the progress of the sprint, they will usually find a way to meet the interim deliverables.

Agile Dogmatism from Above. Agile methods are a fluid and changing set of guidelines stemming from the original manifesto. Agile software development is designed to adapt to the business and technical challenges at hand. In a strange twist of irony, I have encountered managers and executives who take a dogmatic approach to Agile methods and strive to impose strict doctrine around process and procedure. I recommend starting with a well-defined Agile process and then listening to the teams about what’s working and what’s not. Experimentation and adaptation are at the heart of Agile, and the team should be allowed to make adjustments within reason. Depriving the team of this opportunity not only stabs at the heart of the Agile spirit but also strips Agile teams of the power to influence their own destiny.

Agile Dogmatism from Below. Agile practitioners can sometimes be dogmatic as well. One of the more persistent narratives I’ve heard says that Agile is not about committing to or hitting deadlines. My response continues to be, “Any software development methodology that cannot deliver on a predictable timeline does the business no good service.” Another chestnut asserts that Agile is about producing software rather than documentation when, in fact, Agile values working software over documentation but does not preclude the need to produce documentation. As with dogmatism from above, there is a certain irony in dogmatism applied to Agile by its own practitioners.

Conclusions

Developing truly great software has never been easy. Software is among the most complex things produced by humans, and its complexity stems from the vast number of states that a software product can assume. In short, the “soft” in software implies complexity and flexibility. Getting software right is as much a social engagement as a technical one. There are vast tomes and training materials about getting the technology right but woefully little guidance on addressing its social aspects. In this article, I have endeavored to highlight the social aspects of software development and to provide a framework for getting the best from the skillful and well-intentioned professionals who develop it. In the end, as with any social endeavor, we get better results when people are fully engaged in a trusting, collaborative environment. Great software happens where talented people and great software development culture intersect, and that’s what empowerment is all about.

Best,

Charlie

Better, Faster, Cheaper: Picking Three

The old chestnut in software development goes, “Do you want the software better, faster, or cheaper? Pick two!” This dilemma has increasingly plagued technology executives as the pace of technology change accelerates and the business challenges the software delivery organization to “pick three.” Historically, software development organizations have balanced these apparently competing dimensions by assuming a zero-sum game in which one dimension must be traded off against the other two. In this article, I will challenge the conventional wisdom and discuss a risk-based software development management model in which we assess the risks to on-time delivery and mitigate those risks to keep the project on track. In doing so, we use software development capacity more efficiently, ensuring that appropriate time is allocated to each software development activity and that we deliver high-quality software. I call this the “Delivery Risk” model.

The Relationship Between Better, Faster, and Cheaper

As a technology executive responsible for software delivery, there are three primary responsibilities I have to the business that I serve. First, and because I work in the clinical trial software domain where patient safety and data integrity are paramount, I have the responsibility to deliver high-quality and defect-free software. Next, since our software is used to generate top-line revenue, I have a responsibility to the business to deliver our software within predictable timelines so that our production, support, and business development organizations can operationalize the software and plan their promotional and sales activities. Finally, since the software development organization is a cost center, I have the responsibility to exercise disciplined stewardship over the resources entrusted to me so that I don’t adversely affect margins beyond the costs predicted in the budget. In summary, the business holds me accountable to balance “better, faster, cheaper” and to strive to deliver on all three.

The primary reason for the “pick two” nature of the relationship between “better, faster, and cheaper” is that much software is delivered late and can be of questionable quality. When this happens, software development capacity is lost to subsequent late project starts and to capacity that must be spent fixing defects when it could be used to develop valuable new features. These two impacts conspire to raise the cost of software development. Further, as a software development project becomes late, the slippage is frequently mitigated by limiting activities that tend to occur late in the software development cycle, like integration testing. Pinching quality-enhancing activities reduces the overall quality of the software. In short, the key to delivering software more cheaply is to keep projects continuously on track to ensure that all software development activities, especially those that improve quality, are not subject to shortcuts.

In the Delivery Risk model, we seek to identify and monitor the events that conspire to make software projects late and to mitigate their effects early enough that they don’t adversely affect the estimated delivery date. The events to monitor are requirements scope, resource availability impacts, and delivery risks. Three assumptions serve as precursors to implementing the Delivery Risk model. First, we assume that the software teams can provide reliable software estimates; we provide time to work out the design before the team commits to a date. Second, we assume that an aggressive software quality assurance mechanism is in place. In our case, we use automated unit testing covering more than 95% of the code to ensure that defects introduced by new code are detected and corrected quickly. Third, we assume that the product managers appropriately manage project priorities. Once a project is underway, it must not be preempted to service higher priority development efforts; project “context-switching” not only drains development capacity but also adversely affects staff morale. Estimation, automated testing, and project prioritization merit discussions of their own and are beyond the scope of this article.
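
As a small illustration of the second assumption, the coverage expectation can be enforced as an automated gate in the build. The sketch below is a minimal, hypothetical Python example that reads a Cobertura-style coverage.xml report (such as the one coverage.py can emit) and fails the build when line coverage drops below 95%; the report path and threshold are assumptions for illustration, not a prescription of any particular pipeline.

```python
# coverage_gate.py -- hypothetical quality gate: fail the build when line
# coverage drops below the 95% floor described above.
import sys
import xml.etree.ElementTree as ET

THRESHOLD = 0.95  # assumed coverage floor for illustration

def check_coverage(report_path: str = "coverage.xml") -> int:
    # Cobertura-style reports carry overall line coverage as the
    # "line-rate" attribute on the root element.
    root = ET.parse(report_path).getroot()
    line_rate = float(root.get("line-rate", "0"))
    print(f"Line coverage: {line_rate:.1%} (threshold {THRESHOLD:.0%})")
    return 0 if line_rate >= THRESHOLD else 1

if __name__ == "__main__":
    sys.exit(check_coverage())
```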

The Delivery Risk Model

The intent of the Delivery Risk model is threefold. First, it identifies tactical impediments to software delivery that need attention. Second, it identifies trending potential impediments to software delivery that might need attention. Third, it identifies and classifies risks that need monitoring and mitigation. The overarching goal of the model is to identify and track risks while there is still time to react and before their effects become detrimental to the overall delivery of the project. In short, the Delivery Risk model seeks to give early warning if the train is running off the tracks!

The first component is the tactical assessment of risk across all projects slated for the proposed release. In this assessment, we track scope, resource, and delivery status.

[Figure: tactical assessment — scope, resource, and delivery RAG status and estimated delivery date for each project in the release]

In the above sample, each project contains a RAG (red/amber/green) status for each status element as well as the estimated delivery date. Scope status refers to the requirements or use cases that make up the project. Failure to lock down scope is among the highest risks to timely completion. In cases where an external customer provides requirements, as is common with data integration projects, we often track scope more granularly to ensure that the customer has agreed to the requirements and the development teams thoroughly understand them. We are less granular when the subject matter expertise is internal to the company. Resource status refers to the project staff itself. Resource risks can arise from unexpected staff attrition, illness, or paid time off. The delivery status refers to the project burn down as we will see shortly.

The five sample projects listed above are managed as a proposed release that we often refer to as the “program.” The program team meets weekly and uses the representation above to discuss the current status of each project.
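
For readers who prefer something concrete, here is a minimal sketch of how the tactical view might be represented in code. It is written in Python with invented project names, statuses, and dates purely for illustration; it is not the tooling we use.

```python
# tactical_status.py -- illustrative representation of the tactical assessment:
# scope / resource / delivery RAG status plus the estimated delivery date.
from dataclasses import dataclass
from datetime import date

@dataclass
class ProjectStatus:
    name: str
    scope: str       # GREEN / AMBER / RED: requirements locked down and understood?
    resources: str   # GREEN / AMBER / RED: attrition, illness, planned time off?
    delivery: str    # GREEN / AMBER / RED: burn down tracking to the estimate?
    estimated_delivery: date

    def rollup(self) -> str:
        # Roll up to the worst of the three statuses for a quick program-level view.
        order = ["GREEN", "AMBER", "RED"]
        return max((self.scope, self.resources, self.delivery), key=order.index)

# Invented sample data standing in for the projects in the release.
program = [
    ProjectStatus("Data Integration A", "AMBER", "GREEN", "GREEN", date(2016, 3, 31)),
    ProjectStatus("Reporting Module",   "GREEN", "GREEN", "RED",   date(2016, 4, 15)),
]

for project in program:
    print(f"{project.name:20s} rollup={project.rollup():5s} "
          f"estimated delivery={project.estimated_delivery}")
```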

The second component is an assessment of the overall trending of each project comprising the release. For each project, we are especially interested in the deviations from expected burn down of project use cases against estimates. In a typical burn down chart, we compare ideal to actual burn down within an iteration:

[Figure: typical burn down chart — ideal versus actual burn down within an iteration]

In short, we are more interested in the deviations around the ideal burn down than in the burn down itself: we treat the ideal burn down as the zero line and plot the deviation of the actual burn down from it. Further, we want to track and watermark the deviations across iterations to get a holistic view of how we are tracking against the target delivery date:

[Figure: delivery risk chart — burn down deviation by iteration and by role, watermarked to RAG status]

In the above example, each iteration is listed on the x-axis. The y-axis represents the deviation in use case burn down (velocity) from the expected team velocity needed to achieve the estimated delivery date. We translate this into a percentage increase or decrease relative to the expected delivery date and then watermark the percentages to establish the RAG status. Finally, to get a clearer view of the source of the risk, we break out the data by development role, including Design, Development, Testing, and Product Management. We can do this because our SDLC contains process gates that allow us to track use case transitions through the process. There are some artifacts to note above. For example, if an activity has not started, it shows up pegged at a 100% overrun, as in the case of Product Management approval of completed use cases. We know that such approvals are trailing indicators, and we are not typically alarmed by them unless they persist for long periods of time.
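
The underlying calculation is simple enough to sketch. The Python snippet below is an illustration only: the 10% and 25% watermarks and the sample burn down numbers are invented, not our actual values. It computes the deviation of actual burn down from the burn down expected for the estimated date and translates it into a RAG status; note that an activity with no burn down yet pegs at 100%, matching the artifact described above.

```python
# delivery_deviation.py -- illustrative calculation of burn-down deviation per
# iteration and its translation into a RAG status via watermarks.

# Invented watermarks for illustration: within 10% = GREEN, within 25% = AMBER.
AMBER_WATERMARK = 0.10
RED_WATERMARK = 0.25

def deviation(expected_burned: float, actual_burned: float) -> float:
    """Fractional shortfall (+) or surplus (-) against the expected burn down."""
    return (expected_burned - actual_burned) / expected_burned

def rag(dev: float) -> str:
    if dev <= AMBER_WATERMARK:
        return "GREEN"
    if dev <= RED_WATERMARK:
        return "AMBER"
    return "RED"

# Invented sample: (expected, actual) use cases burned per iteration for one role.
iterations = [(10, 10), (10, 9), (10, 7), (10, 5)]

for i, (expected, actual) in enumerate(iterations, start=1):
    dev = deviation(expected, actual)
    print(f"Iteration {i}: deviation {dev:+.0%} -> {rag(dev)}")
```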

The delivery risk chart is probably the single most important tool in the model. It enables the team to see events trending over time. We have some simple rules that we apply to the trends. Green means that no action is needed. Amber means that we need to watch and plan for mitigation. Red means that we need to act on our mitigation plans. Using this tool helps us avoid a “happy path” mentality by showing us how we are trending well in advance of the delivery date. The team constantly evaluates risks to delivery and considers mitigation strategies, which helps us avoid “fire drills” at the end of the project since mitigation is spread throughout the project.

The final component is the enumeration and assessment of potential risks. In our case, we maintain a risk register and review it weekly. When properly managed, the risk register contains those risks that the team needs to overcome to keep the project on track, rather than risks that caused the project to be late. Here is an example:

[Figure: sample risk register — each risk with its probability, impact, and mitigation strategy]

In our example above, we classify each risk for its probability of occurrence and the impact should it occur. We then note the strategy for dealing with the risk should it become real. Again, these are reviewed and adjusted weekly.
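
A risk register need not be elaborate. The sketch below shows, in Python and with invented entries, one way to capture the probability, impact, and mitigation fields described above and to sort the weekly review by a simple exposure score (probability times impact); the scoring scheme is an assumption for illustration.

```python
# risk_register.py -- illustrative risk register with probability/impact
# classification and a simple weekly review listing, highest exposure first.
from dataclasses import dataclass

LEVELS = {"LOW": 1, "MEDIUM": 2, "HIGH": 3}

@dataclass
class Risk:
    description: str
    probability: str   # LOW / MEDIUM / HIGH chance of the risk occurring
    impact: str        # LOW / MEDIUM / HIGH effect on delivery if it occurs
    mitigation: str    # agreed strategy should the risk become real

    def exposure(self) -> int:
        # Invented scoring scheme: probability level times impact level.
        return LEVELS[self.probability] * LEVELS[self.impact]

# Invented sample entries.
register = [
    Risk("Customer sign-off on interface spec slips", "MEDIUM", "HIGH",
         "Escalate to account manager; develop against the draft spec"),
    Risk("Key tester on extended leave mid-project", "LOW", "MEDIUM",
         "Cross-train a second tester on the module"),
]

for risk in sorted(register, key=Risk.exposure, reverse=True):
    print(f"[exposure {risk.exposure()}] {risk.description} -> {risk.mitigation}")
```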

Cautionary Notes

I need to caution the reader about the limits of the Delivery Risk model.

The Delivery Risk model IS:

  • A means to visualize current delivery status and risks.
  • A means to visualize trending status to allow staff to react earlier.
  • A means to ensure that all development activities get appropriate capacity attention, especially testing.
  • A vehicle to identify process improvements as part of a holistic, continuous improvement discipline.

The Delivery Risk model IS NOT:

  • An exact science. It is a combination of art and science intended to identify risks and mitigations early. The model is deliberately organic and depends upon a combination of data, insight, and trust.
  • A team or project performance metric. It should never be used as part of formal goal planning for individuals or teams. Nor should it be used as a mechanism to punish or intimidate teams or individuals.
  • A magic bullet. It can help you limit project overruns but may not help you avoid them altogether.

Credits

Bob Ponticiello and Phil Gower first proposed the Delivery Risk model while working at Princeton Financial Systems. Bob and Phil managed a large, globally distributed software development effort and noticed that software quality seemed lower the later the software was delivered. They took waterfall delivery data spanning many years of software releases and created a risk factor. The “risk factor” was expressed in days expended beyond expected waterfall milestones (requirements, design, code, test) and was cumulative over the life of a project. They graphed the risk factor against the post-release defect count:

[Figure: scatter plot with trend line — cumulative risk factor (days late) versus post-release defect count]

Their data revealed a 0.6 correlation (see the trend line) between risk factor (lateness) and post-release defect count (quality). We tested whether the relationship was causal by deliberately managing delivery risk against key waterfall milestones in subsequent projects. The theory was that if we managed to waterfall milestones, we would ensure sufficient time was allocated to each software development activity, especially thorough software testing. After a very short time, we were able to demonstrate that managing the risk factor not only brought the software in on time but allowed us to deliver higher quality software. I borrowed Bob and Phil’s idea and adapted it to Agile development methods. I feel fortunate to have worked with two such insightful and committed software development leaders.
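
For anyone who wants to run the same analysis on their own release history, the relationship Bob and Phil observed is a straightforward Pearson correlation between cumulative milestone slippage and post-release defect counts. A minimal Python sketch follows; the data points are invented for illustration and are not their actual figures.

```python
# risk_factor_correlation.py -- illustrative Pearson correlation between
# cumulative milestone slippage (days) and post-release defect counts.
# The data points below are invented for illustration only.
from statistics import correlation  # available in Python 3.10+

risk_factor_days = [2, 5, 8, 12, 15, 20, 30, 45]      # days beyond milestones
post_release_defects = [3, 4, 9, 6, 14, 11, 22, 19]   # defects found after release

r = correlation(risk_factor_days, post_release_defects)
print(f"Pearson correlation between lateness and defects: {r:.2f}")
```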

Final Thoughts

The Delivery Risk model is an aid to getting projects completed on time while mitigating the risks to quality. When it comes together, it helps make the most efficient use of a software development organization, which translates directly to cost efficiency. Further, when used as an aid to continuous improvement, it can tap into the teams’ common commitment to delivering great software while promoting team communication and building trust. I truly believe that when open and honest communication takes place within a software development organization, the business can get results better, faster, and cheaper and will never have to settle for just picking two!

Best,

Charlie