Want Predictability? Avoid Quality Debt.

What is Quality Debt?
A quick search of the internet for “software quality debt” yields a handful of results with differing ideas of what “Quality Debt” is. While “Technical Debt” is a well-understood term in the agile community, there is no clear consensus on what “Quality Debt” means.

Let’s use this definition:

Quality Debt is a measure of the effort needed to fix the defects existent in a software product at any given point in time.

The Wikipedia definition of “software bug” is a fine starting point for what we mean by a software defect:

… an error, flaw, failure, or fault in a computer program or system that produces an incorrect or unexpected result, or causes it to behave in unintended ways.

However, defects are more than bugs. Defects also include missing functionality. An ATM that doesn’t give me the option of printing a receipt for my transaction is exhibiting a defect (but not a “bug” by the above definition). Performance issues and UX inconsistencies also contribute to quality debt. Quality debt is customer facing: anything the customer values that is broken, missing, hard to use, or too slow is a defect and adds to our Quality Debt.

Not the Same as Technical Debt
While you may think of quality debt as an aspect of technical debt, it is more helpful to think of them separately. Both are bad, and both become more expensive over time, but they are detected differently. More importantly, technical debt does not directly impact users the way quality debt can.
Technical debt is a measure of the quality of the design and the code, which is the internal quality of the software. Quality debt is a measure of the external quality of the code, the things that the user sees and experiences. A user never (directly) sees technical debt.
A program could be completely quality debt free and have a huge technical debt. It could correctly implement all the required and expected functionality and run flawlessly, yet its technical debt could be enormous, exhibiting every poor software design and implementation practice you can imagine. On the other hand, the best designed, most sublimely elegant code could still produce wrong results or be missing functionality.

How do we Incur Quality Debt?
Let’s take as axiomatic that whenever we develop software (or do just about anything!) we will introduce defects in our work. Some are glaring, easily detected and fixed, but others may lurk in the shadows, waiting for us to ferret them out.
Those defects, sitting undetected in our code, are a major source of quality debt. Add to that detected defects that we know are there and we have prioritized for future resolution. Depending on our testing practices, the known quality debt may only be the tip of a proverbial iceberg of yet undiscovered bugs and problems.

[Figure: iceberg diagram] Known quality debt is only the tip of the iceberg.

We incur quality debt when we find defects and allow them to persist, or worse, when we put off testing and continue development without understanding how much debt we are taking on.

When Quality Debt is Ignored
Quality Debt is much like financial debt: the older it gets, the harder it is to pay down. In the worst case, a project puts off testing until development is done. It is well established that the longer a defect ages, the harder it is to fix. If many defects persist (either known or unknown), the effect is exacerbated as the defects mask each other and fixes collide in the same code.
Not testing until development is done is like using your credit card every month and not bothering to open the bill until the end of the year. You really have no idea how bad the problem is and how long it will take you to clean up the mess.

Unfortunately, some teams begin the shift to agile but retain this waterfall approach to testing, which allows Quality Debt to mount up, undetected and unknown. Teams that take this approach find themselves with long, unpredictable “testing tails” on their projects. They may have budgeted for a test/fix cycle or two, but those estimates almost always turn out to be optimistic.
Finally, if we are incurring technical debt along with our quality debt we end up trying to patch up brittle, poorly structured, and hard to understand code.

Measuring Quality Debt
BigVisible has developed extensive sensing, measurement, and reporting tools and processes for quantifying and managing Technical Debt. These tools and practices are used in a range of scenarios, from rectifying impaired code assets over time to helping assign intellectual property valuations during due diligence.

To support similar aims, BigVisible is developing techniques and metrics for measuring Quality Debt. Many variables affect how much of the Quality Debt “iceberg” sits out of view relative to the portion we can see. Several factors influence the rate at which defects are injected:

  • Team members’ experience and expertise
  • Team stability
  • Technical debt that is allowed to persist
  • Domain complexity
  • Novelty of the work, and so on.

The proportion of discovered to undiscovered defects is influenced by:

  • The length of time the code has been in production
  • The size of the code base
  • The technical practices of the developers and QA personnel (e.g., pair programming, test-first programming, rigorous use of BDD, automated governance, frequent check-ins)
  • The extent of automated test coverage and frequency of test execution
  • The maturity of the QA strategy as demonstrated by the number and type of automated tests maintained (unit, functional, component, system, performance and scale, etc.)
  • The robustness and frequency of execution of a Continuous Delivery pipeline

Living Quality Debt-Free
The path to living quality debt-free is clear and based on proven Agile practices:

- Definition of Done. We must agree that a story isn’t done until we know it is implemented correctly. The only way to know that for sure is by testing. Test-Driven Development is an excellent discipline that supports this need.
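As a minimal, hypothetical sketch of the test-first rhythm in Python (the `format_receipt` function and the receipt format are illustrative assumptions, not from the original): write the test before the code exists, watch it fail, then implement just enough to make it pass.

```python
# Hypothetical TDD example: the test is written first, before format_receipt
# exists, and it fails. Only then do we write the implementation below.

def test_receipt_includes_amount_and_balance():
    receipt = format_receipt(amount=40.00, balance=260.00)
    assert "Withdrawal: $40.00" in receipt
    assert "Balance: $260.00" in receipt

# Minimal implementation, added only after watching the test fail.
def format_receipt(amount, balance):
    return f"Withdrawal: ${amount:.2f}\nBalance: ${balance:.2f}"
```

A story that follows this discipline carries its own proof of correctness: the passing test is the evidence that “done” actually means done.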

- BDD / Automated Acceptance Testing. To avoid quality debt we must ensure that the implementation meets the fitness for use that the business and customers define in acceptance criteria. This also provides regression-testing scaffolding to ensure that previously implemented functionality is not impaired.

- Continuous Integration. Developers should be checking in small code changes frequently, at least daily. The code base should be built and smoke tested at least daily. If the build breaks, no new work goes in until the build is fixed. A Continuous Delivery capability (not necessarily releasing to production) exercises the whole stack of automated tests, configuration and deployment scripts, etc., to ensure every stage in the code lifecycle is in a known state and under control.

- Automated Testing. As part of a nightly (or more frequent) build cycle, run a regression suite of unit and acceptance tests. Make sure that today’s changes work and didn’t break anything else.
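A nightly regression run can be as simple as a small driver that collects the suite and reports pass/fail to the build server. This is a hypothetical sketch using Python’s standard `unittest` module (the test case shown is a stand-in for a real suite):

```python
# Hypothetical nightly regression driver. A CI job would call run_regression()
# and fail the build when it returns False. The SmokeTests case is a
# placeholder for a real project's unit and acceptance tests.
import unittest

class SmokeTests(unittest.TestCase):
    def test_arithmetic_still_works(self):
        self.assertEqual(1 + 1, 2)

def run_regression():
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTests)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()
```

In a real project the loader would discover tests across the code base (e.g. `unittest.defaultTestLoader.discover("tests")`), and the nightly job would archive the results so the team always knows the current quality of the product.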

- Don’t tolerate “broken windows”. Deferring a bug fix instills the mindset that debt is OK.

By ensuring that we always know the quality of our product and don’t allow known defects to persist, we can avoid the trap of quality debt and the long testing tail that can destroy release predictability.

Want additional actionable tidbits that can help you improve your agile practices? Sign up for our weekly ‘Agile Eats’ email, with “bite-sized” tips and techniques from our coaches…they’re too good not to share.