Monday, November 26, 2012

When Bugs Have Positive Business Value

There's a theory I've been developing in my current project: for the customer's overall experience of value, it may actually be better not to find some bugs ourselves but to let customers find them for us.

As anyone reading my posts should know, I'm a tester. I believe in finding and fixing relevant problems before release. I believe we shouldn't steal users' time with annoyances and broken features; when people buy software to do something with it, they should actually get to do what they expect.

My current project is making me question my beliefs.

First of all, if we don't put so much effort into testing, we are able to produce more in quantity. The quality we produce isn't great; it doesn't live up to my expectations, and perhaps not to other people's expectations either. But it isn't that bad. You can usually do things with the software. It doesn't stop you right at the doorstep without letting you in. Some scenarios fail, but often there are alternative routes to the same goal that don't.

Second, when customers find bugs and report them to us, we are quick to fix them. This is the core of my theory: these little shared successes carry a lot of value for the customer relationship. The customer finds an issue annoying enough that they call us. We take it seriously and fix it quickly. The customer, comparing this experience with others where the issue is logged and may arrive in some hotfix delivered six months later, is really happy with the fix delivery. As an end result, the customer relationship is stronger, and the follow-up call telling them the fix is available may even lead to expanding use of the product into another project or feature area - sales happening.

So far I've realized this approach is vulnerable, and it's really still only a play in my mind:
  • If we get too many bugs to fix in a short timeframe, we wouldn't be able to keep up with the quick fix deliveries - but our quality, limited as it may be, hasn't sunk to that level yet.
  • If the customer profile changes so that they end up contacting us about different issues on the same days, that would also ruin our ability to react quickly.
  • If the software delivery mechanism changes so that the servers are no longer quick and easy to update, that again would destroy it.
  • If development team members change, it will eat away at the quickness of fixing, as more analysis and learning are needed to make a successful fix.
I'm thinking right now that the work I do as the team's tester might actually decrease the value for the customer. While features may work better, they may work better in ways the users did not find relevant. At the very least, the testing I (and the team) do means we deliver fewer features with the same amount of effort.

The bigger value in quality is about the work the team must do. It's not much fun to fix issues that come in later, having forgotten the details of the implementation by then. It's not fun that you can't make (and stick to) a plan for more than half a week, because you always need to be ready to implement the quick fixes. Bug-fixing time is time away from implementing new features.

Quite an equation we have here. After this quick note, I need to actually spend time breaking it down into something I can investigate for this particular context.
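To make the "equation" a bit more concrete, here is a toy sketch of the trade-off. Every number and weight in it is an illustrative assumption I've made up for the sake of the sketch, not a measurement from the project; the point is only to show how the quick-fix relationship bonus could, in principle, offset the annoyance cost of escaped bugs.

```python
def customer_value(testing_effort, total_effort=100):
    """Rough value score for one release cycle.

    testing_effort: share of effort (0..1) spent on testing
    instead of building features. All weights are made up.
    """
    # More testing -> fewer features shipped with the same effort.
    features = total_effort * (1 - testing_effort)
    # More testing -> fewer bugs escaping to customers.
    escaped_bugs = 10 * (1 - testing_effort)
    # Each escaped bug annoys the customer...
    annoyance_cost = 2.0 * escaped_bugs
    # ...but the post's core claim: a quickly delivered fix
    # strengthens the relationship, partly offsetting the annoyance.
    quick_fix_bonus = 1.5 * escaped_bugs
    return features - annoyance_cost + quick_fix_bonus

# With these made-up weights, less testing scores higher, because
# the relationship bonus nearly cancels the annoyance cost:
for effort in (0.0, 0.3, 0.6):
    print(f"testing effort {effort:.0%}: value {customer_value(effort):.1f}")
```

The interesting part is not the numbers but the structure: the theory holds only as long as the quick-fix bonus stays close to the annoyance cost, which is exactly what the vulnerabilities listed above would break.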


  1. Hi,

thank you for the interesting article. I think the main factor in the equation you mention is how quickly you release new versions and bug fixes for your product. The more often you release, the easier it is to find the cause of a bug. Moreover, bugs that are only there for a short time don't affect your users too much. On the other hand, if your release cycles were several months long, it would be more difficult to find the causes of bugs, and your users would be affected by the broken functionality for longer.

I recently wrote an article about the costs of a bug. In it, I point out what the costs of having and fixing a bug are, and how these costs increase the longer a bug stays in your software. If you read that article with your (enviably) small cycle time in mind, it should shed some light on why it doesn't seem to affect you too badly when your users find some of your bugs.

    Best regards and thanks again,
    Co-founder & lead developer

Michael, thanks for your comment. In this particular case, it is really not about how long the bug is in the code. Even with our monthly release cycle, it usually takes longer than that for the problems customers complain about to surface. So in a typical scenario the bug has been there for months before this complaint-and-quick-fix cycle happens. The key to our environment, as far as I see it, seems to be that we're lucky to have issues that are usually quick to fix. That would not have been true of some other projects I've been involved in, or even of the sister project I'm also working on at this company.

    1. Hi, thank you for your prompt reply! It seems I was wrong then. Why is it that the issues are usually quick to fix? How old, if I may ask, is the project?

This product has been in production for a little less than 2 years, and was developed for an additional 2 years before that. And it still keeps changing on a monthly cycle. It seems the bugs are quick to fix because the developers haven't changed, and because of the type/nature of the issues - relevant for users, but easy to fix. I think my timing sample has a lot to do with this - I've worked with the product since April, and the "tough ones" may have been dealt with ~2 years ago.

I have two datapoints from the last month where this one-day-to-fix did not hold. One was because of a responsibility-avoidance game, and the other because a newly introduced remote developer did not have the benefit the old developers have of having experienced and learned over the entire product history.

I see this as a downside of having testing take place within a silo that's too far removed from the actual customers and end-users. The more you know about how real users are actually using the product, the more value your testing will add.

I disagree with your interpretation. The "customer" is internal and we meet daily. The end-users are many, and thus while I feel we know some of them, we don't know all of them. And while I can go and see exactly what pages they're loading, I can't see their intent or their feelings. I suspect most people complain only when we cross some threshold of annoyance in our user groups.

I see this as specific to what this product and team have: a small team delivering a web application as a service that resides on a server next to us.

  4. While yes it is a normal part of the development cycle, being entirely opaque about bug fixes should not be. My policy is to avoid it at all costs until I am forced to update either by management or by the end of life cycle until our business can implement a proper accounting software.

    You will need to prepare a sales agreement in order to sell your business officially.

I'm not sure I get this comment. The business is not about to be sold. This post is about the value of not fixing bugs before end-users see them. What do you mean by "opaque about bug fixes"? Did I imply opaqueness, and why is opaqueness something to avoid at all costs?