Thursday, April 19, 2018

Where Are Your Bug Reports?

Yesterday, I put together a group of nine testers to do some mob testing. We had a great time, came out with a shared understanding, had people's weak ideas amplified, and stood together in knowing how we feel about the functionality, quality, and value for users.

This morning, I had a project management ping: "There are no bug reports in Jira, did you test yesterday?"

Later today, another manager reminded me over email: "Please, report any identified bugs in Jira or give a list to N.N. It's very hard to react/improve on a 'loads of bugs' statement without concrete examples." They then went on to inform me that when the other testers ran a testing dojo, they did it on a Jira ticket and reported everything as subtasks, and hinted I might be worried about duplicates.

I can't help but smile at the ideas this brings out. I'm not worried about duplicates. I'm worried about noise in the numbers. I have three important messages, and writing 300 bug reports to make that point is a lot of work and useless noise. This is not a service I provide.

Instead, I offered to work temporarily as a system tester for the system in question, with two conditions:
  1. I will not write a single bug report in Jira, but will get the issues fixed in collaboration with developers across the teams.
  2. Every single project member pair tests with me for an hour, with focus on their changes in the system context.
The jury is still out on my conditions. I could help, but I can't help within a system that creates so much waste. I need a system that improves the impact of the feedback I have to give through deep exploratory testing, focused on value.

I'd rather be anything but a mindless drone logging issues in Jira. How about you?


  1. I recall a statement along the lines of "Individuals and interactions over processes and tools" being indicative of where value is truly derived. I think leaning in to help is always the right thing to do.

  2. I prefer your approach. Pair or mob with developers and fix bugs as we go. Or just go show a developer or a pair an issue I just found. In all cases, the devs will write a test to repro the problem and then the code to make the test pass - that should be plenty of documentation.

    But my team is not always comfortable with this approach. My last team liked working this way for problems found during development, but they wanted production bugs logged in a defect tracking system. Some production bugs were very complicated, and they wanted to retain the information about how they debugged and fixed them.

    My current team's product is an online project tracking tool, so we're pretty much stuck with having to write bug stories, but we still discuss and work on them face to face!

  3. I like your approach and have experienced it. I have got positive feedback from developers and testers, and definitely less 'paper' work.
    But from the other side, when management asks me to show what I've done in a measurable way (and without a deep exploration of testing's value), it is difficult to show results with measurable metrics. In our case, pair testing is not the only activity; during the development phase we also have other activities that bring value and improve product quality. How do I show that pair testing, working side by side with devs, gives measurable value to testing (what metrics could be used?)? For now I do not have a clear answer to this question myself.

    1. I've had some success encouraging developers to share their appreciation. You can count those!