Wednesday, May 20, 2009

Testing in Definition of Done

When we were getting started with Agile methods, a lot of energy went into working out the definition of done. We followed the debates on whether it is something the team decides or something the product owner decides, and had our own share of discussions.

At first, it was not easy to even include testing in the definition of done. At least not all the kinds of testing that were actually needed. Eventually that passed, and the lesson was learned: if it is not tested (and ready to be published), it is not actually done. The value is not available from concept to cash, as lean thinking puts it.

I still feel the definition of done, especially for the testing part, is quite a complex exercise. Testing is an endless task. At some point, however, it stops providing value, and should be deliberately stopped.

This is a typical approach in "traditional testing" with a risk-based test management focus. So what I tried to introduce was a practice of "risk-based test management for the definition of done". Essentially this is a practice of discussing what "testing" in the definition of done should mean for each product backlog item, based on an understanding of the acceptable level of risk for that item.

"Testing" in the definition of done is not just one. Some changes can be quite safely tested mostly on unit level. Some changes can quite safely be tested with automation. Some changes need extensive exploratory testing.

Similarly "acceptable risk" is not the same for all product backlog items. Some items end up being very visible and commonly used features. Some items are for fewer users, but perhaps more important as customers. Some items are tick box features for sales purposes. You would look at acceptable risk very differently on each of these. Risk-avoidance through added testing adds costs. While velocity may remain similar (when the sizes are visible in the product backlog items), the value experience by users for the same velocity would not be.

2 comments:

  1. In addition, what I have experienced in Scrum is that people come to think that testers do not study. In Scrum the tester is like a BA, knowing the requirements and identifying scenarios, yet the SDLC is still important. This brings back the question of why you test: as a developer you cannot break your own code, and you cannot perform testing like a tester who has the experience of testing. All in all, if you do not need a tester, you do not need a business analyst or even a QA team. Why not have developers only? Then you will see your support item list growing. All in all, you need an independent tester.

  2. I sincerely dislike the idea of building a wall between developers and testers by saying that developers can't break their own code, since I've witnessed the opposite on so many occasions.

    I'm an "independent" tester (except that I dislike the term) and use a lot of time to analyze how I test, how I think and feel when I test and how people react to whatever I do that I call test-related. I still refuse to take it as fact that developers can't find problems IF willing and able. And I strongly believe that I personally can bring a whole lot of value to any time I will be joining as "just a tester".

    I test to help the team find relevant information at the times when that information is most valuable - on time, with the ability to focus on just the information-providing role.
