Wednesday, May 20, 2009

At the time we were getting started with Agile methods, a lot of energy went into working out the definition of done. We followed the debates on whether it is something the team decides or something the product owner decides, and had our share of discussions.
At first, it was not easy to even get testing included in the definition of done - at least not all the kinds of testing that were actually needed. Eventually that passed, and the lesson was learned: if it is not tested (and ready to be published), it is not actually done. The value is not available from concept to cash, as lean thinking puts it.
I still feel the definition of done, especially the testing part, is quite a complex exercise. Testing is an endless task; at some point, however, it stops providing value and should be deliberately stopped.
This is a typical approach in "traditional testing" with a risk-based test management focus. So what I tried to introduce is a practice of "risk-based test management for the definition of done". Essentially, this is a practice of discussing what "testing" in the definition of done should mean for each product backlog item, through understanding the acceptable level of risk for that item.
"Testing" in the definition of done is not just one. Some changes can be quite safely tested mostly on unit level. Some changes can quite safely be tested with automation. Some changes need extensive exploratory testing.
Similarly "acceptable risk" is not the same for all product backlog items. Some items end up being very visible and commonly used features. Some items are for fewer users, but perhaps more important as customers. Some items are tick box features for sales purposes. You would look at acceptable risk very differently on each of these. Risk-avoidance through added testing adds costs. While velocity may remain similar (when the sizes are visible in the product backlog items), the value experience by users for the same velocity would not be.
Friday, May 8, 2009
Role of a tester in a Scrum environment
I just read an email sent to the scrumdevelopment Yahoo Groups list. The email mentioned a small company that had given notice of redundancy to - apparently - all of its testers. The developers had been told to do the testing, and the testers had some days to justify why they should be kept. An interesting dilemma.
Some weeks back, there was a session on agile testing with James Lyndsay, arranged with the Finnish Association of Software Testing. There were some 15 people there, and at the end of the paper-plane-building session we identified key learning points. What struck me specifically was the learning point that got the lowest score - the one we agreed on least, based on the voting: "You don't need testers, you just need testing". That rings a bell with the notice of redundancy. Yet the people around - testers specifically - did not agree with it.
I too am a tester, and I was the one who wrote that particular learning point, since I felt it was one of the really key things I had learned. Yet as a tester who works - or at least used to work - in a Scrum environment, I quite strongly feel testers are useful.
I believe this is related to a theme I just blogged about in Finnish. There are huge differences between experienced testers. There are people with five years of experience, and people with one year of experience five times over. Testers who actively learn while testing, and about testing, tend to be far ahead in useful experience of those who have learned their testing by following test scripts - perhaps ones they created themselves - and checklists that keep them disciplined because they cannot find the motivation to be disciplined from the importance of the results they could be providing. An experienced tester who is merely an experienced machine part can be replaced with automation, or with someone who does the work for cheaper. That could be the developers - just to save the cost of teaching the same things to yet another person who would not provide value for the invested time - or someone from a lower-cost country.
I believe that you don't need testers in a Scrum environment, but you do need testing. It is not straightforward for the team to include testing as it should be included if there is no specialist in the topic. Then again, having someone called a tester - or even someone who, as a tester, has compared expected results to observed ones - does not mean that person can actually help bring the right kind of understanding into the team.
In some cases removing testers makes things better for the team, in my experience. It helps the other team members take responsibility for quality, it makes the team start the automation they have postponed for too long, and it makes them stop building fences around their own components and work together with the other developers. While it may make them seem slow at first, they may recover fast and become better.
In other cases removing testers makes things worse - when the team left to do the work is neither willing nor capable of doing the testing. Things get declared done too soon, and more problems may slip further down the chain.
I find that the potential value of testers in Scrum comes from their ability to think and act like testers - providing information that was not yet known, on time, and in a way that saves time overall.
Being called a tester does not make one a useful tester.