Friday, November 9, 2012

Thinking through what sticks with me from Rapid Testing Intensive

At the end of October I participated - as a regular student of testing - in a training course that I really didn't know what to expect from: Rapid Testing Intensive by James Bach in Helsinki. The course outline said we'd be testing something for two days and learning from this mini-project. Someone tweeting about it gave me the impression this would be RST with just one piece of software, seeing how it works in real projects, so I wasn't really sure what that meant. It could mean we'd focus on the bunch of heuristics and try those out in practice, or that we'd focus on doing the testing and understanding the flow of it. I feel the latter was closer to what happened.

To sum up what I remember we were taught:
  • A recipe of testing actions to cook from: intake, survey, analysis, setup, deep coverage, closure - where any of these action types could be the focus of a session, a session could mix two of them, and the actions could take place in any order rather than assuming there is a fixed one.
    Over the two days we tried out different types of activities: asking about the project, finding out what the product could be about by testing it, using help files in testing, making a test strategy to focus our testing on risks and specific testing activities we could do, working with generated data, and reporting our status and results.
  • Talking about testing: the difference between a professional test report (focused for the audience) and a comprehensive one, and in particular the three levels of reporting on testing: the status of the product and the bugs it has, what you did and didn't test, and why you chose that testing and how you know whether it was good.
By the end of the course, I was more inside my head, thinking about how to structure the mess better, than engaged in the activities someone viewing from the outside might imagine. I stopped to think about how I think and feel, and how my choices and somebody else's choices would differ. I realized that I personally dislike the concept of "emergency testing" of something I have no previous experience with. When time is far too short for doing anything professional, I have the tendency to focus on playing for time - just finding something that would delay the release. And when I feel there's nothing in this particular context that would buy time, I notice I realize what I should have done too late - when we're already out of time.

We tested XMind, a released version. While the course is half make-believe of an emergency testing situation at the end of a project, I couldn't help thinking that this is what they have already released. Would any of the bugs we currently have in production actually have made a difference to the timing - perhaps not. And if not, what's the rush? Remembering which parts of the context were imaginary for course purposes and which parts would actually be happening with that particular product and its release decision confused me a little.

Since I did not want to miss out on taking notes of what was said and what we were told, I spent a lot more of our testing sessions wandering somewhere else in my thoughts than actually testing the product as if it were a real project. That was my choice: to take the time for learning and digesting. I probably went looking for somewhat different things than the others; my main point of curiosity was not how I would test better, but how others teach this.

A great course, all in all. So much like my own exploratory testing work course, except that personal preferences make us focus on quite different aspects. It was like comparing coaching styles with a true expert - without saying that out loud. The only thing I regret is not making a point of being there with my team's developers - they would have learned so much.