Saturday, September 20, 2014

Why would I do regression testing?

There was an evening event, and late in the evening we were talking about testing. As usual in agile circles, test automation crept in to take the stage. This time, though, it was formulated perfectly for me to learn something essential.

A developer colleague took a sympathetic approach and told me he would feel bad if he didn't create automation; in particular, he feels he needs to work on automation so that I can focus on testing the new stuff instead of repeating the same tests.

I immediately replied from my heart and experience. I have worked on my product for over two years and logged some thousands of issues, but so far I have not repeated a single test. And that's even though the automation in my team wasn't created by these adorable, sympathetic agilist developers who care for my well-being.

I realized that reply includes a core of how I deal with testing work. I'm an active player in varying my approach. I might press the same buttons, but I have different ideas racing through my mind. I don't use the same browser, the same user, the same data, the same story, the same combination. And I find problems the test automation will never find.

Don't get me wrong, I love having test automation around. I just don't expect it to do any testing for me. Instead, I expect it to make my testing work less interrupted by the plethora of simple problems automation can catch. Whenever I find a problem, it stops the testing I was doing. But more than automation, I love having actively thinking developers around who use automation as their safety net but don't rely on it alone. Thinking is the core of it all, and sometimes some of that thinking gets packaged into automation.

Regression testing isn't what I do - at all. I'm not sure to what extent we should even talk about doing regression testing. I test, and regression is one of the risks I consider to motivate what I do. But there's no specific repeat-the-same-tests approach that I would call regression testing.

I've been teaching that tests have 'best before' dates, and short ones, due to the changes that might create the need to go back. It's really test results that expire, and the results are not the tests but the information about the system. I seek the same information again, but while at it, I also seek new information based on the learning that other tests (and discussions, readings, etc.) have given me. That's what a tester needs to set their mind to achieve.


  1. This is an interesting approach. My question is: how do you satisfy yourself that your previously found bugs do not recur at a later date?
    Is it by recreating the same system response with perhaps a different test 'stimulus', or would that be something you would encourage your development team to protect against by writing unit tests covering that potential problem? Or perhaps something completely different.

    Thanks for the thought provoking post :-)

    1. To vary my approach but still remember, I use bug reports to remind me of things that haven't worked in the past. I write the online help for the product, which I use to document how things work, and that is also a useful reminder when testing the same areas with my mind set on a different focus. I have a high-level checklist of test areas that I can use to quickly generate ideas of what to cover. And developers add some automation with unit tests and Selenium tests - very limited though in my case.

      I see testing as an active consideration of opportunity cost. What I choose to do means I leave something else undone. I do the best work I can with the time available, and I define the time available as a good investment that provides useful information. I want to find as many relevant problems as I can in the limited time. Repeating the same tests (regression tests) isn't the best use of it when I can simultaneously watch for regression problems and look at the software from a different angle.

      I don't work in an environment with high-risk implications. We can fix things in production quite easily. I only need to satisfy myself that my choices about how we spend our testing time improve based on what I learn.
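
      The unit-test side of this exchange can be sketched roughly as follows. Everything here is invented for illustration - the `parse_quantity` function, the empty-input bug, and the issue number are hypothetical, not from the product discussed above - but it shows the idea of "pinning" a previously found bug with a small automated check so it cannot silently recur:

```python
# A minimal sketch of a regression unit test that pins a previously
# fixed bug. The function, the bug, and the issue number are invented
# examples, not taken from any real product.

def parse_quantity(text):
    """Parse a user-entered quantity string into an integer.

    The (hypothetical) original bug: empty or whitespace-only input
    crashed with ValueError because int("") was called directly.
    """
    text = text.strip()
    if not text:
        return 0  # the fix: treat empty input as zero instead of crashing
    return int(text)


def test_issue_1234_empty_quantity_does_not_crash():
    # Named after the issue it guards, so a future failure points
    # straight back to the original bug report.
    assert parse_quantity("") == 0
    assert parse_quantity("   ") == 0
    assert parse_quantity("42") == 42


test_issue_1234_empty_quantity_does_not_crash()
```

      A check like this catches the recurrence automatically, which frees the tester to vary the stimulus around it rather than re-run the same scenario by hand.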