Saturday, September 2, 2017

We're getting worse at testing

A common theme many testers (me included) want to talk about is the negative impact of automation. I'm at a point where I've definitely come to terms with the idea that automation is a good thing. I've grown to see that my old fight against it was a recipe for failure, and I work hard, most of the time, against my natural instinct, giving test automation the time and focus it needs to become great. Time spent fighting is time away from improving. And I know that I can help improve it, significantly.

One of the ways this revelation of mine shows is that within the organizer group of European Testing Conference, we have decided not to accept anti-automation talks. We all know automation has limits and negatives. But we want our focus to be on finding ways around the problems: practical solutions, insights, and ways forward.

One of the calls with four proposals included a talk that I felt belonged in the category we wouldn't feel like giving a stage to, yet even the short discussion with Jan-Jaap Cannegieter was inspiring. He introduced me to a book by Nicholas Carr called The Glass Cage and its core message of how automation is making us more stupid, forgetting how to do things without automation.

I work with a team highly divided in our focus on automation. A regular discussion with the person focused on testing through automation is on *how are we testing this*. The discussion as such is not the interesting part; the pattern of how it goes is. The reliance on code to see what it does, the inability to talk on the level of concepts, or even to remember what has been covered on a high level without looking at the code, is evident. The same question asked of those with an exploration approach starts with areas and features, and only last with details that could or could not be documented.

It would seem tempting to say that automation is making us stupid. It would feel tempting to say it reduces our ability to see our testing and to explain our testing conceptually, while adding to our ability to cover our asses by showing the exact detail of what is covered - which I personally find the least relevant.

Jan-Jaap made the point that with the extensive focus on automation, we are forgetting how to talk about coverage and test techniques. Yet just a few days ago I had a fascinating and insightful discussion with someone else submitting a talk on Test-Driven Development, giving insightful examples of how TDD has made them test with several positive tests and cover more ground of the actual solution domain.
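To make that TDD point concrete, here is a minimal sketch of my own (a hypothetical pricing rule, not an example from the submission): each test is written first, the implementation grown to pass it, and the result is several *positive* tests that together cover different parts of the solution domain rather than one happy path.

```python
# Hypothetical TDD-grown example: each assert below was written before
# the code that makes it pass, so the positive tests accumulate and
# between them cover the solution domain's distinct behaviors.

def shipping_cost(weight_kg):
    """Very light parcels ship free; otherwise a flat fee plus a per-kilo rate."""
    if weight_kg <= 0.1:
        return 0.0
    return 5.0 + 2.0 * weight_kg

# First behavior driven out: a light parcel ships free.
assert shipping_cost(0.05) == 0.0
# Second behavior: flat fee plus rate for a normal parcel.
assert shipping_cost(1.0) == 7.0
# Third positive test, covering more of the domain (heavy parcels).
assert shipping_cost(10.0) == 25.0
```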

So have we forgotten what it is to test? Where does the new generation of automation-first testers learn that? Clearly many of them haven't, and they get very easily fooled by the opportunity cost of doing the best thing possible only in the automation context, optimizing for the long term.

Then again, look at the 120 submitters and 200 topics proposed for European Testing Conference. Not a single one on the practical use of a test technique to analyze a problem for coverage. Not a single one teaching how you test. We found some hidden in the talks of process and company experience, but not among the active submissions.
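The kind of talk I mean would teach something like this: a minimal sketch (my own invented rule, not from any submission) of boundary value analysis, one of the classic techniques for analyzing a problem for coverage by partitioning the input domain and testing at the edges of each partition.

```python
# Hypothetical example of boundary value analysis:
# rule under test - "customers aged 65 or over get a senior discount".
# The technique: split the input domain into partitions (under 65,
# 65 and over) and place tests on and around the boundary.

def is_senior(age):
    return age >= 65

assert is_senior(64) is False   # just below the boundary
assert is_senior(65) is True    # exactly on the boundary
assert is_senior(66) is True    # just above the boundary
assert is_senior(30) is False   # representative of the lower partition
```

The coverage argument is the point, not the code: four tests chosen by the technique say more about the rule than forty chosen at random.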

Perhaps the problem isn't automation. Perhaps it's the way we talk of testing - as in, not talking of it. With a few notable exceptions.

Share more of how you test. That is valuable and interesting. It's not automation that makes us worse at testing, it's our choice of letting the automation (and the programming problem of it) take all the focus, stopping our talk around the domain: how do we test.

My hope with automation is in the programmers who no longer need to use their learning power on the details of scripting, solving (through learning) the higher-level problems of the domain. But every day, I feel less inclined to believe in the tester-turned-automators. They need to amp up their learning in a balanced way to restore my faith.


  1. Jez Humble gave some compelling case studies of the value of test automation in his #Agile2017 keynote. One was from a book about HP FutureSmart - after successfully implementing CD, the teams spent 23% of their time maintaining automated tests - that sounds like a lot. But their time for *innovation* went from 1.5% to 40%. That's a pretty good thumbs up for test automation.

    The teams I've been on where we succeeded with test automation - meaning we had a good return on investment, our automated tests protected us from regressions in prod, and we had lots of time left for testing activities such as exploring - were teams where testers and programmers collaborated on automated tests. Usually that was through a process of ATDD/BDD.

    I'm on a team now that has like 60k automated tests. Sounds great! But they were all written by programmer pairs - no input whatsoever from testers. We have a lot of serious regression failures in master and sometimes in prod. We are trying to change that now by having testers pair with programmers, and by having programmers learn and practice exploratory testing skills. It's getting better, but without ATDD/BDD, I am not sure if we'll get where we want to be. At least we are acknowledging the pain, and trying experiments.

    1. I'm wondering, Lisa - whether your last example of 60K auto tests comes from TDD or ATDD/BDD, as from the amount of them it sounds like a huge number of "atomic" test cases rather than regression-style "E2E" test cases.
      I'm not sure TDD auto test cases are really useful when it comes to regression.

  2. Hi,
    Nice article, but you missed some points. The biggest problem with testing and test automation is not at the level of testers and programmers. The problem lies at the management level of almost all software houses. It doesn't matter if it's a small company or a corporation: almost always, when a manager has a team dedicated to solving some issue or building a new feature, they tend to reduce its cost by reducing the time needed to do it. That cuts the time for testing, because testing is only supposed to prove that the solution is working. That's why all managers and teams love test automation: it was designed for, and solves, this particular problem. Once the team has a bunch of automated tests, they spend more time maintaining and fixing them and simply don't have time to create new tests and do exploratory testing.

    The problem comes to light when an internal product becomes a product for an external customer. Please keep in mind that most products and teams are just building software for somebody else, not as a SaaS or something similar where a release can be postponed. I also know that everything can be solved somehow, but saying that the problem is with automation and testers who don't do exploratory testing is a huge simplification. Its root cause is somewhere else. We can talk about educating management on how important application security is, finding problems with it, etc., but cold calculation always wins. My idea to solve it is to educate management as much as possible, but in the meantime prepare more mature automation solutions that reduce the cost of maintaining automated tests, so testers will have time for exploratory testing.

    1. Thanks for your addition. This is not the experience I have in my current company.

    2. So you are lucky ;) Unfortunately, what I have described is the reality in a lot of companies (it's not only my own experience but also my friends'). Another common problem is that a lot of managers think that bringing an Agile methodology on board will solve their problems, just like automated tests are supposed to.

    3. I know, and I've had that experience too. I just feel I no longer try to solve the problems of the whole world, only the problems I have. And my problem is what I describe: we are getting worse at testing.