Thursday, November 21, 2019

Tools Guide Thinking

I have spent a significant chunk of my day today thinking about how exploratory testing is the approach that ties together all the testing activities, even when test automation plays a significant role. Discussing this with my team, from every developer to the test automation specialists to the no-code-for-me tester, finding a common chord isn't hard. But explaining it to people whose experiences come from a different platform isn't always easy. Blogging is a great way of rehearsing that explanation.

I frame my thinking today around an idea I picked up, again, from Cem Kaner's presentation Exploratory Testing After 23 Years, presented 12 years ago.
"Tools guide thinking" - Cem Kaner
Back then, Cem discussed tools that would support exploratory thinking, giving examples like mind maps and atlas.ti. But looking back at that insight today, the tools that guide rich, multi-dimensional thinking can be the tools we think of as test automation.

We have a tool we refer to as TA, short-hand for Test Automation. It is more than a set of scripts doing testing, but it is also a set of scripts doing testing. To briefly describe the parts:

  • machinery around spawning virtual environments 
  • job orchestration and remote control of the virtual machines
  • test runners and their extensions for versatile logging
  • layers of scripts to run on the virtual environments
  • execution status, both snapshot and event-based timelining
Having a tool like this guides thinking. 
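To make that shape concrete, here is a minimal sketch of those layers in Python. It is illustrative only: `vm-tool` is a stand-in for whatever actually spawns and drives the virtual environments, and none of these names come from our real TA.

```python
import datetime
import subprocess

# A minimal sketch of the layers above. All names are assumptions:
# "vm-tool" stands in for whatever spawns and drives the virtual
# environments; this is not our actual TA.

EVENT_LOG = []          # event-based timeline of everything that ran
LATEST_SNAPSHOT = {}    # most recent status per (environment, test)

def record(environment, test, status):
    """Log to the timeline and refresh the snapshot view."""
    event = {
        "time": datetime.datetime.now().isoformat(),
        "environment": environment,
        "test": test,
        "status": status,
    }
    EVENT_LOG.append(event)
    LATEST_SNAPSHOT[(environment, test)] = status

def spawn_vm(image):
    """Stand-in for the machinery that spawns a virtual environment."""
    subprocess.run(["vm-tool", "create", image], check=True)  # assumed CLI
    return image

def run_suite(environment, tests):
    """Stand-in for orchestration: run scripts remotely, log results."""
    for test in tests:
        result = subprocess.run(["vm-tool", "exec", environment, test])
        record(environment, test, "pass" if result.returncode == 0 else "fail")
```

The point is not the code but the layering: a timeline we can explore afterwards, and a snapshot we can glance at.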

When we have a testing question we can't answer from our existing visualizations, we can go back to event telemetry (both from the product and from TA) and explore the answers without executing new tests.
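As a hypothetical example of that kind of question, assuming the telemetry is a JSON-lines event log (the file name and fields below are invented):

```python
import json

# Hypothetical example: answering "which environments failed the install
# step?" purely from already-collected telemetry, without new test runs.

def failures_for(step, telemetry_path="ta-events.jsonl"):
    environments = set()
    with open(telemetry_path) as events:
        for line in events:
            event = json.loads(line)
            if event["test"] == step and event["status"] == "fail":
                environments.add(event["environment"])
    return environments

print(failures_for("install"))
```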

When we want to see that something still works, we can check the status from the most recent snapshot, automatically made available.

When we want to explore on top of what the scripts checked, we can monitor a script in real time in the orchestration tooling, seeing what it logs, or remote to the virtual machine it is running on and watch. Or we can stop it from running and do whatever attended testing we need.

We can explore a risky change, seeing what the TA catches, and move either back or forward based on what we are learning.

We can explore a wide selection of virtual environments simultaneously, running TA on a combination we just designed. 
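A toy version of designing such a combination might look like this; the images and locales are invented, and spawn_vm and run_suite are the hypothetical helpers from the sketch above:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# Toy combination, invented for illustration; spawn_vm and run_suite
# are the hypothetical helpers from the earlier sketch.

OS_IMAGES = ["win10", "win11"]   # assumed image names
LOCALES = ["en-US", "fi-FI"]     # assumed second dimension to vary

def explore(image, locale):
    environment = spawn_vm(f"{image}-{locale}")
    run_suite(environment, ["install", "smoke"])

with ThreadPoolExecutor() as pool:
    for image, locale in product(OS_IMAGES, LOCALES):
        pool.submit(explore, image, locale)
```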

When we want a fresh image to test on, without any scripted actions going on, we take a virtual environment that is at hand, ready to run in the two seconds it takes to type its name into a remote desktop tool.

It makes sense to me to talk about all of this as exploratory testing, and to split it into parts that are by design attended and unattended. A mix of those two extends my exploration reach.

With every test I attend to, whether by proactive choice or reactively, called in by a color other than blue (or an unexpected blue, knowing the changes), I learn. I learn about the product and its quality, about the domain, and, most importantly for exploratory testing, about what more information I want to test for.

Tools guide my thinking. But this tooling does not limit my thinking; it enables it. It makes me a more powerful explorer, but it does not take away my focus on the attended testing. That is where my head is needed to learn, to take things where they should go. Calling *this* manual is a crude underrepresentation of what we do.

4 comments:

  1. Thank you for writing about this more. It's language I've been thinking about a lot since you tweeted about it.

    We spend a fair bit of time exploring this during Automation in Testing. I've been thinking a lot recently about oracles, and oracles being key in tool categorisation for me. There is usually an oracle present in the unattended testing, an oracle that is codified. Codified by someone who believes this area is important, and believes they've designed a good oracle. However, we have different types. Doug Hoffman calls the most common type we see 'true oracles', which is something I declare as a 'check'. But those have to come from some information gathering, which I categorise as ET. It's very rare we implement them blind from docs/stories; usually, someone 'tests' the behaviour at the time of creating the check, before submitting it into the repo/pipeline. So, they've attended to it and acknowledged it's good enough to throw into the unattended mix. Therefore when it 'passes/fails', or detects or doesn't detect change, we act accordingly, usually with the bias that something must be broken. Then we have other types of oracles, such as Heuristic or Consistent, where we are perhaps checking a wider thing such as a golden master/baseline, where we tend to think curiously about the 'pass/fail' and are keen to see the difference.
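    To sketch the two flavours in a few invented lines (none of these names or files come from a real suite):

    ```python
    # Invented names, just to sketch the two flavours of oracle.

    def check_true_oracle(actual):
        """'True oracle': someone codified the one expected answer."""
        assert actual == 42, f"expected 42, got {actual}"

    def check_consistency_oracle(actual_output, golden_path="golden_master.txt"):
        """Consistency oracle: compare against a recorded baseline and
        surface the difference for a curious human to interpret."""
        with open(golden_path) as baseline:
            expected = baseline.read()
        if actual_output != expected:
            print("Change detected: new bug, or new baseline?")
    ```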

    In all occasions, though, I agree: we should be using them to think. Is the fact that it didn't detect any change a good thing? Stuff changed, so why are all the checks happy? They did detect change; excellent, what did they find? Oooo, that's interesting, why didn't we explore that earlier? Or, I never thought that would impact that, I wonder if it also impacts XYZ?
    Putting automated checks/unattended testing into context is something that isn't spoken about enough. Far too many teams think green means A-OK, when it really means no change detected, which makes me really nervous sometimes!

    However, then we have the category of 'test automation' where there isn't an oracle. Where we've simply made tools to support the explorer, so that they can truly explore all their thoughts and ideas, no matter how out there. These tools allow the explorer to explore quicker, deeper, wider, increasing the amount of information they can gather, which can then be used to facilitate decision making, and potentially lead to more unattended testing/checks that will help the team in the future. These tools facilitate that awesome thinking. We should be providing explorers with a whole host of tools, and SDETs/developers/toolsmiths, or whoever can wear that hat, should be building them all the time. Some polished and snazzy, and many just throw-away little scripts that facilitate the current thinking.
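    A throw-away tool in that spirit can be tiny, something like this (the URL is a placeholder):

    ```python
    import time
    import urllib.request

    # No oracle at all: this only gathers response timings for a human
    # to think about. The URL is a placeholder, not a real endpoint.

    for attempt in range(20):
        start = time.perf_counter()
        urllib.request.urlopen("http://localhost:8080/health")
        print(f"attempt {attempt}: {time.perf_counter() - start:.3f}s")
    ```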

    Sorry for the ramble! Basically, I agree, and believe we need to be reading and seeing more on this, so thanks again for writing it.

    Replies
    1. What you essentially say, paraphrasing, is that too many teams don't understand that test automation done well is exploratory testing.

      A scripted testing approach to test automation without the learning loop is dangerous. And very, very popular.

      So is bad exploratory testing. Bad as in testers who don't know how to test but drop test cases, thinking that is all there is. Bad as in not leveraging all levels of test automation and all the brilliant minds we work with.

      I also have my take on oracles and should publish it. We are regularly unhappy with how they don't reveal a problem or a change, and we make them better. They are in flux. They are partial. A script failing to run, with no asserts defined, is already a partial oracle. I don't believe the way you are making the distinction is necessary.
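      For example, a script as assert-free as this sketch (the imported API is hypothetical) still tells us something the moment it fails to run:

      ```python
      # Assert-free on purpose: the imported API is hypothetical. If any
      # step raises, the run goes red; "it executes end to end" is
      # already a partial oracle.

      from product_under_test import open_project, export_report  # hypothetical

      project = open_project("sample.proj")
      export_report(project, "out.pdf")
      print("ran to completion")
      ```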

      However, the question that interests me most now is: how do we effectively teach people? And how do we use test automation as a tool that teaches people?

      I'm not into definitions. At all. I'm a practitioner, who is moving her company and team in the direction the lifecycle cost structure points. I am no longer solving testing, I am solving happiness, collaboration and information relevant to businesses.

  2. Your last paragraph is very important. After all, there is another interpretation of "Tools guide thinking": "When all you have is a hammer, all problems look like nails". Having only one tool - TA - can reduce testing to an exercise in "how do we apply the automated tests to this instance?"

    Better by far to have a toolkit and to use the most appropriate tool for the situation. As you said, that's what makes us powerful explorers.

    Replies
    1. Here's the thing: TA is not one tool. It is a collection of tools. It is extendable. It is built in an exploratory fashion to help us explore.

      There are days when I decide not to use any tools, to force myself to think differently. That is an act of exploration into how it changes me. And I come back knowing what new things I learned, critically assessing how I can turn that into something I can use later. If I want it continuously monitored, all I need to do is document it, in a script. The script is a baseline, not the final result. The baseline changes every step of the way.
