Saturday, September 26, 2020

A Step-Wise Guiding into Automation

Watching a group of testers struggle with automation, I listened to their concerns:

    - Automating was slow. It would easily take a whole week
      to get one thing automated.

    - Finding stuff to automate from manual test cases was hard. It was usually a step, 
      not the whole test case, that could be automated.

    - It was easy to forget earlier ideas. Writing them down in Jira was encouraged, but 
      Jira was where information goes to die. If it didn't die on the way there, as often happened.

I'm sure all their concerns and experiences were true and valid. The way the system of work had been set up did not really give them a good perspective of what was done and what was not, and things were hard. 

In my mind, I knew what was expected of the testing they should do. Looking at the testing they had done, it was all good, but not all that was needed. Continuing exactly as before would not introduce the change we needed. So I introduced an experiment. 

We would, through the shared test automation codebase, automate all the tests we thought we could document. No separate manual test cases. Only automated. We would split our efforts so that we could see coverage through the codebase: first adding skeleton versions of all tests that were quick to add, and then adding actual automation into them a test at a time. Or even a step of a test at a time, if it made sense.

We would refactor our tests so that mapping manual tests to automated tests was not an issue, as all tests were targeted to become automation. 

None of the people in the team had ever heard of the idea that you'd create tests that had only a name and a line of log output, but they agreed to play along. Robot Framework, the tool they were already using, made this particularly straightforward. 

Since I can't share actual work examples, I will give you the starter I just wrote to illustrate the idea of documenting like this while exploring, using prime from eviltester as a test target. 

*** Settings ***
Documentation    Who knows what, just starting to explore

*** Test Cases ***
Example test having NOTHING to do with what we are testing but it runs!
    [Tags]    skeleton
    Log    Yep, 1/1 pass!

This is already an executable test. All it does is log. The name and log message can convey information on the design. Using a tag shows the number of these in the reports and logs. 

Notice that while the Documentation part of this identifies my test target, there is actually nothing automated against it. It is a form of test case documentation, but this time it is in code, moving to more code, and keeping people together on the system we are implementing to test the system we have. 

As I added the first set of skeleton tests and shared them with my team, they were already surprised. The examples showed them what they were responsible for, which was different from their current understanding. And I had already designed my placeholders in a way that could be automated. I had placeholders for keywords that I recognized while designing, and I had placeholders for test cases. 
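To illustrate (with made-up test and keyword names, since I can't share the real ones), a skeleton that holds placeholders for both test cases and keywords might look like this, continuing with the prime example:

```robotframework
*** Test Cases ***
User Learns Whether A Number Is Prime
    [Tags]    skeleton
    Check Primality Of    7
    Log    Placeholder: further steps still to be designed

*** Keywords ***
Check Primality Of
    [Arguments]    ${number}
    Log    Placeholder keyword for ${number}, actual automation to be added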

Finally, at work, I introduced a four-level categorization of "system tests" in automation:

Level 1 is team functionality on real hardware. 

Level 2 is combining level 1 features. 

Level 3 is combining level 1 features for user-relevant flows. 

Level 4 is seeing the system of systems around our thing. 
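As a sketch of how this can look in the codebase (the test names and tag names here are my assumptions, not the actual ones), the levels can be expressed as Robot Framework tags, so the reports show counts per level:

```robotframework
*** Test Cases ***
Device Boots To Operational State On Real Hardware
    [Tags]    level-1    skeleton
    Log    Level 1: team functionality on real hardware

Configured Device Completes A User-Relevant Flow
    [Tags]    level-3    skeleton
    Log    Level 3: combining level 1 features for a user-relevant flow
```

A single level can then be run on its own, for example with robot --include level-1 tests/.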

The work on this continues at work, and the concreteness of this enables me to introduce testing abilities the team may have assumed they can't have. It also enables them to correct me, a complete newbie to their problem domain, on any misunderstandings I have about what and how we could test. 

The experiment is still new, but I am also trying it out in the way I teach exploratory testing. One of the sessions I have been creating recently is on using test automation as documentation and reach. With people who have never written any automated tests, I imagined browser tests in Robot Framework might do. They do, but then they take time away from testing for tool learning. Now I will try whether the first step of creating skeletons enables a new group to stick with exploration first, and only jump into the details of automating after. 

Introducing Exploratory Testing

 "All testing is exploratory". 

I saw the phrase pop up again in my good friend's bio, and stopped to think if we agree or disagree.

All testing I ever do is exploratory. All testing that qualifies as good testing for me is exploratory. But all testing that people ask from me definitely is not exploratory.

I worked at one product company in two separate stints of about three years each, with ten years in between. If there is a type of company that needs good, exploratory testing to survive, product companies are it. The whole concept originated 35 years ago in the Silicon Valley product companies Cem Kaner was working in. Yet when I first joined, exploratory testing was something we did after test cases.

We wrote test cases, tracked the running of those test cases, and had some of the better tooling for a continuously moving test target version with our in-house Word-Excel tooling. What made the tooling better than any of the ones I have in use now was the built-in idea of primarily needing to understand when you last verified a particular test idea, since every change effectively cancels out the results of all the things you've done before. 

On top of test cases, we were asked to do exploratory testing. We were guided to test our tests so that we did not obey the steps, but introduced variance. We were guided to take a moment after each test to think what we would test because of what we had just learned, and do it. We were guided to take half a day every week to just put the test cases aside and test freely. 

It was clear that all testing was not exploratory testing.

Ten years later, there were no test cases. There was one person who would do "exploratory testing", meaning they would follow user flows to confirm things without documentation, without any ability to explain what they were doing other than the rare bugs they might run into, missing out on a lot of problems. And then there was test automation that codified the lessons of how to keep testing continuously, discovered through detailed exploration that found problems. 

It was clear that the testing they now called exploratory testing was not exploratory testing. It was test automation avoidance. And the testing they called test automation was the real exploratory testing. 

I get to go around companies and inside companies, and I can confirm that we are far from all testing being exploratory. We still have the tyranny of test cases in many places. We still have test automation done without exploring while doing it, taking away some of its value. 

We have managers asking for a "planned" approach to testing, asking for "requirements coverage". They ask those as a proxy to good testing, and yet in asking those, often end up getting worse testing, testing that is not exploratory. 

The opposite of exploratory is confirmatory. It does not seek the holes between the things you realized to ask for, but only the things you realized to ask for. And you want more than you know to ask for. 

So I keep going to companies, to introduce exploratory testing. 

I bring down test cases, I move documenting tests to automation. 

I convince people to not care so much for who does what, but work together to learn. Developers are great exploratory testers if only they let go of the recipe where they do what they were told, and own up to creating a worthwhile solution. 

I break the idea of exploratory testing as something you do on top of other testing. I introduce foundations of confirming what we know to be true and learning, and then building figuring out the unknown on top of it. 

We read requirements, we track requirements, we use them as testing starters. But instead of confirming what is there, we seek to find what is not. 

All testing will not be exploratory testing while we write and execute test cases. All testing may have an exploratory angle. But we can do better. 

Monday, September 21, 2020

Exploratory Testing and Explaining Time

If you've looked into exploratory testing, chances are you've run into two models of explaining time.

The first one is built around the observation that there are four broad categories of "work" when you do exploratory testing, and only one of them actually takes your testing forward, and thus visualizing the proportions may be useful. In that model, we split our time into setup, test, bug and off-charter. Setup is anything we do to get ready to test. Test is anything we do to amp up coverage from just starting to getting close to completion. Bug is when we get interrupted by reporting and collaboration on the results of testing. And off-charter is when we don't get to do testing, but to exist in the organization that contains the testing we do. 

The broad categories of work model has been very helpful for me explaining testing time to people over the years. It really boils down to a simple statement: Getting to coverage takes focused time, and if we report the time on it, you may have an idea of testing progressing. Let's not measure test cases or test ideas, but let's measure time that gives us a fighting chance of getting testing done. 

The three other categories of time use outside "test" are set up as the possible enemy. Setup takes time - it's away from testing! Finding many bugs - not only away from testing, but requiring us to repeat all the testing done so far! Off-charter - you're having me sit in meetings! They can also be set up as questions to facilitate a positive impact on the "test" category, as in investing in setup that makes test time sufficient, or investing in pairing on bugs that makes future bugs less frequent. 
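To make the four categories concrete, here is a minimal sketch (the session data is invented for illustration) of how one might total a session's minutes per category and report each category's share of the time:

```python
from collections import defaultdict

# Hypothetical session log: (category, minutes) pairs noted during a session.
# The categories are the four from the model: setup, test, bug, off-charter.
session = [
    ("setup", 20),
    ("test", 45),
    ("bug", 15),
    ("test", 30),
    ("off-charter", 10),
]

def time_breakdown(entries):
    """Sum minutes per category and return each category's share of total time (in %)."""
    totals = defaultdict(int)
    for category, minutes in entries:
        totals[category] += minutes
    total = sum(totals.values())
    return {category: round(100 * minutes / total) for category, minutes in totals.items()}

print(time_breakdown(session))
```

Run over a few sessions, a breakdown like this shows at a glance whether "test" time is getting squeezed out by the other three categories.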

The second model making rounds includes a more fine-grained split of activities happening within exploratory testing sessions, that people could even use to explain their sessions for things like daily meetings. Instead of saying you are doing "testing" day after day for that big login feature, you could explain your focus with words like intake (asking around for information and expectations), survey (using the software to map, but not really test), setup (infra and data specifically), analysis (creating a coverage outline), deep coverage (get serious testing done), closure (retesting and reporting). 

If we map these activities to the four categories, there's a lot of explanation for setup here: intake, survey, setup, analysis and closure are all mostly setup - they don't really build up coverage, but are necessary parts of doing testing properly. 

While the first model has been valuable for me over years of use, I would replace the latter model by finding the words that help you communicate with your team. If these words help, great. If these words silence your team members and create a distance where they don't understand your work, not so great.

The words I find myself using to explain how I progress through a change / feature related exploratory testing are:

  • what am I investing in: in the now, or for later; getting the job done quick vs. enabling myself and others in the future
  • what kind of outputs I'm generating: story of my testing, bug reports, mindmap, executable specifications 
  • what kind of output mindset my work has: generative or completion-oriented; some work generates more work, some gets stuff done
  • whether you see movement: working vs. introspecting; some work looks like hands on keyboard, other work looks like staring at a whiteboard

For me it is important to add more words to "test" too: mapping, acquiring coverage, completing; as well as for "bugs": isolating, documenting, demonstrating. 

Looking at the way I work, I explain very little of testing in daily meetings and don't write a report of any kind. Documentation I leave behind is automation. For transferring deeper knowledge, I pair with people, and to get new people started, I write a one page playbook of testing that sets the stage. The people I explain testing to are ones I pair with for either automation or for doing particular testing. 

The real question of explaining your time is: why are you doing it? Do you grow others with it? Do you explain yourself to the people who hired you, so that they can trust you? Or maybe you explain it to yourself as introspection, to figure out how things could be different for you tomorrow. 

Sunday, September 20, 2020

There is Non Exploratory Testing

Celebrating my personal 25 years of growth as an exploratory tester doing exploratory testing either all or much of my work time, I regularly stop to think what makes it important to me. 

It has given my results as a tester a clear boost over the years. The better I am at it, the more magical the connections of information I can make seem to people who are not as practiced at it. I see everything as a system - people, working conditions and constraints, the software we test, the world around us. And as complex systems, I know I can design probes to change them, but I can't design the change exactly. 

Exploratory testing is how I can get more done with less time and effort. 

Alongside appreciating how it helped me grow, I look at the people around me, some of whom are now growing on their own paths, and some of whom have grown tired but do testing as it is all they know for a career. I recognize plenty of non-exploratory testing around me, even if we like to think that all (good) testing is exploratory. Not all testing is exploratory testing. You can use the very same testing techniques in a non-exploratory and an exploratory fashion. Some people still box exploratory testing into that one Friday afternoon a month when they let go of their stiff harnesses and see what happens when it's just a person with the software created, on a quest to learn something new worth reporting.

Exploratory testing is testing with a mutually supportive set of ideals and practices. Today, I want to talk about the four ideals by which I seek to recognize testing as exploratory testing. 


Learning

At the core of exploratory testing is learning. And not just learning every now and then, but really centering learning, focusing on learning, and letting learning change your plans. 

As you are testing, you come upon an idea. Perhaps what you were doing now is tedious or boring, and the negative feelings let your subconscious roam free: you remember something completely different and connect it with your application. Perhaps what you came to realize was exactly about what you were observing - an asymmetry in functionalities, or a feeling that you've seen something like this before. 

When learning guides you, now you have a choice: you can park the new idea - maybe make a note of it, you can do the new idea - letting go of what was ongoing now, or you can discard it. 

The learning impacts you in the moment, in your short-term plans, and in your long-term plans. It impacts what and how you test, but also what and how you communicate to others. The mindset of learning has you thinking about yourself, your abilities and your reactions, creating tests for yourself: experimenting with new ways that don't come naturally, seeing if what you believe about yourself is true, and becoming a backbone on a journey to be a better person. 

The tool you're sharpening through learning is you and other people around you. 

When you see testing efforts with very little learning happening, test cases being repeated, and a focus on reapplying recipes with low-quality retrospectives and a lack of introspection, you are likely not looking at exploratory testing. 

When you're being trained to become an exploratory tester, you may see it as not being given the answers. You're taught to figure out the answers. You're not given a test case with expected values; you're told there is a feature in your software and you need to figure out if it works. You're expected to turn what appears to be 5-minute work into 5-hour work and 5-day work by understanding how it is connected to information and value in software creation. All work of testing has surprising depth to it, and it is easy to miss. 


Agency

Another significant ideal in exploratory testing is agency. A word that does not even translate to my own language has become core to the way I think of exploratory testing. In sociology, agency is defined as the capacity of individuals to act independently and to make their own free choices. Free choice is not exactly what we give to testers in organizations, as per common experience. Testers and testing are often a very constrained activity, as a project manager of my past once told me: "No one in their right mind could enjoy testing. Marking test cases done daily is the only way to force people to do it."

Exploratory testing isn't fully free choices, but it's making the testing box of choices free and adding agency to other choices testers can do as full members of their work communities. 

Agency is constrained by structure, such as social class (the power assigned to testers), ability (skill in exploratory testing, skill in programming, skill in business domain understanding...) and customs (testers don't make decisions, testers don't program). Exploratory testing is a continuous act of changing those structures to enable the best possible testing. 

And obviously, best possible testing is a journey we are on, not a destination. The world around us changes and we change with it. 

Not everyone in current organizations has agency. In some places, testers are considered a low social class. Testers are not given many choices in how testing is done; the process determines that. Testing without testers does not include many choices either; the process determines that. Our organizations become constrained to our test cases, barely adding tests per recipe and keeping the added tests afloat. That is not exploratory testing. 

For me agency explains what draws me to exploratory testing. Having agency, and allowing others around me agency is my core value. I frame it as being a rebel, finding my own path, and ensuring that while serving the world I am in, I'm a free agent, not an item. 

Opportunity Cost

Neither learning nor agency yet captures the constraint in the world of exploratory testing. We are all constrained by the time we have available. Opportunity cost is the ideal that we respect our limited time and use it for the best possible outcome. We need to make choices between what we do, and whatever we choose to do leaves out something else we could not do in that time. Being aware of those choices, and making them in a way where we also see the cost of the things we don't do, is central to exploratory testing. 

We can spend time hands on with the application or we can spend time creating a software system that does testing of our application. Both are important, valuable activities. We intertwine them under whatever constraints we have in the moment optimizing for the cost towards value. 

Not all organizations allow testing to consider opportunity cost. The experiment is set somewhere else. The skills constrain possible choices, and changing skills isn't considered a real possibility. 

Systems Thinking

Finally, the fourth ideal around exploratory testing is systems thinking. The software my team creates runs with software someone else created, and our users don't care which of us causes the problem they see. A lot of the time, teams have developers feeling responsible for their own code while testers are responsible for the system. Software is part of something users are trying to achieve; that too is part of the system. There are other stakeholders. Software does not exist in a bubble. Exploratory testing starts with rejecting the bubble. 

Not all organizations break the bubble for all their teams. They create multiple bubbles and hierarchies. 


I've seen too much testing that is test-case based, recipe based, founded on bad retrospective capabilities, and limited to a scope smaller than it needs to be, for more reasons than I can list at this moment. The change, as I see it, starts with agency used for learning.