Wednesday, April 26, 2023

Stop Expecting Rules

Imagine having a web application with logout functionality. The logout button is in the top LEFT corner. That's a problem, even if your requirements did not state that users would look for logout functionality in the top RIGHT corner. This is a good example of explicit requirements (being able to log out) and implicit requirements (we expect to find it on the right). There are a lot of things like this where we don't say all the rules out loud.

At work, we implemented a new feature on sftp. When sftp changes, I would have tested ftp. And it turns out that whoever was testing that did not have a *test case* that said to test ftp. Ftp was broken in a fairly visible way that was not really about changing sftp, but about recompiling C++ code that made a latent bug from 2009 visible. Now we fixed ftp, and while whoever was testing that now had a test case saying to test sftp and ftp as a pair, I was again unhappy. With the size of the change to ftp, I would not waste my time testing sftp. Instead, I had a list of exactly 22 places where the sonar tool had identified problems exactly like this used-to-be-latent one, and I would have sampled some of those in components we had changed recently.

The search for really simple rules fails you. Even if you combine two things for your decision, the number of parameters is still too small.

In the latter case, the rules are applied for the purpose of optimising opportunity cost. Understanding how I make my decisions - essentially exploratory testing - would require balancing the cost of waiting another day for the release, the risks I perceive in the changes going with the release, and the functional and structural connections in a few different layers. The structural connection in this case had both the size and the type of the change driving my decisions on how we would optimise opportunity cost.

I find myself often explaining to people that there are no rules. In this case, it would have hurt timelines by maybe a few hours to test ftp even when those few hours would be better used elsewhere. The concern is not so much the two hours wasted, but not considering the options of how those two hours could be invested - in faster delivery or in a better choice of testing that supports our end goal of having an acceptable delivery on the third try.

A similar vagueness of rules exists with greeting people. When two people meet, who says hello first? There are so many rules, and rules on what rules apply, that trying to model it all would be next to impossible. We can probably say that in a Finnish military context, rank drives the rules of who starts, and punishment for not considering the rules during rookie season teaches you rule following. Yet the number of interpretations we can make of the other's intentions when passing someone at the office without them (or us) saying hello is kind of interesting sometimes.

We're navigating a complex world. Stop expecting rules. And when you don't expect rules, the test cases you always run will make much less sense than you think they do.

 

Saturday, April 15, 2023

On Test Cases


25 years ago, I was a new tester working on localisation testing for Microsoft Office products. For reasons beyond my comprehension, my first-ever employer in the IT industry had won a localisation project for four languages, and Greek was one of them. The combination of my Greek language hobby and my Computer Science studies turned me into a tester.

The customer provided us with test cases, essentially describing tours of the functionalities that we needed to discover. I would never have known how certain functionalities of Access and Excel worked without those test cases, and I was extremely grateful for the step-by-step instructions on how to test.

Well, I was happy with them until I got burned. Microsoft had a testing QA routine where a more senior tester would take exactly the same test cases that I had, not follow the instructions but be inspired by them, and tell me how miserably I had failed at testing. The early feedback for new hires did the trick, and I haven't trusted test cases as actual step-by-step instructions since. Frankly, I think we would have been better off if we had described those as feature manuals rather than test cases; I might have gotten the idea that they were just a platform a little sooner.

In the years since, I have created tons of all kinds of documentation, and I have been grateful for some of it. I have learned that instead of creating separate test case documentation, I can contribute to user manuals and benefit the end users - and still use those as inspiration and a reminder of what there is in the system. Documentation can also be a distraction, and reading it an effort away from the work that would provide us results, so there is always a balance.

When I started learning more about exploratory testing, I learned that a core group figuring that stuff out had decided to try to create some distance from test cases (and the steps those entail) by moving to the word charter, which as a word communicates the idea that it exists for a while and can repeat over time but isn't designed to be repeated, as the quest for information may require us to frame the journey in new ways.

10 years ago, I joined a product development organization where the manager believed no one would enjoy testing and that test cases existed to keep us doing the work we hate. He pushed me very clearly to write test cases, and I very adamantly refused. He hoped for me to write them and tick off at least ten each day to show I was working, and maybe, if I couldn't do all the work alone, the developers could occasionally use the same test cases and do some of this work. I made a pact of writing down session notes for a week, tagging test ideas, notes, bugs, questions and the sort. I used 80% of my work time on writing, and I wrote a lot. With the window into how I thought and the results from the 20% of time I had for testing, I got my message through. In the following 2.5 years I would report 2261 bugs that also got fixed, until I learned that pair fixing with developers was a way of not having those bugs created in the first place.

Today, I have a fairly solid approach to testing, grounded in an exploratory testing approach. I collect claims, both implicit and explicit, and probably have some sort of a listing of those claims. You could think of this as a feature list, and optimally it lives not in test cases but in user-facing documentation that helps us make sense of the story of the software we have been building. To keep things in check, I have programmatic tests: some I have written myself, many have been written because I have shown ways the system fails, and they are now around to keep the things we have written down in check as the software changes.

I would sometimes take those tests and make changes to data, running order, or timing, to explore things I can explore building on the assets we have - only to throw most of those changes away afterwards. Sometimes I would just use the APIs and GUIs and think in various dimensions, to identify things with the application and its changes as my external imagination. I would explore alone, without and with automation, but also with people. Customer support folks. Business representatives. Other testers and developers. Even customers. And since I would not have test cases I would be following, I would keep finding surprising new information as I grow with the product.
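To make that kind of variation concrete, here is a minimal sketch in pytest of what bending existing assets can look like. The transfer_client.upload helper and its result shape are hypothetical stand-ins for whatever programmatic tests a team already has; the point is varying the data, the running order, and the timing, not the helper itself.

```python
# A minimal sketch of exploring by varying existing programmatic tests.
# transfer_client.upload is a hypothetical stand-in for an existing helper;
# the variation of data, order, and timing is the point here.
import random
import time

import pytest

from transfer_client import upload  # hypothetical existing test helper

# The scripted test likely pinned one happy-path filename.
# Exploration swaps in data that tends to upset path handling.
EXPLORATORY_NAMES = [
    "report.txt",
    "ä ö å.txt",             # non-ASCII and spaces
    "a" * 251 + ".txt",      # filename length boundary
    "nested/../escape.txt",  # path traversal shape
]


@pytest.mark.parametrize("name", EXPLORATORY_NAMES)
def test_upload_succeeds_or_fails_loudly(name):
    # No fixed expected outcome; the claim under test is that the system
    # either accepts or rejects cleanly instead of corrupting the transfer.
    result = upload(name, payload=b"hello")
    assert result.status in {"ok", "rejected"}, result


def test_upload_under_shuffled_order_and_jitter():
    # The same calls the scripted tests already make, but in random order
    # and with random pauses, looking for state or timing surprises.
    names = EXPLORATORY_NAMES[:]
    random.shuffle(names)
    for name in names:
        upload(name, payload=b"hello")
        time.sleep(random.uniform(0.0, 0.5))  # jitter between calls
```

Most of these variations get thrown away once they have taught us something; the few that reveal a claim worth guarding graduate into the regular programmatic tests.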

Test cases are step-by-step instructions on how to miss bugs. The sooner we embrace that, the sooner we can start thinking about what really helps our teams collaborate over time and through changes of personnel. I can tell you: it is not test cases, especially in their non-automated format.

Tale of Two Teams

Exploratory testing, and creating organizational systems that encourage agency and learning in testing, has been at the center of my curiosity in the testing space for a very long time. I embrace the title of exploratory tester extraordinaire, assigned to me by someone whose software I broke in about an hour and a half. I run an exploratory testing academy. And I research - empirically, in project settings at work - exploratory testing.

Over the years people have told me time and time again that *all testing is exploratory*. But looking at what I have at work, it is very evident this is not true. 

Not all projects and teams - and particularly managers - encourage exploratory testing. 

To encourage exploratory testing, your guidance would focus on managing the performance of testing, not the artifacts of testing. In the artifacts, you would seek structures that invest more when you know the most (after learning) and encourage very different artifacts early on to support that learning. This is why the best-known examples of managing exploratory testing focus on a new style of artifacts: charters and session notes over test cases. The same split between testing as performance (exploratory testing) and testing as artifact creation (traditional testing) repeats in both manual-centering and automation-centering conversations.



Right now I have two teams, one for each subsystem of the product I am managing. Since I have been managing development teams only since March, I did not build the teams; I inherited them. And my work is not to fix the testing in them, but to enable great software development in them. That is, I am not a test manager, I am an engineering manager.

The engineering cultures of the teams are essentially different. Team 1 is what I would dub 'Modern Agile', with emergent architecture and design, no Jira, pairing and ensembling. Team 2 is what I would dub 'Industrial Agile', with an extreme focus on Jira tasks and visibility, separation of roles, and a focus on definition of ready and definition of done.

Results at the whole-team level are also essentially different - both in quantity and in quality. Team 1 has output increasing in scope, while team 2 struggles to deliver anything with quality in place. Some of the difference can be explained by the fact that team 1 works on a new technology delivery and team 2 is on a legacy system.

Looking at the testing of the two teams, the value system in place is very different. 

Team 1 lives with my dominant style of Contemporary Exploratory Testing. It has invested in baselining quality into thousands of programmatic tests run in hundreds of pipelines daily. The definition of a test pass is binary: green or not. Running the programmatic tests is 10% of the effort, maintaining and growing them a relevant percentage more, but in addition we spend time exploring with and without automation, documenting new insights again in programmatic tests at the lowest possible levels. Team 1 first had me as a testing specialist, then decided on no testers, but due to unforeseeable circumstances again has a tester in training participating in the team's work.

Team 2 lives in testing I don't think will ever work - Traditional Testing. They write plans and test cases, and execute the same test cases manually over and over again. When they apply exploratory testing, it means they vanish from regular work for a few days, do something they don't quite understand or explain to learn a bit more about a product area, but they always return to the test cases after the exploratory testing task. Team 2's testing finds little to no bugs, yet gets releases returned because they miss bugs. With feedback of having missed something, they add yet another test case to their lists. They have 5000 test cases, run a set of 12 for releases, and by executing the same 12 they minimise their chances of being useful.

It is clear I want a transformation from Traditional Testing to Contemporary Exploratory Testing, or at least to Exploratory Testing. And my next puzzle at hand is how to do the transformation I have done *many* times over the years as the lead tester, this time as a development manager.

At this point I am trying to figure out how to successfully explain the difference. But to solve this, I have a few experiments in mind:
  1. Emphasize time on product with metrics. Spending your days writing documentation is time away from testing. I don't need all that documentation. Figure out how to spend time on the application, to learn it, to understand it, and to know when it does not work.
  2. Ensemble testing. We'll learn how to look at an application in the context of learning to use it with all the stakeholders by doing it together. I know how, and together we'll figure out how.