Friday, October 30, 2020

A Personal Journey to Exploratory Testing

25 years ago, I became a tester by accident. I didn't intend to be one. Looking back, falling into it was like falling in love, and learning about the profession has only deepened the bond. 

Localization testing with test cases


I was hired during my first years at university to do testing on the side at a localization agency. Localization testing is a special brand of testing where you can always define correctness from the combination of the functionality of the original language reference implementation and a basic understanding of the target language. My target language to begin with was Greek, and my level of understanding it was most definitely basic. 


We were given step-by-step test cases to execute, each step including the always-default comparison to the original language reference you would run on a second computer, identical to the one you were using for the target language. On top of those test cases, we were told we could invoice a fixed number of hours on something they called *ad hoc testing*. We were instructed that it came in two styles: freeform ad hoc testing and directed ad hoc testing. In the first, we had no guidance for our testing. In the latter, we would first choose a goal, an area we were looking at, and stick to our commitment to that goal for a period of time. We were asked to report our bugs attributing their source to the style we were using, and we ended up with measurements on the portions of bugs found in test cases, freeform ad hoc testing, and directed ad hoc testing. 


It was only years later, reading Cem Kaner's foundational book *Testing Computer Software*, that I learned the two styles of ad hoc testing were referred to as exploratory testing, and I started growing a deeper understanding of the style.


For the first projects that I completed, I loved test cases. I was somewhat clueless about all the things the software could and should do. The walkthrough instructions of what to click and what to verify, extended only by a detailed comparison of the same functionality on the two computers, made me feel like I was doing the right thing. I was ticking off test cases. 


I remember one early indication that something was wrong and missing in the approach, though. Our customer had a step in their process where they did something they called quality assurance, selecting some of our test cases and using them for directed ad hoc testing. Contractually this quality assurance step was bound to the payment and allowed for a percentage cut if you had missed reporting bugs. And I and my other off-the-street new tester colleagues were often hitting that cut, not doing as good a job as those doing the quality assurance on our work. 


Functional Testing and Test Design


I changed jobs, and got my first testing job outside localization testing. Now all of a sudden there was no original language reference, but there was a pile of documentation. And there were no ready test cases, but a request to create my own. 


The product company I worked for cared less about me producing test cases, and more about me producing great bug reports. Whatever specifications I was given, I would go try out the claims with the software, and as I was designing tests against the older version of the software, I was already reporting issues. The test cases I wrote became more of an output of my learning the application, there to support me in doing the same things again next time and in understanding in detail where the change planned for the project would happen. 


The experience with test design responsibility was like an awakening. Given the responsibility to think through what I would test, over following someone else's guidelines, I started finding my feet in the detective work I would come to know as exploratory testing. My focus moved from following steps to creating my own steps, and to leaving something useful behind. I still left behind test cases, as that was all I knew. And many, many bug reports that would result in fixes. 


I remember this period as the time when I realized that you can be respected as a tester by being a great tester, and I set my mind on that path. I would learn and become the best tester I could be. 


Testing Education


In 1999, my university started offering its first-ever course on Software Testing. They found an industry teacher, and as someone with some years in the profession on the side of school, I was one of the first to enroll. 


The course was my first connection with the world of testing outside doing what I was told and supported in doing by the companies that I worked for. 


I thought I knew more than I did, and while I passed the course, I did not do great. The course was exercise-heavy, and we moved from writing a test plan to writing test cases and to writing test automation with the tools of that time. But I worked in testing, I was enthusiastic about testing, and a year later I was given the chance to redo the course I had passed, this time as another teacher's course assistant. 


The second year of the course came with a new teacher, and the course was similar in idea but very different in my memories. We again went through the project work from test plan to test cases to execution and automation. Following the lecturer's grading guidelines for the exercises solidified my theoretical and academic understanding that test case design was important. It also enabled me to stay fairly oblivious to how I had actually done my test cases in the functional testing work I was doing - as a result of exploratory testing. 


Becoming the Teacher and a Researcher


Having been a teacher's assistant, I was asked to become the teacher. It fit perfectly the professional growth needs I had identified as someone who was petrified at the idea of having to do a presentation. I needed to rehearse, regularly, and there's no better place for it than showing up to teach something I thought I knew. 


The other side of the teaching job was working as a testing researcher. I read anything and everything on testing I could get my hands on. The work bought me books, and I devoured them. Having to teach, I taught what I knew: the importance of testing, test planning, and test cases. 


As a researcher, I got to look at companies that I didn't work at, companies that had other testers. Our research focus was on lightweight approaches for small product companies, and the *Testing Computer Software* book, describing ways of testing in Silicon Valley and the use of exploratory testing, became my go-to source. 


I remember this era as the time I tried so hard to learn to talk about testing. However, when I would talk to different professors and explain my area of research, they would usually tell me that the thing I was talking about was not testing but project management, risk management, or configuration management, depending on what their respective focus was. I also learned to turn the question around for them, to learn that instead of this journey of humans and psychology I was on, they were hoping I would figure out the formula for deriving test case coverage from a specification. 


My research focus ended up being continuous testing, and splitting testing into feedback cycles at different levels. I was transforming organizations with the ideas of looking at in-sync testing (things you can do in testing on the side of developing) and off-sync testing (things you felt needed to be scheduled separately), and transforming the latter into the first. Just in time for agile to take over the world and for this to become a thing we were all doing. 


My defining experience of the time was that anything I was thinking of teaching forward had already been written down by Cem Kaner. I had also read the works of the visible publishing authors of the time, including Glenford Myers, Boris Beizer and Rex Black, to name the people with a heavy impact on my formative years. 


Becoming a Consultant


The work at university did not pay much, and the combination of having done testing, having read about testing, and having taught and researched testing led to being offered a better-paying position in the industry. I came in as a senior consultant, and with my experience in research, I soon became someone the consultancy would send first to their new customer companies. 


I had done my homework on testing, I had established a good continuous reading and studying regime, and I started doing public presentations as a way to find people interested in the same kinds of topics. 


My consulting work fell into three main categories. I would do Test Process Improvement assessments for companies, using that as a form of research into the state of testing in Finland. I would go kickstart testing process improvement and testing projects with various customers, but I was deemed too important for opening new leads to ever be allowed to properly work as a tester in any of them. And to get my touch on the testing tool the company was working on, I was assigned as test manager for that product, with the limitation that I had to outsource all the work on that one. I was spread thin on everything. 


As I was outsourcing our own product's testing, I got to hire two lovely testers from another company. Funnily enough, our consulting was doing so well we could not afford to have any of our own people away from the paying projects. The two testers had taken my testing course at the university; they were among the ones I had taught test cases to. As I shared my plan for what work we were buying and how I saw it happening in an exploratory testing frame, I remember their surprise as they told me: "Great to see you are doing this in a smart way, and not the way you taught us to do it at the university." 


The insight of realizing the divide between how I continued to teach testing - as it was taught in the books - and the way I continued to guide testing in projects became my second foundational revelation. 


The Two Foundational Revelations


In my second tester job, I had come to learn that the results of testing improved significantly if I did not follow test cases - neither someone else's nor my own. The idea of agency - my free will and my ability to do the best job within the frame I was given - started to become evident. 


And in my consulting job with test manager responsibility, I had come to learn that I would not ask people I expected to do a good job at testing to create test cases and test with them. 


If not before, now it was evident. My work was on exploratory testing. But it would still take 15 more years for this book to emerge.


Becoming a Great Exploratory Tester


Consulting work and opening new leads gave me an unearned reputation as someone who knew testing, but I got to do so little of it that all I knew was how to create a container in which great testing would happen. I taught testing at various organizations, but I didn't really get them (or myself) to test on those courses. I was well on my way to becoming a test manager, which did not fit my aspirations of becoming a great tester. 


In escaping consultancy and this particular employer's unwillingness to let me work as a tester in any of the projects, I thought I would become an independent consultant. I believed myself to be at the height of my testing fame in Finland, and many companies expressed interest in working with me specifically. What I had not counted on was that the consultancy would give me a block list of companies I could not work with, and it included all the companies I had in mind. 


The block was 6 months, and my savings did not allow me to take 6 months off work. When one of the companies on that list offered me a position for those 6 months I couldn't consult, I jumped in and found myself staying for three more years. 


The product company had an exceptional manager, and under his guidance my work was defined to include a healthy dose of hands-on testing in addition to teaching and enabling others. I worried I might not be good, but I set out to learn to be as good as I could ever be. 


The first 6-month project was a great mix of hands-on exploratory testing, documented after testing as test cases I would leave behind, and sharing my test ideas with a developer who would turn them into automation. It was a tiny project, and we got to a way of working where I would openly share my ideas of where bugs might lurk and how I would test, and the developer would surprise me by telling me I was right but too slow: he had already created little tools to help him test those areas and fixed the problems. The project manager was puzzled that I had so few bugs to report, a result of our very tight collaboration where my testing produced ideas of what more to test rather than findings of bugs. 


As I stayed with the company, I moved from the tiny project to the main products the company was creating. My past specialty in localization testing made me report huge quantities of localization bugs and a fair share of functionality bugs. I leaned easily on what came easy to me - enabling others, pointing out missed perspectives - and found myself avoiding solo responsibility for any of the features. 


Instead, I brought in ideas like continuous releases and worked towards an understanding of the importance of testing. I evaluated bug reporting systems, and brought in Jira just before I left. To every training I provided internally, I invited half the attendees from other companies, as mixing up perspectives made us all better. 


This era of my career enabled me to do consulting on the side, and most of my consulting was training that paid well. I used the money made on the side on travel, and found myself learning more about exploratory testing at the London Exploratory Workshop on Testing (LEWT), as well as traveling to conferences to speak even though the company didn't pay for my travel.


Context and How It Matters


With the publication of the Kaner et al. book *Lessons Learned in Software Testing*, I became curious about the idea of context-driven testing. Having researched and consulted for various companies, and having managed testing, I started looking for a stretch that would grow me as a tester in the context-driven mindset. 


I started my context-hopping. I did a fair stint of pension insurance sector testing work, on both the contractor and the customer side, to get a feel for how those differ. I again learned that I was considered too valuable to be allowed to do testing work myself, and was expected to lead the real work from arm's length. I came to learn that had I tested myself, I would have made a more significant impact in some moments I cared about. Instead, I supported different teams in moving from test-case-based testing to exploratory testing, and introduced ways of doing context-appropriate preparation for exploratory acceptance testing and of measuring the impacts of the new style on testing results. 


Again following the context and the idea of getting to hands-on testing, I moved to construction sector work on product development, and a team where I was the only tester. I had a manager very reluctant to allow exploratory testing, and I won him over with an experiment of categorizing my notes while testing. The construction sector work allowed me to test with a team, for years, and really evolve my hands-on testing abilities. I also got to evolve the team of developers from folks who had a high percentage of big customer-visible error messages per login to folks who would look at me and know how I would like them to test to find those issues themselves. 


Bigger Circles


Having found my tester feet in addition to my test manager feet, I would again try something different. I joined an organization with an extensive automation approach, and started figuring out what the right contribution for someone like me was.


I noticed myself sneaking in improvement ideas one at a time from my long invisible list of things we could try, knowing the team's ability to take in ideas was limited. I would suggest the next important test case. I would suggest the next possible way to split a feature into increments. And I would suggest the next way to stretch the process for our results to improve. I would test, and find problems early on in features, teaching everyone, willing and unwilling, what they could do and what I did do. 


We worked with customers in the millions, with continuous releases to people's personal computers, and with only a handful of problems the customers would notice. Everything they noticed would improve the way we delivered. 


I came to view exploratory testing as the frame from which we automate. Automation was our chosen form of documentation, but exploratory testing was where the insight for relevant ideas was created.


New Ways of Learning and Teaching


In the last five years, I found ensemble programming and ensemble testing. The idea of working as a group on a single computer, learning from and contributing to the work, became my go-to method of learning fast from others but also of teaching what I knew. I have learned more, and faster, about exploratory testing and software development through collaborating in paired and ensembled settings than I ever did before. We don't know what we don't know, and we can't ask for those things. Seeing them in action transfers the quiet type of knowledge so relevant for excellence in exploratory testing. 


Social Software Testing Approaches became my signature move. 


Wider Circles


I don't believe any of us are ever ready. New products to test, and new development teams to build those products with, bring us new challenges. Starting from places low on test automation, I went on to grow automation without losing any of the exploratory testing value. Instead of working with just one team, I went on to working with multiple teams. Instead of transforming a team, I'm now figuring out how to transform organizations. 


One person at a time. One bug at a time. One insight at a time. One impact at a time. 


For me, exploratory testing is *the verb* that reminds me that we do more than follow steps even when given steps. And exploratory testing is *the noun* that reminds me of the frame of testing an organization needs to provide to get excellent results. 


I'm writing this book, from these experiences, to lead you faster first to exploratory testing *the verb*, and then also to exploratory testing *the noun*.








Wednesday, October 28, 2020

Staying with Problems Longer

One of my favorite quotes is:

It's not that I'm so smart, I just stay with the problems longer. -- Albert Einstein

This idea has been central over the years as I have worked on understanding what it is that we call "Exploratory Testing". It's not a haphazard smart-clicking thing; it is deep learning while testing.

We do our best work in Exploratory Testing when we stay with the problems longer. When we follow through on things others let go, and investigate. When we dig deeper until we understand what those symptoms mean for our projects. 

Luck - or rather serendipity, the lucky accident - does play a role in testing. As we spend time focusing on coverage, we give our systems a chance to reveal information we did not plan for. Sometimes, but rarely, that happens as soon as we get started. Sometimes, but rarely, that information is so valuable it is worth our entire year's salary. Promising that reliably is a myth, and this post is motivated by a colleague who is using that myth to sell a training course I now categorize as snake oil. 

In a profession where we seek what is real and what is not through empirical evidence, falling for promises like "I teach you how to find a bug worth your entire year's salary in an hour" is just as bad as the excessive promises tool vendors make about test automation. 

In the words of a super-successful golfer:

The more I practice the luckier I get -- Arnold Palmer 

Spend time to be lucky in Exploratory Testing. A gamble-based approach to exploratory testing is off the mark, and it encourages the idea that thoroughness isn't the goal. Serendipity is real, and it becomes real by staying with the application longer and by using test automation to extend your reach in all kinds of directions. 

And yes, I too have found a significant bug in an hour after joining a project. That says NOTHING about me and my abilities as an exploratory tester, and a lot about the project. Let's change projects so that they make us need to be a lot less heroic and a lot more investigative. 


Tuesday, October 27, 2020

The Vuln, the Breach and the Ransom

A week ago, I was reading the news in Finland to learn that a major psychotherapy service provider, Vastaamo, had received a ransom note from someone in possession of their patient database. I could guess I would soon find myself a victim, and a few days later, on Thursday, that's exactly what I was told. The event unfolded some more when on Saturday I, like apparently tens of thousands of others, received a marketing-style personalized ransom email asking me to pay. 

I'm lucky - whatever discussions I have had there have already been seen on social media, and filing a crime report on the ransom was a no-brainer. 

My first reaction was to be upset with Vastaamo for doing a crappy job protecting our information, as the criminal's messages implied that the reason they had the information was that the database was left online, with root:root access credentials. An open door, yes, but not an excuse for stealing something private, and even less of an excuse to blackmail folks. 

My fascination with this case comes from being a professional tester. Over 25 years of working, I have been part of reporting, and getting fixed, hundreds, most likely thousands, of vulnerabilities. Even the specific problem of a weak password protecting relevant data in production has come up more times than I care to count. There's been protecting admin interfaces by thinking a secret address that only we know would protect them. There have been great plans for security controls that turned out to be just plans, never made reality. Well planned is not half done; it isn't even started. 

Bad protection shouldn't happen, and I would love to say you need folks like myself, aware of the issues around security and keen to follow through into practice, to not leak something through a hole this stupid. I even made the claim that this level of protection for *health records* is against the law as I filed that complaint on Vastaamo last Thursday. But bad protection happens, and all it takes is, as the now-fired CEO of Vastaamo claims, a human error. And perhaps the deprioritizing of work that would cover at least the basics of security controls. 

As time passes and the news unfolds, my focus has turned to my annoyance at how the news reports on when the company knew their data was stolen. 

We need to separate, on a timeline, a few concepts. 

  • The Vulnerability is the open or insufficiently locked door 
  • The Breach is the moment someone walked through that door
  • The Ransom is the moment when they used the data illegally in their possession for further steps



Separating these three, we can collect statements of what we know.
  • The vulnerability was fixed in March 2019, and this is how they know data after that hasn't leaked (for this particular incident)
  • You can't fix a thing you don't know is broken. So they knew of the vulnerability even if they didn't know of the breach. 
  • The ransom requests were reported to police in September 2019, and this is how we know when the company knew for a fact they had been breached. 
  • The breach could have happened any time the vulnerability was there, and we have been given two points in time when the data was accessed. We are told the latter is something the company figured out in their security audit activities (which led to fixing the vulnerability). We don't know if the company knew of the November 2018 breach before the ransom request. 

The timeline of these will become very important for the CEO of Vastaamo, as the new owner is interested in whether they were sold a company that knew of the breach. But knowing of a vulnerability is not knowing of a breach. They are separate, and we just don't know yet. 

With the hundreds or thousands of vulnerabilities I have been part of, the number where I am aware of a breach is fewer than one hand's fingers. Sometimes we don't know because knowing requires going back and analyzing. Sometimes we don't have the data to analyze, but more often we end up looking into the future. Similarly, with the hundreds or thousands of vulnerabilities, I can still count on my fingers how many times we have told our customers we had a vulnerability that we fixed. 

We find vulnerabilities through analysis and testing.
We learn of breaches through logs monitoring use and contacts. 
We tell of vulnerabilities to customers when we have identified they were almost certainly breached, and most certainly now protected. 
We fix vulnerabilities in secret to not invite more breaches. 
 
I don't like that the news is passing such a one-sided perspective on an upcoming court case on the Vastaamo CEO, a case that will hinge on the timing of the vuln, the breach and the ransom. Knowing one is not knowing the other. 



Monday, October 19, 2020

Fix by symptoms, fix by causes

Working with a new tester is both refreshing and inspiring, as they go through things I've been through but that my seasoned nature makes different for me now. One of those things is how we communicate about bugs. 

The team took upon themselves a change where some information currently stored in a local database, in our own objects, moves to being stored in a 3rd party system, with a well-defined REST API for getting to the information. The developers would do their bit, and as their flow of pull requests was dying down, the new tester raised the question: is this supposed to be ready? Having already tested it throughout the steps, they knew of many things that didn't quite work, and having had conversations on those with the developers, they expected the developers knew too. Yet the state of functionality was not where done would reside, and the tester was confused.

With the confirmation that the problems were unknown, the developer insisted on separate bug reports for every symptom. And there were quite a few. What had happened was that while a previous change from a local object to a REST API had left the information read-only locally, no one remembered to discuss that this one was asymmetric and didn't: the information needed to come from both the REST API and the local object, and when a principle like this gets confused, data was getting lost in quite confusing ways. 
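
To make the shape of that bug concrete, here is a minimal sketch - hypothetical names and endpoint, not our actual code - of how a refresh that assumes the remote API owns every field silently drops the ones still owned locally:

```python
import requests  # any HTTP client would do

API_URL = "https://thirdparty.example.com/api/items"  # hypothetical endpoint

# Made-up names for the fields that, in this sketch, stay locally owned.
LOCALLY_OWNED = {"notes", "local_status"}

def refresh_item_buggy(local_item: dict, item_id: str) -> dict:
    # BUG: wholesale replacement treats every field as remote-owned.
    # The asymmetric, locally written fields are lost on every refresh.
    return requests.get(f"{API_URL}/{item_id}").json()

def refresh_item_fixed(local_item: dict, item_id: str) -> dict:
    # Take everything from the API, but keep the locally owned fields.
    merged = dict(requests.get(f"{API_URL}/{item_id}").json())
    for field in LOCALLY_OWNED & local_item.keys():
        merged[field] = local_item[field]
    return merged
```

Every confusing symptom traced back to that single confused ownership principle.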

Discussing the symptoms the tester was seeing left us all a little puzzled without a connecting story of why the change was failing. And with a failing change and many reports, the disconnect in communication between us all was clear. 

As soon as I worked together with the new tester, we figured out what was wrong. And the new tester turned their experience into a metaphor.

It was like we had been building a car, and as they sat in the driver's seat, they could experience that they no longer saw through the windshield like before. They could experience that the steering wheel was hard to use, and that reaching the pedals was getting next to impossible. They just could not tell why they experienced all this. They could not tell it was because the seat position had shifted. Not because the change was to change the seat position - the change was to install new car mats on the floor. They were describing symptoms, creating a bug report for each symptom. And the worst part was that they were getting fixes for their symptoms, not yet knowing the reason they had to see those symptoms, and with a nagging feeling that the fixes they were getting might not address why they were seeing them.

When they needed the seat moved back, they instead got a wider windshield, a pedal adapter, and had to live with the bad experience of the steering wheel. 

As testers, we are taught to describe what we observe: the symptoms. Yet the way we are able to frame those symptoms triggers, in the developers, memories and ideas of what the right, proper solution is. And my new colleague is very right that doing their job on reporting is not sufficient. They can, and should, work on setting the relationship right so that instead of fixing by symptoms, we fix by causes. 

Wednesday, October 7, 2020

The Difference of a Test Idea and a Test Case

Year after year, organization after organization, I join in anticipation of not having to see test cases other than those that are automated and that, through continuous execution, guide the process in keeping itself up to date. 

Yet year after year, organization after organization, I learn people still write test cases. Those things where there's a title and steps. Those that set out a flow through the application that needs to be verified, with steps you can choose to disobey because you are not a robot. 

The way I look at it, ideas are cheap, and we don't care much about how well they are documented. The ideas on post-it notes I often find difficult to decipher a month later, but they are critical notes while I'm learning to structure things in a way that I can recall later. Test cases are something we may want to keep around for later; they are more than ideas. They have structure that supports executing them, even if they were checklist-like. They often include steps and an idea of an order of execution. Test cases are better considered an output of testing (and automated!) than an input to testing.
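
As an illustration of a test case as an output - a minimal, hypothetical pytest sketch, with made-up names and behavior rather than any particular project's suite - an idea discovered while exploring ends up captured as an automated check:

```python
# Imagine an exploratory session revealed that usernames with trailing
# whitespace failed to log in. Once fixed, the lesson is kept as automated
# documentation. normalize_username stands in for real application code.

def normalize_username(raw: str) -> str:
    return raw.strip().lower()

def test_trailing_whitespace_is_forgiven():
    # Captured from an exploratory finding, now a regression check.
    assert normalize_username("maaret ") == "maaret"

def test_letter_case_is_ignored():
    assert normalize_username("MAARET") == "maaret"
```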

The thing is, some of our worst experiences will never get published, because we can't talk about them while we are in the middle of them. Inspired by conversations today, I go back to an experience I could not talk about back when it happened, but I can today, with examples.

I was working at a product / consulting company, and the consulting side I was representing was doing quite well - so well, in fact, that it was hard to make space for any of our testing experts to test our own product. The consulting paid so much better. The management pulled one of us consultants out to test release 1, another to test release 2, and so forth. Eventually, I was next in line. 

Thinking about it 20 years later, it is hilarious to realize I was already presenting as the resident testing expert. Going in to test the product, I intended to do well - don't we all. I asked what the ones before me had done, and was pointed to a test management tool by the proud colleagues who had made sure they documented their testing. 

These are actual samples of what I received. 


These were considered test cases. They are a lot worse than the better test cases I have seen over the years, but even the better ones come with the universal stigma of not being useful for good testing. 

These test cases were awful. They still are awful. Time did not make them better, only worse. They were step-by-step instructions, with a lot of effort put into faking testing by elaborately describing the same steps, just so that the tester using them would know they had, for example, moved a box up, down, right and left. They had magic values defined in ways that completely miss the point of why those values were selected; I can only guess the choice of data reflected naming by concept, not naming for the likelihood of finding problems or even pointing at ideas of where problems might be. 
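
To make the naming point concrete, here is a hedged illustration with entirely made-up data - the same field, named by concept versus named for the problem the value might reveal:

```python
# Naming by concept: the intent behind the value is lost.
CUSTOMER_NAME = "Customer1"

# Naming for the likelihood of finding problems: the test idea travels
# along with the data.
NAME_WITH_UMLAUTS = "Päivi Hämäläinen"  # encoding and collation issues
NAME_AT_MAX_LENGTH = "x" * 255          # boundary of a typical column size
NAME_WITH_APOSTROPHE = "O'Brien"        # escaping in SQL or JSON
EMPTY_NAME = ""                         # required-field handling
```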

On my round, the first thing I did was throw these away. These were done by my respected colleagues, and they did the best work they could think of in the situation they had at hand. I could only hope these reflected only the test cases, not the testing that was done. 

I moved the testing to exploratory. No more test cases. The closest we would get was a checklist of features and ideas, expected after it had all been learned and structured through multiple rounds of rehearsal in actual testing. 

While what I run into in organizations nowadays is generally not this bad, it isn't much better either. I've shown again and again that dropping test cases improves testing. The four-week controlled experiment where two weeks of pre-designed test cases found zero problems, and two weeks of freeform exploratory testing with prepared test data found all the problems there were to report from that acceptance test. The removal of 39 pages holding 46 "test cases", of which only 3 pieces of information were something I did not already know on week 1 of joining a new company. And the other cases, where I did not give a public presentation I could dig out years later for the numbers I was sharing. 

I wish the world was ready for good testing, but it still isn't. Automating and working through ideas of better and good seem like our best hope. And I'm delighted that test automation is merely a smart way of documenting your testing.