Wednesday, October 28, 2020

Staying with Problems Longer

One of my favorite quotes is:

It's not that I'm so smart, I just stay with the problems longer. --Albert Einstein

This idea has been central over the years as I have worked on understanding what it is that we call "Exploratory Testing". It's not haphazard, clever-looking clicking around; it is learning deeply while testing.

We do our best work in Exploratory Testing when we stay with the problems longer. When we follow through on things others let go, and investigate. When we dig deeper until we understand what those symptoms mean for our projects. 

Luck - or rather serendipity, the lucky accident - does play a role in testing. As we spend time focusing on coverage, we give our systems a chance to reveal information we did not plan for. Sometimes, but rarely, that happens as soon as we get started. Sometimes, but rarely, that information is so valuable it is worth our entire year's salary. Treating that as something you can promise is a myth, and this post is motivated by a colleague who is using that myth to sell a training course I now categorize as snake oil. 

In a profession where we seek what is real and what is not through empirical evidence, falling for promises like "I teach you how to find a bug worth your entire year's salary in an hour" is just as bad as the excessive promises tool vendors make about test automation. 

In the words of a super-successful golfer:

The more I practice the luckier I get -- Arnold Palmer 

Spend time to be lucky in Exploratory Testing. A gamble-based approach to exploratory testing misses the point and encourages the idea that thoroughness isn't the goal. Serendipity is real, and it becomes real by staying with the application longer and using test automation to extend your reach in all kinds of directions. 

And yes, I too have found a significant bug within an hour of joining a project. That says NOTHING about me and my abilities as an exploratory tester, and a lot about the project. Let's change projects so that they need us to be a lot less heroic and a lot more investigative. 


Tuesday, October 27, 2020

The Vuln, the Breach and the Ransom

A week ago, I was reading the news in Finland and learned that a major psychotherapy service provider, Vastaamo, had received a ransom note from someone in possession of their patient database. I could guess I would soon find myself a victim, and a few days later, on Thursday, that's exactly what I was told. The event unfolded some more when on Saturday I, like apparently tens of thousands of others, received a marketing-style personalized ransom email asking me to pay. 

I'm lucky - whatever discussions I have had there have already been aired on social media, and filing a crime report on the ransom was a no-brainer. 

My first reaction was to be upset with Vastaamo for doing a crappy job protecting our information, as the criminal's messages implied that the reason they had the information was that the database was left online, with root:root access credentials. An open door, yes, but not an excuse for stealing something private, and even less of an excuse to blackmail folks. 

My fascination with this case comes from being a professional tester. In 25 years of work, I have been part of reporting, and getting fixed, hundreds, most likely thousands, of vulnerabilities. Even the problem of a weak password guarding relevant data in production - there's been more of that than I care to count. There's been protecting admin interfaces by thinking a secret address that only we know would protect them. There have been great plans for security controls that turned out to be just plans, never turned into reality. Well planned is not half done; it isn't even started. 

Bad protection shouldn't happen, and I would love to say you need folks like myself, aware of the issues around security and keen to follow through into practice, to not leak through something this stupid. I even made the claim that this level of protection for *health records* is against the law as I filed that complaint on Vastaamo last Thursday. But bad protection happens, and all it takes is, as the now-fired CEO of Vastaamo claims, a human error. And perhaps deprioritizing the work that would cover at least the basics of security controls. 

As time passes and the news unfolds, my focus has turned to my annoyance at how the news reports on when the company knew their data was stolen. 

We need to separate, on a timeline, a few concepts. 

  • The Vulnerability is the open or insufficiently locked door 
  • The Breach is the moment someone walked through that door
  • The Ransom is the moment when they used the data illegally in their possession for further steps



Separating these three, we can collect statements of what we know.
  • The vulnerability was fixed in March 2019, and this is how they know data after that hasn't leaked (for this particular incident)
  • You can't fix a thing you don't know is broken. So they knew of the vulnerability even if they didn't know of the breach. 
  • The ransom requests were reported to the police in September 2020, and this is how we know when the company knew for a fact that they had been breached. 
  • The breach could have happened at any time the vulnerability was there, and we have been given two points in time when the data was accessed. We are told the latter is something the company figured out in their security audit activities (which led to fixing the vulnerability). We don't know if the company knew of the November 2018 breach before the ransom request. 

The timeline of these will become very important for the CEO of Vastaamo, as the new owner is interested in whether they were sold a company that knew of the breach. But knowing of a vulnerability is not knowing of a breach. They are separate, and we just don't know yet. 

With the hundreds or thousands of vulnerabilities I have been part of, the ones where I am aware of a breach number fewer than the fingers of one hand. Sometimes we don't know because knowing requires going back and analyzing. Sometimes we don't have the data to analyze, but more often we end up looking into the future. Similarly, with those hundreds or thousands of vulnerabilities, I can still count on my fingers how many times we have told our customers we had a vulnerability that we fixed. 

We find vulnerabilities through analysis and testing.
We learn of breaches through logs that monitor use, and through contacts. 
We tell customers of vulnerabilities when we have identified that they were almost certainly breached, and are most certainly now protected. 
We fix vulnerabilities in secret to not invite more breaches. 
 
I don't like that the news is passing such a one-sided perspective on an upcoming court case concerning the Vastaamo CEO, one that will hinge on the timing of the vuln, the breach and the ransom. Knowing of one is not knowing of the others. 



Monday, October 19, 2020

Fix by symptoms, fix by causes

Working with a new tester is both refreshing and inspiring, as they go through things I've been through, and my seasoned nature now makes those things different for me. One of those things is how we communicate about bugs. 

The team took upon themselves a change where some information currently stored in a local database in our own objects now moves to being stored in a 3rd party system, with a well-defined REST API to get to the information. The developers would do their bit, and as their flow of pull requests was dying down, the new tester raised the question: is this supposed to be ready? Having already tested it throughout the steps, they knew of many things that didn't quite work, and having had conversations on those with the developers, they expected the developers knew too. Yet the state of the functionality was not where done would reside, and the tester was confused.

With the confirmation that the problems were news to them, the developer insisted on separate bug reports for every symptom. And there were quite a few. What had happened was that an earlier, similar change from a local object to a REST API had kept the information available locally as read-only after the change; no one remembered to discuss that this change was asymmetric and didn't. Both the information from the REST API and the local object were needed, and when a principle like this gets confused, data was getting lost in quite confusing ways. 
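
To make that kind of asymmetry concrete, here is a minimal sketch with entirely made-up names and data, not our actual codebase: part of an object now comes from the REST API, part still belongs only to the local database, and load and save paths written as if everything were remote quietly drop the local half.

```python
# Hypothetical sketch: an object split between a REST-backed system and local data.
from dataclasses import dataclass, field

# Pretend this dict is the 3rd party system behind the REST API.
REMOTE = {"item-1": {"name": "Item one", "owner": "team-a"}}

def fetch_remote(item_id):
    """Stand-in for a GET against the well-defined REST API."""
    return dict(REMOTE[item_id])

def push_remote(item_id, data):
    """Stand-in for a PUT; the remote only knows its own fields."""
    REMOTE[item_id] = {k: v for k, v in data.items() if k in ("name", "owner")}

@dataclass
class LocalItem:
    item_id: str
    remote_fields: dict = field(default_factory=dict)  # mirror of the REST data
    local_notes: str = ""  # still lives only in the local database

def load(item_id):
    item = LocalItem(item_id)
    item.remote_fields = fetch_remote(item_id)
    # Asymmetry bug in the sketch: local_notes is never loaded, because the
    # change assumed *everything* now comes from the REST API.
    return item

def save(item):
    # And on the way back, only the remote fields are written; the local half
    # of the object is silently dropped, so data "gets lost" in confusing ways.
    push_remote(item.item_id, item.remote_fields)

if __name__ == "__main__":
    item = load("item-1")
    item.local_notes = "agreed with the customer on the phone"
    save(item)
    print(repr(load("item-1").local_notes))  # '' - the local half never round-trips
```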

Discussing the symptoms the tester was seeing left us all a little puzzled, without the connecting story of why the change was failing. And with a failing change and many reports, the disconnect in communication between us all was clear. 

As soon as I worked together with the new tester, we figured out what was wrong. And the new tester turned their experience into a metaphor.

It was like we had been building a car, and as they sat in the driver's seat, they could tell they no longer saw through the windshield like before. They could tell the steering wheel was hard to use, and that reaching the pedals was getting next to impossible. They just could not tell why they experienced all this. They could not tell it was because the seat position had shifted. Not that the change was meant to move the seat - the change was to install new floor mats. They were describing symptoms, creating a bug report for each symptom. And the worst part was that they were getting fixes for their symptoms without yet knowing the reason they had to see those symptoms, and with a nagging feeling that the fixes they were getting might not address why they were seeing them.

When they needed the seat moved back, they instead got a wider windshield, a pedal adapter, and had to live with the bad experience with the steering wheel. 

As testers, we are taught to describe what we observe: the symptoms. Yet the way we frame those symptoms triggers, in the developers, memories and ideas of what the right, proper solution is. And my new colleague is very right that doing their job of reporting is not sufficient. They can, and should, work on setting the relationship right so that instead of fixing by symptoms, we fix by causes. 

Wednesday, October 7, 2020

The Difference of a Test Idea and a Test Case

Year after year, organization after organization, I join in anticipation of not having to see test cases other than those that are automated and that, through continuous execution, guide the process of keeping themselves up to date. 

Yet year after year, organization after organization, I learn that people still write test cases. Those things with a title and steps. Those that set out a flow through the application that needs to be verified, with steps you can choose to disobey because you are not a robot. 

The way I look at it, ideas are cheap, and we don't care much about how well they are documented. I often find the ideas on post-it notes difficult to decipher a month later, but they are critical notes when I'm learning to structure things in a way that I can recall later. Test cases are something we may want to keep around; they are more than ideas. They have structure that supports executing them, even if they are checklist-like. They often include steps and an idea of an order of execution. Test cases are better considered an output of testing (and automated!) than an input to testing.
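
As an illustration of that distinction, here is a minimal sketch with made-up names and a fake in-memory application standing in for a real one: the test idea is the cheap, one-line input, while the test case lives as executable output whose steps and order continuous execution keeps honest.

```python
# Test idea (cheap input, fits on a post-it): "a saved draft survives logout".

class FakeMailApp:
    """Stand-in for the application under test, just enough to run the check."""
    def __init__(self):
        self._drafts = []
        self._logged_in = True

    def save_draft(self, body):
        self._drafts.append(body)

    def logout(self):
        self._logged_in = False

    def login(self):
        self._logged_in = True

    def open_drafts(self):
        return list(self._drafts)

# Test case as output: the title, the steps and their order live in executable
# form, and continuous execution keeps them honest instead of letting them rot
# in a test management tool.
def test_saved_draft_survives_logout():
    app = FakeMailApp()
    app.save_draft("hello")
    app.logout()
    app.login()
    assert "hello" in app.open_drafts()

if __name__ == "__main__":
    test_saved_draft_survives_logout()
    print("test_saved_draft_survives_logout passed")
```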

The thing is, some of our worst experiences will never get published, because we can't talk about them while we are in the middle of them. Inspired by conversations today, I go back to an experience I could not talk about back when it happened, but I can today, with examples.

I was working with a product / consulting company, and the consulting side I was representing was doing quite well - so well, in fact, that it was hard to make space for any of our testing experts to do testing on our own product. The consulting paid so much better. The management pulled one of us consultants out to test release 1, another to test release 2, and so forth. Eventually, I was next in line. 

Thinking about it 20 years later, it is hilarious to realize I was already presenting as the resident testing expert. Going in to test the product, I intended to do well - don't we all. I asked what the ones before me had done, and was pointed to a test management tool by the proud colleagues who had made sure they documented their testing. 

These are actual samples of what I received. 

[sample test case screenshots not reproduced here]

These were considered test cases. They are a lot worse than some of the better test cases I have seen over the years, but even the better ones come with the universal stigma of not being useful for good testing. 

These test cases were awful. They still are awful. Time did not make them better, only worse. They were step-by-step instructions, with a lot of effort spent faking testing by elaborately describing the same steps just so that the tester using them would know they had, for example, moved a box up, down, right and left. They had magic values defined in a way that completely misses the point of why those values were selected; I can only guess the choice of data reflected naming by concept, not naming for the likelihood of finding problems or even for pointing at ideas of where problems might be. 

On my round, the first thing I did was throw these away. They were done by my respected colleagues, who did the best work they could think of in the situation they had at hand. I could only hope they reflected the test cases, not the testing that was actually done. 

I moved testing to exploratory. No more test cases. The closest we would get was expecting a checklist of features and ideas after it had all been learned and structured through multiple rounds of rehearsing in actual testing. 

While what I run into in organizations nowadays is generally not this bad, it isn't much better either. I've shown again and again that dropping test cases has improved testing. The four-week controlled experiment where we used pre-designed cases for two weeks, finding zero problems, and freeform exploratory testing with prepared test data for the other two, finding all the problems there were to report from that acceptance test. The removal of 39 pages with 46 "test cases" in which only 3 pieces of information were something I did not already know when joining a new company on week 1. And the other cases, where I did not do a public presentation I could dig out years later for the numbers I was sharing. 

I wish the world were ready for good testing, but it still isn't. Automating, and working through ideas of better and good, seems like our best hope. And I'm delighted that test automation is merely a smart way of documenting your testing.