Tuesday, August 14, 2018

Options that Expire

When you're exploratory testing, the world is an open book. You get to write it. You consider the audience. You consider their requirements. But whatever those are, what matters is your actions and choices.

I've been thinking about this a lot recently, observing some problems that people have with exploratory testing.

There is a group of people who don't know at all what to do with an open book. If not given some constraints, they feel paralyzed. They need a support system, like a recipe to get started. The recipe could be to start with specifications. It could be to start with using the system like an end user. It could be using the system by inserting testing values of presumed impact into all the fields you can see. It could be a feature tour. It could really be anything, but this group of people want a limited set of options.

As we start working towards more full-form exploratory testing, I find we often debrief to discuss what one could do to start. What are my options? What is possible, necessary, even right? There is no absolute answer to that question, but a seemingly endless list of ways to approach testing, and intertwining them creates a dynamic that is hard, if not impossible, to describe.

I find myself talking about the concept of options that expire. When you're testing, there is only one time when you know nothing about the software - before you started any work on it. The only time to truly test with those eyes is then. That option expires as soon as you work with the application; it is no longer available. What do you then do with that rare moment?

My rule is: try different things at different times. Sometimes start with the specification. Sometimes start with talking to the dev. Sometimes start with just using it. Sometimes pay attention to how it could work. Sometimes pay attention to how it could fail. Then stop to think about what made this time around different. If nothing else, the intentional change makes you more alert in your exploration.
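This rule of deliberately varying your starting point can be sketched as a tiny session picker. Everything below - the approach names as strings, the picker function itself - is my own hypothetical illustration of the heuristic, not a tool the post describes:

```python
import random

# Starting points named in the post, encoded as strings (illustrative only).
STARTING_POINTS = [
    "read the specification first",
    "talk to the dev first",
    "just use the application",
    "focus on how it could work",
    "focus on how it could fail",
]

def pick_starting_point(recent):
    """Pick a starting point while avoiding recently used ones,
    so each session intentionally differs from the last."""
    options = [p for p in STARTING_POINTS if p not in recent]
    return random.choice(options or STARTING_POINTS)

# Last session started by just using the application; this one won't.
choice = pick_starting_point(["just use the application"])
```

The point is not the randomness itself but the forced variation: recording what you started with last time and excluding it makes the intentional change explicit.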

I've observed other people's rules to be different. Some people always start with the dev, to cut through the unnecessary into the developer intent. That is not a worse way or a better way, but a different way. Eventually what matters, for better or worse, is when you stop. Do the expiring options fool you into believing things are different than they are? Make you blind to *relevant* feedback?

Saturday, August 4, 2018

Test-Driven Documentation

I remember a day at work from over a decade ago. I was working with many lovely colleagues at F-Secure (where I returned two years ago after being away for ten years). The place was full of such excitement over all kinds of ideas - and not only leaving things at the idea level, but trying things out.

The person I remember is Marko Komssi, a bit of a researcher type. We were both figuring out stuff back then in Quality Engineering roles, and he was super excited and sharing about a piece he had looked into long before I joined. He had come to realize that, in the time before agile, if you created rules to support your document review process, the top 3 rules would find a significant majority of the issues.

The excitement over ideas was catchy, and I applied this in many of my projects. I created tailored rules for requirements documents in different projects, based on a deeper understanding of the quality criteria of that particular project, and the rule creation alone helped us understand what would be important. I created tailored rules for test plan documents, and they were helpful in writing project-specific plans instead of filling in templates.

Over time, it evolved into a concept of Test-Driven Documentation. I would always start writing a relevant document by creating rules I could check it against.
The reason I started writing about this today is that I realized it has, in a decade, become an integral part of how I write documentation at work: rules first. I stop to think about what would show me that I succeeded with what I intended, and instead of writing long lists of rules, I find the top 3.
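As a rough sketch of the "rules first" idea: define the top-3 rules before the document exists, then check the draft against them. The rule names and checks below are hypothetical placeholders of my own, not the author's actual rules:

```python
# Hypothetical top-3 rules, written before the document itself.
def states_intent(text):
    # Does the draft say what it is trying to achieve?
    return "goal" in text.lower() or "intent" in text.lower()

def stays_concise(text):
    # An arbitrary illustrative word budget.
    return len(text.split()) <= 300

def names_audience(text):
    # Does the draft say who it is for?
    return "audience" in text.lower() or "reader" in text.lower()

TOP_3_RULES = {
    "states its intent": states_intent,
    "stays concise": stays_concise,
    "names its audience": names_audience,
}

def review(draft):
    """Return the names of the rules the draft fails."""
    return [name for name, check in TOP_3_RULES.items() if not check(draft)]

draft = "The goal of this plan is to focus testing on release risks."
print(review(draft))  # -> ['names its audience']
```

The checks themselves are crude on purpose; the value is in having to articulate the three rules before writing, which is the part the post says clarifies intent.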

Many of the different versions I've written are somewhere on my messy hard drive, and I need to find them to share them. 

Meanwhile, you could just try it for your own documentation needs. What are the 3 main rules you review a user story with? If you find your rules to be:
  • Grammar: Is the language free of typos?
  • Format: Does it follow the "As a... I want... so that..." template?
  • Completeness: Does it say everything you are aware of that is in and out of scope?
you might want to reconsider what rules make sense to use. Those were a quick documentation of discussion starters I find wasteful in day-to-day work.

Tests first, documentation then. Clarify your intent. 

Friday, August 3, 2018

Framing an experiment for recruiting

I was having a chat with my team folks, and we talked about how hard it is to feel connected with people you've never met. We had new colleagues in a remote location, and we were just about to head out to hang out and work together to make sure we felt connected. My colleagues jumped into expressing the faceless nature of calls, triggering me to share a story of what I did this summer.

During my month of vacation, I talked to people over video, each call 15 minutes, about stuff they're into in testing. I learned about 130 topics, each call starting with the assumption that everyone is worthwhile, yet we need some way of choosing which 12 end up on stage. The majority of them I talked to face to face, and I know from past years that for me it feels as if we really met when we finally have a chance to physically be in the same space.

This is the European Testing Conference Call for Collaboration process, and I cherish every single discussion the people have volunteered to have with me. These are people, and everyone is valuable. I seek stories: good delivery, and delivery I can help become good because the stories are worth it for the audience I design for.

This story led me to say that I want to try doing our recruiting differently. I want to change from reading a CV and deciding if that person is worth talking to, into deciding that if they found us worthy of their time, they deserve 15 minutes of my life.

I've automated the scheduling part, and my experience of 2 years and 300 calls is that I know how to do this without it eating away my time. All I need is those 15 minutes. I can also automate the rules of when I'm available for a discussion and leave the scheduling to the person I will talk to.

So with the 15 minutes of face time, what will we do? With European Testing Conference, we talk about talk ideas and the contents of the talks. With the recruitment process, we test with the tester candidates and code with the programmer candidates. And my teams seem to be up for the experiment!

My working theory is that we may be losing access to great testers by prematurely ruling them out. Simultaneously, we may be wasting a lot of effort discussing whether they should make it further in our process. We can turn this around and see what happens.

Looks like there is a QE position I could start applying this to next week! And a dev position we can apply it to in about a month.

Meeting great people for 15 minutes is always worth the time. And we are all great - in our own different unique ways.

Wednesday, August 1, 2018

Seeing what is Right when Everyone Seems Happy with Wrong

I'm uncomfortable. I've been postponing taking this topic up in a blog post, convinced that there must be something I'm missing. But it may be that there is something I'm seeing that many others are not seeing.

A few months ago, I volunteered at work to try out crowdtesting on my product. It wasn't our first try of crowdsourcing; quite the contrary, as I learned we were already a happy user of one of those services for some other products.

The concern I had surprised everyone, providing a perspective no one else seemed to have thought about. I was concerned whether using a service like this would fit my ideas of what is ethical. I was not focused on the technicalities, value, and usefulness of the service, but on whether using the service was right. It isn't obviously wrong, since it is legal. Let's look a little more at where I come from.

I live in Finland, where we have a very strongly held idea that work needs to be paid for. Simultaneously, we are a promised land of non-profit organizations that run on volunteer work, with a huge number of such organizations relative to the number of people. But the overall structure of how things are set up is that you don't have free labor in companies.

The unemployed and the trainees get paid for the work they are doing. And this forces companies to be on good behavior in the social structures and not leech off the less fortunate.

So if my company in general wants someone to do testing, they have three options.

1) Customers Test
They mask it as "no need to test" and force their end users to test, and pay for structures that enable them to figure out what a normal user is actually complaining about, and structures for fixing things fast when the problems hit someone who not only raises their voice but walks out with a relevant amount of money.

2) Someone in their employee list tests
They pay someone for some of the testing, and some companies pay testers to get more of the whole of testing done. If you tested half of what you needed, you still tested. And this is the narrative that makes the current "programmers test it all" trend discussions so difficult in practice. There's testing and there's testing. It takes a bit of understanding to tell the two apart.

3) Someone in a service provider organization tests
They pay a vendor to do some of the testing. The vendor is an abstraction, a bubble where you conveniently allocate some of your responsibilities, and in many cases you can choose to close your eyes at the interface level.

Crowdtesting belongs to the third option, and in my view relies on closing your eyes at the interface level. Crowdtesting is the idea of paying a company for the service of finding you a crowd. They find the crowd by paying those testers, but I'm not convinced that the models of paying represent what I would consider fair, right, and ethical.

So, I pay a legitimate company 5000 euros for them to do some testing we agree on under the label "crowdtesting". Yay! Is that really enough thinking on my part? You get 30 testers with that, so cheap! (The numbers are in scale, but not actuals.) They even promise most of them will be in Finland and other Nordic countries. If your alarm bells aren't ringing, you are not thinking.
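To see why the alarm bells should ring, here is the back-of-the-envelope division on the post's own illustrative numbers (which, as stated, are in scale but not actuals):

```python
# Illustrative numbers from the post, explicitly not actuals.
monthly_fee_eur = 5000
testers = 30

# Upper bound on what each tester could see per month,
# before the vendor takes its own margin.
per_tester_eur = monthly_fee_eur / testers
print(round(per_tester_eur, 2))  # -> 166.67
```

Under 170 euros per tester per month at the absolute maximum, in a Nordic cost-of-living context, is the arithmetic behind the unease the post describes.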

The more traditional companies producing things like coffee or clothes know painfully well it isn't. You really don't want your fashion brand to be associated with child labor, inhumane working conditions, or anything else nasty like that. Surely you can save loads of money using companies whose delivery chain hides the unethical nature of how the service is created, but you risk a reputation hit. Because someone, hopefully, cares.

Crowdtesting companies are not in the business of child labor, at least to my knowledge. But they are in the business of forcing my people (testers) away from being paid for their work, to being paid for their results. And with results being bugs and checkmarks in test cases, it's actively generating a place where the folks in these programs are not necessarily paid for the work they do.

The way the monthly fee is set up for the customer makes this worse. There's a limit on how many of these paid checkmarks you can allocate a month, but you're told you're allowed to ask for unlimited exploratory testing on top of that. The financial downside for the people doing testing at the low end of this power dynamic is that they are only paid for the bugs the customer accepts.

Many people seem really happy with doing testing in these schemes.

The ones promoted to the middle management layer get paid for their hours, leaving even less of the financial cake for the low end of the power dynamic. But why would they care, they get paid.

The ones privileged enough to not really need a job that pays you a salary get paid whatever, but since the money never mattered, why should they care? This is a hobby on the side anyway.

The ones living in cheaper countries getting paid the same amount as the testers from the Nordics may actually make a decent amount of money in finding problems, and might not even have a choice.

The really good ones who always find problems can also get paid, as long as the effort of finding something relevant isn't getting too high.

But the ethical aspect I look at is local. At the low end of the power dynamic are the testers kicked out by projects that claim to no longer need them but actually do. The projects just no longer want to pay for all of their work, and they hide this fact in the supply chain.

Maybe I should just think more global. I don't know. But I am uncomfortable enough to step away. I don't want to do this.

With the same investment, I can get someone who works close to the team and provides more value. Without being exploited. With possibilities of learning and growing. Being appreciated.

And I say this looking from a higher end of that dynamic. I'm not losing my job in this shift.

I don't like crowdsourcing. Do you?

My first job and what has changed since

Falling for testing is a story I've shared multiple times in various places. It's not like I intended to become a tester. I had studied the Greek language on the side of high school, and as I was in my first years of University and Computer Science, someone put things together. They talked me into just trying the entry exercise: comparing English and Finnish versions of Wordpad with seeded bugs, writing step-by-step problem reports. And when offered a job on the side, I just did not know how to say no.

Thinking back to that time, I had no clue what I was getting myself into. I did not know that falling for testing was like falling in love.

So I ended up testing Microsoft Access, the database-ish extension to the Office family - in the Greek language. As new testers, we were handed test cases used across multiple languages, and I think my office had four languages to test, Finnish being one of them. Looking back, I went to work whenever I had promised, did whatever was assigned to me, and probably did a decent job of following orders, including tracking the work and diligently comparing the English and the Greek versions to identify functional differences to log as bugs. I remember the fear of "QA", which back then meant that some of the senior testers at Microsoft would sample some of our test cases and see if they found problems we missed.

I had a very nice test manager, and as I was generally interested in how programs work, I was allowed to do something called "exploratory testing". I had absolutely no guidance on how to do it. I was just told how many hours I could use on doing whatever I wanted with the application.

Thinking back, I found myself stuck a lot. I had no strategies for how to approach it. I had a database project in mind, so I was basically implementing that, creating some screens. Here, unlike with the test cases, I wasn't particularly diligent in my comparisons to the English version. I had no idea how to think about coverage. With the information I have today, I know I did a bad job. I found no problems. I was handed a blank check, and for all anyone knew, I could have used it for just sitting at the coffee table drinking something other than the coffee I never learned to enjoy.

Nowadays, if I'm handed a blank check like that (and I regularly am), I pay attention to the value that investment provides. I create coverage outlines that help me make sense of what I have covered and what I've realized I could cover. When I feel stuck, I decide on something I will do. I often find myself starting with tutorials or technical help documentation. I select something and figure it out. All of these are things no one told me to do back then.

The pivotal moment between then and now is the first time I entered a project that had no test cases unless I created some. The change from a passive user of test cases to an active explorer is what sealed the love I still feel for testing.

The book I'm working on (https://leanpub.com/exploratorytesting/) hopes to capture some of the things I wish someone would have taught me when I was new. It builds on the basics to take people closer to testing I do now. That's the vision. Writing it down felt urgent enough to get up in the middle of the night.

Testing is the thing, not testers. 

Tuesday, July 31, 2018

Folks tell me testing happens after implementation

As I'm slowly orienting myself to the shared battles across more and more roles throughout the organization, returning to the office from a long and relaxing vacation, I'm thinking about something I was heavily drawing on whiteboards all around the building before I left.

The image is an adaptation based on my memory of something I know I learned with Ari Tanninen, and it probably doesn't do justice to the original illustration. But the version I have drawn multiple times helps discuss an idea that is very clear to me and seems hard for others: in the beginning of the cycle, testing feeds development, and in the end of the cycle, development feeds testing.

There are basically two very different spaces on the way from idea to delivery. There's the space that folks like myself, testers, occupy together with business people - the opportunity space. There are numerous ideas I've participated in saying goodbye to by testing them. And it's awesome, because in the opportunity space, ideas are cheap. You can play with many. It's a funnel for choosing the one to invest in, and not all ideas get through.

When we've chosen an idea, that's when the world most development teams look into starts - refining the idea, collecting requirements, minimizing it to something we can deliver a good first version of. Requirements are not the start of things, but more like a handoff between the opportunity space and the implementation space.

The implementation space is where we turn that idea into an application, a feature, a product. Ways of dealing with things there are more like a pipeline - while something is in it, nothing else gets through. We need to focus, collaborate, pay attention. And we don't want to block the pipeline for long, because while it is focused on delivering something, the other great ideas we might be coming up with won't fit in.

A lot of the time, we find the seeds of conflict in not understanding the difference between the cheap ideas we can toy with in the opportunity space and the selected ideas that turn expensive as they enter the implementation space. Understanding that both exist, and play by very different rules, seems to mediate some of that conflict.

As the lead tester (with the lead developer by my side), we are invited to spend as much of our effort in the opportunity space as we deem useful. It's all one big collaboration.

Looking forward to the agreed "Let's start our next feature by writing the marketing text, together". Dynamics and orders of things are meant to be played with, for fun and profit.

Stop thinking like a tester

I'm very much an advocate for exploratory testing, and yet I find myself seeking the kind of thing Marlena Compton seems to be doing in the space of Extreme Programming and pairing - seeking the practicality, the inclusion, and the voices that keep getting shouted down by the One Truth.

Whenever I find people doing good testing (including automation), I find exploratory testing plays a part. The projects lacking exploratory testing are the ones I can break in two hours.

So clearly the focus and techniques I bring into a project, as I apply them, are something special.

In this particular project, some of the observations I shared led to immediate fixes, easing things for whoever came after me. Some of the fixes (documentation) were done in a mid-term timeframe, and looking at the documentation now, I don't want to test it, I want to write it better. And some of the fixes remained promiseware (making the API discoverable, which it isn't - a message well delivered by watching a group of people with relevant skills fail miserably with its use).

So sometimes I've found myself saying that I think like a tester. I do this stuff that testers do. It's not manual, so it must be the way I think, as a tester.

I've seen the same or similar curiosity, and the relentless will to believe that things can be different, in other roles too. My favorite group of like-minded peers is programming architects, and I get endless joy from those conversations where I feel like I'm with my people.

So I came to a conclusion. Saying that we teach how to think like a tester is like brute-forcing your thinking patterns onto others. Are you sure the way other people think wouldn't actually improve the way you're building things, if you carefully made sure everyone in the team is celebrated for their way of thinking?

I sum this up like this:
Be your own, true, unique self, and help others do that too. Growing is a thing, but while growing, be careful not to force into hiding the good those people already have.

It took me so much time to realize which things I do because they are expected of me and my kind, and which I do because I believe they are the right thing for me to do. Appreciating differences should be a thing. Think your way.