Thursday, August 30, 2018

The Rewrite Culture and Programming Ecosystems

With 25 years in testing and close collaboration with many software teams working in different languages, I have found one particular aspect of software development that is close to my polyglot programmer heart: I love observing programming ecosystems.

A programming ecosystem is a construct around a language, and it shows in many ways. We see some of its aspects in jokes about how different programming languages approach things. For now, I see it as a combination of at least the following:
  • programming language
  • preferred IDE
  • 3rd party tooling and use of libraries
  • community values
  • culture of what you can do (without getting in too much trouble)
Moving between ecosystems makes you pay attention to how things have shifted and moved. The bumblebees moving between ecosystems seem to advance things within them, trying to bring the better parts of one to another.

I'm thinking of this today because someone proclaimed yet another rewrite. They will scrap all there was, because figuring it out is too difficult and it is ugly code (tm), and write it again, beautifully. I just think the definition of beautiful here is "I understand it", not "we all understand it". And my tester spidey sense is tingling, because every single rewrite I've ever seen results in losing half of the features: the ones that were not evident from the code, but held in the tester memory bank of the purposes the software exists for and the flows it is supposed to provide its users.

I have realized that rewrite culture is part of the C++ ecosystem a little more than of some other language ecosystems. In the Crafter community we have discussions that generally favor refactoring over rewriting (gradually cleaning up); suggesting that in the C++ ecosystem feels like suggesting something unspoken. Knowing how much the tooling differs from, for example, the C# ecosystem, this all makes more sense. Refactoring C++ is a different kind of pain. And it makes sense that this ecosystem moves the pain more towards those who end up testing the result with a lot of contextual information.

And please, don't tell me "not all C++ developers". Enough of them do to consider it a common view. And not just in this particular organization.

We become what we hang out with, unless we actively put energy into learning new patterns that contradict what comes easy.

I love programming. And in particular, I love the programming ecosystems and how they talk to me about the problems I end up finding (through paying attention) as a tester.

Seeing Negative Space

Have you ever heard the saying "There is no I in TEAM"? And the proper response to it: "Yes there is, it is hiding in the A-holes." This is an illustration of the idea that negative space carries meaning: with the right font, you can very clearly see an "i" inside the A.

I'm thinking of this because of something I just tested that made me realize I see negative space a lot as a tester.

The feature of the day was a new setting that I wanted to get my hands on. Instead of doing what a regular user would do and looking only at the settings that were revealed, I went under the hood to look at them all. I found the one I was looking for, set it to False as I had intended, and watched the application behavior not change. I felt disappointed. I was missing something.

I opened a project in Stash that is the master of all things settings. I was part of pushing for a central repo with documentation more than a year ago, and had the expectation that I might find my answers there on what I was missing. I found the setting in question with a vague hint that it would depend on a mode of some sort, which I deduced to mean another setting. I asked, and got the name of the setting I needed, with the obvious name of "feature_enabled". I wasn't happy with just knowing what I needed to set, and kept trying to find this in the master of all things settings, only to hear that since the way we use this one is way 1 out of 4, I could not expect to find it there. I would just need to "know it". And that the backend system encodes this knowledge, so I would be better off using the system end to end.
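
To make that dependency concrete, here is a minimal sketch of the gating behavior I ran into. Only "feature_enabled" comes from the actual discussion; every other name here is an invented stand-in.

    # A sketch of the setting dependency; all names except
    # "feature_enabled" are hypothetical stand-ins.
    settings = {
        "feature_enabled": False,  # the mode setting I had to ask around for
        "new_setting": False,      # stands in for the setting I flipped
    }

    def effective_value(settings, key):
        # The setting only takes effect when the mode is on;
        # otherwise changes to it are silently ignored.
        if not settings["feature_enabled"]:
            return None  # behavior does not change
        return settings[key]

    print(effective_value(settings, "new_setting"))  # None, not False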

Instead of obeying, I worked on my model of the settings system. There are two things that are visible and two things that are invisible. All four are different in how we use them.

By finding the invisible and modeling it, I found something relevant.

It's not only what is there that you need to track; you also need to track what is there but isn't visible. The lack of something is just as relevant as the presence of something when you're testing.


Tuesday, August 14, 2018

Options that Expire

When you're exploratory testing, the world is an open book. You get to write it. You consider the audience. You consider their requirements. But whatever those are, what matters is your actions and choices.

I've been thinking about this a lot recently, observing some problems that people have with exploratory testing.

There is a group of people who don't know at all what to do with an open book. If not given some constraints, they feel paralyzed. They need a support system, like a recipe, to get started. The recipe could be to start with specifications. It could be to start with using the system like an end user. It could be using the system by inserting test values of presumed impact into all the fields you can see. It could be a feature tour. It could really be anything, but this group of people wants a limited set of options.

As we start working towards a fuller form of exploratory testing, I find we are often debriefing to discuss what I could do to start. What are my options? What is possible, necessary, even right? There is no absolute answer to that question, but a seemingly endless list of ways to approach testing, and intertwining them creates a dynamic that is hard if not impossible to describe.

I find myself talking about the concept of options that expire. When you're testing, there is only one time when you know nothing about the software: before you have started any work on it. The only time to truly test with those eyes is then. That option expires as soon as you work with the application; it is no longer available. What do you do with that rare moment?

My rule is: try different things at different times. Sometimes start with the specification. Sometimes start with talking to the dev. Sometimes start with just using it. Sometimes pay attention to how it could work. Sometimes pay attention to how it could fail. Then stop to think about what made this time around different. If nothing else, the intentional change makes you more alert in your exploration.
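
If you want to make the variation deliberate rather than accidental, you could even leave the pick to chance. A playful sketch; the starter list below is invented from the examples above:

    import random

    # Session starters, paraphrased from the list above.
    starters = [
        "read the specification first",
        "talk to the developer about intent",
        "just use the application",
        "pay attention to how it could work",
        "pay attention to how it could fail",
    ]

    last_time = "read the specification first"  # whatever you did last session
    pick = random.choice([s for s in starters if s != last_time])
    print("This session, start by:", pick)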

I've observed other people's rules to be different. Some people always start with the dev, to cut through the unnecessary into the developer intent. That is not a worse way or a better way, but a different way. Eventually what matters, for better or worse, is when you stop. Do the expiring options fool you into believing things are different than they are? Do they make you blind to *relevant* feedback?


Saturday, August 4, 2018

Test-Driven Documentation

I remember a day at work from over a decade ago. I was working with many lovely colleagues at F-Secure (where I returned two years ago after being away for ten years). The place was full of such excitement over all kinds of ideas, not only leaving things at the idea level but trying them out.

The person I remember is Marko Komssi, a bit of a researcher type. We were both figuring out stuff back then in Quality Engineering roles, and he was super excited, sharing about a piece he had looked into long before I joined. He had come to realize that in the time before agile, if you created rules to support your document review process, the top 3 rules would find a significant majority of the issues.

The excitement of ideas was catchy, and I applied this in many of my projects. I created tailored rules for requirements documents in different projects based on a deeper understanding of the quality criteria of that particular project, and the rule creation alone helped us understand what would be important. I created tailored rules for test plan documents, and they were helpful in writing project-specific plans instead of filling in templates.

Over time, it evolved into a concept of Test-Driven Documentation. I would always start the writing of a relevant document by creating rules I could check it against.

The reason I started writing about this today is that I realized it has, in a decade, become an integral part of how I write documentation at work: rules first. I stop to think about what would show me that I succeeded with what I intended, and instead of writing long lists of rules, I find the top 3.
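
To show the shape of the idea, here is a minimal sketch of top-3 rules expressed as checks against a user story. Both the story and the rules are invented for illustration; they are not rules from any of my projects.

    # Top-3 review rules as checks; the story and the rules are invented.
    story = ("As a returning customer, I want my saved address prefilled "
             "so that checkout takes less of my time.")

    rules = [
        ("Value: does it say who benefits and how?",
         lambda s: "so that" in s),
        ("Brevity: could I read it aloud in one breath?",
         lambda s: len(s.split()) <= 30),
        ("Testability: does it name something I can observe?",
         lambda s: "prefilled" in s or "shown" in s),
    ]

    for name, check in rules:
        print("PASS" if check(story) else "LOOK AGAIN", "-", name)

The point is not that a script replaces the review; writing the rules down first is what clarifies the intent.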

Many of the different versions I've written are somewhere on my messy hard drive, and I need to find them to share them. 

Meanwhile, you could just try it for your own documentation needs. What are the 3 main rules you review a user story with? If you find your rules to be:
  • Grammar: Is the language free of typos?
  • Format: Does it follow the "As a... I want... so that..." template?
  • Completeness: Does it say everything you are aware of that is in and out of scope?
You might want to reconsider what rules make sense to use. Those were a quick documentation of discussion starters I find wasteful in day-to-day work.

Tests first, then documentation. Clarify your intent.


Friday, August 3, 2018

Framing an experiment for recruiting

I was having a chat with my team folks, and we talked about how hard it is to feel connected with people you've never met. We had new colleagues in a remote location, and we were just about to head out to hang out and work together to make sure we feel connected. My colleagues jumped into expressing the faceless nature of calls, triggering me to share a story of what I did this summer.

During my month of vacation, I talked with people over video, each call 15 minutes, about stuff they're into in testing. I learned about 130 topics, each call starting from the assumption that everyone is worthwhile, yet we need some way of choosing which 12 end up on stage. The majority of them I talked with face to face, and I know from past years that for me it feels as if we really met when we finally have a chance to physically be in the same space.

This is the European Testing Conference Call for Collaboration process, and I cherish every single discussion the people have volunteered to have with me. These are people, and everyone is valuable. I seek stories, good delivery, and delivery I can help become good, because the stories are worth it for the audience I design for.

This story led me to say that I want to try doing our recruiting differently. I want to change from reading a CV and deciding if that person is worth talking to, into deciding that if they found us worthy of their time, they deserve 15 minutes of my life.

I've automated the scheduling part, and with the experience of 2 years and 300 calls, I know how to do this without it eating away my time. All I need is those 15 minutes. I can also automate the rules of when I'm available for having a discussion and leave the scheduling to the person I will talk to.

So with the 15 minutes of face time, what will we do? With European Testing Conference, we talk about talk ideas and the contents of the talks. In the recruitment process, we test with the testers and code with the programmers. And my teams seem to be up for the experiment!

My working theory is that we may be losing access to great testers by prematurely leaving them out. Simultaneously, we may be wasting a lot of effort in discussing whether they should make it further in our processes. We can turn this around and see what happens.

Looks like there is a QE position I could start applying this to next week! And a dev position we can apply it to in about a month.

Meeting great people for 15 minutes is always worth the time. And we are all great - in our own different unique ways.

Wednesday, August 1, 2018

Seeing what is Right when Everyone Seems Happy with Wrong

I'm uncomfortable. I've been postponing taking this topic up in a blog post, convinced that there must be something I'm missing. But it may be that there is something I'm seeing that many others are not seeing.

A few months ago, I volunteered at work to try out crowdtesting on my product. It wasn't the first try of crowdsourcing, quite the contrary: as I learned, we were already a happy user of one of those services for some other products.

The concern I had surprised everyone, providing a perspective no one else seemed to think about. I was concerned about whether using a service like this would fit my ideas of what is ethical. I was not focused on the technicalities and the value and usefulness of the service, but on whether using the service was right. Something isn't automatically right just because it is legal. Let's look a little more at where I come from.

I live in Finland, where we have this very strongly held idea that work needs to be paid for. Simultaneously, we are a promised land of non-profit organizations that run on volunteer work, with a huge number of such organizations in relation to the number of people. But the overall structure of how things are set up is that you don't have free labor in companies.

The unemployed and the trainees get paid for the work they are doing. And this forces companies to be on good behavior in the social structures and not leech off the less fortunate.

So if my company in general wants someone to do testing, they have three options.

1) Customers Test
They mask it as "no need to test" and force their end users to test, and pay for structures that enable them to figure out what a normal user is actually complaining about, and structures for fixing things fast when the problems hit someone who not only raises their voice but walks out with a relevant amount of money.

2) Someone in their employee list tests
They pay someone for some of the testing, and some companies pay testers to get more of the whole of testing done. If you tested half of what you needed, you still tested. And this is the narrative that makes the current "programmers test it all" trend discussions so difficult in practice. There's testing and there's testing. It takes a bit of understanding to tell the two apart.

3) Someone in a service provider organization tests
They pay a vendor to do some of the testing. The vendor is an abstraction, a bubble where you conveniently allocate some of your responsibilities, and in many cases you can choose to close your eyes at the interface level.

Crowdtesting belongs to the third option and, in my view, relies on closing your eyes at the interface level. Crowdtesting is the idea of paying a company for the service of finding you a crowd. They find the crowd by paying those testers, but I'm not convinced that the models of paying represent what I would consider fair, right, and ethical.

So, I pay a legitimate company 5000 euros for them to do some testing we agree on, under the label "crowdtesting". Yay! Is that really enough thinking on my part? You get 30 testers with that, so cheap! (The numbers are in scale, but not actuals.) They even promise most of them will be in Finland and other Nordic countries. If your alarm bells aren't ringing, you are not thinking.
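
Run those in-scale numbers: 5000 euros divided across 30 testers is under 170 euros per tester, and that is before the company in the middle takes its share of the cake. That is nowhere near what paid-for work costs in Finland and the other Nordic countries.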

The more traditional companies producing things like coffee or clothes know painfully well that this kind of thinking isn't enough. You really don't want your fashion brand to be associated with child labor, inhumane working conditions, or anything else nasty like that. Surely you can save loads of money using companies whose delivery chain hides the unethical nature of how the service is created, but you risk a reputation hit. Because someone, hopefully, cares.

Crowdtesting companies are not in the business of child labor, at least to my knowledge. But they are in the business of forcing my people (testers) away from being paid for their work, towards being paid for their results. And with the results being bugs and checkmarks in test cases, this actively generates a place where the folks in these programs are not necessarily paid for the work they do.

The way the monthly fee is set up for the customer makes this worse. There's a limit on how many of these paid checkmarks you can allocate in a month, but you're told you're allowed to ask for unlimited exploratory testing on top of that. The financial downside for the people doing the testing at the low end of this power dynamic is that they are only paid for the bugs the customer accepts.

Many people seem really happy with doing testing in these schemes.

The ones promoted to the middle management layer get paid for their hours, leaving even less of the financial cake for the low end of the power dynamic. But why would they care? They get paid.

The ones privileged enough to not really need a job that pays a salary get paid whatever, but since the money never mattered, why should they care? This is a hobby on the side anyway.

The ones living in cheaper countries getting paid the same amount as the testers from the Nordics may actually make a decent amount of money in finding problems, and might not even have a choice.

The really good ones who always find problems can also get paid, as long as the effort of finding something relevant doesn't get too high.

But the ethical aspect I look at is local. At the low end of the power dynamic are the testers kicked out of projects that claim to no longer need them, but actually do. The projects just no longer want to pay for all of their work, and they hide this fact in the supply chain.

Maybe I should just think more global. I don't know. But I am uncomfortable enough to step away. I don't want to do this.

With the same investment, I can get someone who works close to the team and provides more value. Without being exploited. With possibilities of learning and growing. Being appreciated.

And I say this looking from a higher end of that dynamic. I'm not losing my job in this shift.

I don't like crowdsourcing. Do you?


My first job and what has changed since

Falling for testing is a story I've shared multiple times in various places. It's not like I intended to become a tester. I had studied the Greek language on the side of high school, and as I was in my first years of University and Computer Science, someone put things together. They talked me into just trying the entry exercise of comparing the English and Finnish versions of Wordpad with seeded bugs, writing step-by-step problem reports. And when offered a job on the side, I just did not know how to say no.

Thinking back to that time, I had no clue what I was getting myself into. I did not know that falling for testing was like falling in love.

So I ended up testing Microsoft Access, the database-ish extension to the Office family - in the Greek language. As new testers, we were handed test cases used across multiple languages, and I think my office had four languages to test, Finnish being one of them. Looking back, I went to work whenever I had promised, did whatever was assigned to me, and probably did a decent job in following orders, including tracking the work and diligently comparing the English and Greek versions to identify functional differences to log as bugs. I remember the fear of "QA", which back then meant that some of the senior testers at Microsoft would sample some of our test cases and see if they found problems we missed.

I had a very nice test manager, and as I was generally interested in how programs work, I was allowed to do something called "exploratory testing". I had absolutely no guidance on how to do it. I was just told how many hours I could use doing whatever I wanted with the application.

Thinking back, I found myself stuck a lot. I had no strategies for how to approach it. I had a database project in mind, so I was basically implementing that, creating some screens. I wasn't particularly diligent in my comparisons to the English version here, unlike with the test cases. I had no idea how to think about coverage. With the information I have today, I know I did a bad job. I found no problems. I was handed a blank check and, for all I know, I could have used it for just sitting at the coffee table drinking something other than the coffee I never learned to enjoy.

Nowadays, when I'm handed a blank check like that (and I regularly am), I pay attention to the value that investment provides. I create coverage outlines that help me make sense of what I have covered and what I have realized I could cover. When I feel stuck, I decide on something I will do. I often find myself starting with tutorials or technical help documentation. I select something and figure it out. All of these are things no one told me to do back then.
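
For the Access of my past, such a coverage outline might have started like this (an invented illustration, not something recovered from back then):
  • tables: field types, validation rules, size limits
  • queries: building, saving, re-running on changed data
  • forms and reports: layouts, localized labels against English behavior
  • exchanging data with the rest of the Office family
Each line is a claim of something I could cover, and ticking and growing the lines keeps me honest about what I actually did.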

The pivotal moment between then and now is the time I first entered a project that had no test cases unless I created some. The change from a passive user of test cases into an active explorer is what sealed the love I still feel for testing.

The book I'm working on (https://leanpub.com/exploratorytesting/) hopes to capture some of the things I wish someone had taught me when I was new. It builds on the basics to take people closer to the testing I do now. That's the vision. Writing it down felt urgent enough to get up in the middle of the night.

Testing is the thing, not testers.