Saturday, October 13, 2018

Finding the work that needs doing in multi-team testing

There's a pattern forming in front of my eyes that I've been trying to see clearly and understand for a good decade. It is a pattern of figuring out how to be a generalist while a test specialist in an agile team working on a bigger system, meaning multiple teams work on the same code base. What is it that the team, with you coaching, leading and helping them as a test specialist, is actually responsible for testing?

The way work is organized around me is that I work with a lovely team of 12 people. Officially we are two teams, but we put ourselves together to have the flexibility to organize into smaller groups around features or goals as we see fit. If there is anything defining where we tend to draw our box (a box we are not limited by), it is drawn around what we call clients. These clients are C++ components, plus anything and everything needed to support those clients' development.

This is not a box my lovely 12 occupies alone. There are plenty of others. The clients include a group of service components that have since the beginning of time been updated more like hourly, and while I know some of the people working on those service components, there are just too many of them. As for the other components I find us working on, it is not like we'd be the only ones working on them. There are two other clear client product groups in the organization, and we happily share code bases with them while making distinct yet similar products out of them. And to not make it too simple, each of the distinct products comprises a system with another product that is obviously different for all three of us, and that system is the system our customers identify our product with.

So we have:
  • service components
  • components
  • applications
  • features
  • client products
  • system products
When I come in as a tester, I come in to care for the system products from the client products' perspective. That means that to find some of the problems I am seeking, I need to use something my team isn't developing to find problems in the things my team is developing. And once I find something, it really no longer matters who ends up fixing it.

We also work with the principle of an internal open source project. Anyone - including me - in the organization can make a pull request to any of the codebases. Obviously there are many of them, and they are in a nice variety of languages, meaning what I am allowed to do and what I am able to do can end up being very different.

Working with the testing of a team that has this kind of responsibility isn't always straightforward. The communication patterns are networked, and sometimes finding out what needs doing feels like a puzzle to solve where all pieces are different but look almost identical. To describe this, I set out to identify the different sources of testing tasks for our responsibilities. We have:
  • Code Guardianship (incl. testing) and Maintenance of a set of client product components. This means we own some C++ and C# code and the idea that it works
  • Code Guardianship and Maintenance of a set of support components. This means we own some Python code that keeps us running, a lot of it being system test code. 
  • Security Guardianship of a client product and all of its components, including ones we don't own. 
  • Implementing and testing changes to any necessary client product or support components. This means that when a member of our team goes and changes something others guard, we as a team ensure our changes are tested. The maintenance stays elsewhere, but all the other things we contribute.
  • End to end feature Guardianship and System Testing for a set of features. This means we see in our testing a big chunk of end users experience and drive improvements to it cross-team. 
  • Test all features for remote manageability. This means for each feature, there is a way of using that feature that the other teams won't cover but we will. 
  • Test other teams features in the context of this product to some extent. This is probably the most fuzzy thing we do. 
  • First point of support for all client product maintenance. If it does not work, we figure out how, and who in our ecosystem could get to fixing it. 
  • Releases. When it has all already been tested, we make the selections of what goes out and when, and do all the practicalities around it. 
  • Monitoring in production. We don't stop testing when we release, but continue with monitoring and identifying improvement needs.
To do my work, I follow my developers' RSS feeds in addition to talking with them. But I also follow a good number (60+) of components and the changes going into those. There is no way Jira could any longer provide me the context of the work we're responsible for and how it flows forward.
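As a minimal sketch of what following that many components can look like in practice, assuming the components expose their commit histories as Atom/RSS feeds and using the Python feedparser library; the URLs and component names are made up for illustration:

    import feedparser

    # Hypothetical commit feeds, one per followed component (60+ in practice).
    COMPONENT_FEEDS = {
        "client-core": "https://ci.example.com/feeds/client-core/commits.atom",
        "installer": "https://ci.example.com/feeds/installer/commits.atom",
    }

    def recent_changes(max_per_component=5):
        # Collect the latest entries of each component feed into one view.
        for component, url in COMPONENT_FEEDS.items():
            feed = feedparser.parse(url)
            for entry in feed.entries[:max_per_component]:
                yield entry.get("updated", ""), component, entry.get("title", "")

    # Newest changes first, across all components.
    for when, component, title in sorted(recent_changes(), reverse=True):
        print(when, "[" + component + "]", title)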

I see others clinging to Jira with the hope that someone else tells them exactly what to do. And in some teams, someone does. That's what I call my "soul-sucking place". I would be crushed if my work was defined as doing that work identification for others. My good place is where we all know the rules of how to discover the work and volunteer for it. And how to prioritize it, and what of it we can skip as low-risk because others are already doing some of it.

The worst agile testing I did was when we thought the story was all there is. 

Thursday, October 11, 2018

How to Survive in a Fast Paced World Without Being Shallow


As we were completing an exercise in analyzing a tiny application and how we would test it, my pair looked slightly worn out and expressed their concern about going deeper in testing: time. It felt next to impossible to find time to do all the work that needed doing in fast-paced agile, changes and deliveries, stories swooshing by. Just covering the basics of everything was a full-time job!

I recognized the feeling, and we had a small chat on how I had ended up solving it by sharing much of the testing with my team's developers, to an extent where I might not show up for a story enough to hear it swoosh by. Basic story testing might not be where I choose to spend my time, as I have a choice. And right now I have more choices than ever, being the manager of all the developers.

Note: the developers I have worked with in the last two places I have worked are amazing testers, and this is because I don't hog the joy of testing from them but allow them to contribute to the full. Using my managerial powers to force testing on them is a joke. Even if there is a little truth to it.

Even with developers doing all the testing they can do, I still have stuff to test as a specialist in testing. And that stuff is usually the things developers have not (yet) learned to pay attention to.

For browser-based applications, I find myself spending time in browsers other than the developer's favorite, and with browser features set away from the usual defaults.
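As a minimal sketch of what "away from the defaults" can mean, assuming Selenium with a local Firefox install; the preference values and the application URL are illustrative, not a recipe:

    from selenium import webdriver

    options = webdriver.FirefoxOptions()
    # Non-default settings a developer's own browser rarely has:
    options.set_preference("intl.accept_languages", "fi-FI")    # non-English UI language
    options.set_preference("permissions.default.image", 2)      # block images
    options.set_preference("network.cookie.cookieBehavior", 1)  # block third-party cookies

    driver = webdriver.Firefox(options=options)
    try:
        driver.get("https://app.example.com/")  # hypothetical application under test
        # ... explore: do layout, texts and functionality survive these settings?
    finally:
        driver.quit()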

For our code and functionality, I find myself spending time interrogating the other software that could reside in the same environment, competing for attention. Some of my coolest bugs are in this category.

For value lacking anywhere, I find myself spending time using the application after it has been released, combining analytics and production environment use in my exploration.

To describe my tactic of testing: I explain the overall coverage that I am aware of, and then choose my efforts in a very specific pattern. I first do something simple to show myself that it can work, to make sure I understand what we've built on a shallow level. Then I leave the middle ground of covering stuff to others. Finally, I focus my own efforts on adding the things I find likely that others have missed.

This is patchy testing. It's the way I go deep in a fast-paced world so that I don't have to test everything in a shallow way.

Make a pick and remember: with continuous delivery, you are never really out of time for going deeper to test something. That information is still useful in future cycles of releasing. At least if you care about your users.


Saturday, October 6, 2018

Time warp to the Principle of Opportunity Cost

This Friday marked a significant achievement: we had a 5-figure number of users on the very latest versions of the software we worked on, every single day. Someone asked about the time from idea to production, and learned this took us seven years. I was humbled to realize that while I had only been a part of two of those years, I had pretty much walked through the whole path of implementing, testing and incremental delivery to get where we were.

When I worked at the same company on sort-of-same products over 12 years ago, one of the projects we completed was something we called WinCore. Back then the project involved combining ideas of a product line and agile to have a shared codebase from which to build all the different Windows products. I remember frustrations around testing. Each product built from the product line had pieces of configuration that were essentially different. This usually meant that as one product was in the process of releasing, it would compromise the others' needs, for the lack of immediate feedback on what they broke.

Looking at today, test automation (and build automation) has been transformative. The immediate feedback on breaking something others rely on has resulted in a very different prioritization scheme that balances the needs of the still three products we're building.

The products are sort-of-same, meaning that while I last looked at them from a consumer point of view, this time I represent the corporate users. While much of the code base serves similar purposes for the users as back then, it has also been pretty much completely rewritten since, and does more than it did back then. A lot of the change has happened so that testing and delivering value would flow better.

Looking at the achievement takes me back to thinking of what the 12-years younger version of me was doing as a tester, compared to the older version of me.

The 12-years younger version of me used her time differently:

  • She organized meetings, and participated in many. 
  • She spoke with people about the importance of exploratory testing, with emphasis on the risks in automation and how it could fail.
  • She was afraid of developers and treated them as people with higher status, and carefully considered when interrupting them was a thing to do.
  • She created plans and schedules, estimated, and used effort to protect the plans with metrics. 
The 12-years older version of me makes different choices:
  • Instead of being present in meetings, she sits amongst people of other business units doing her own testing work for serendipitous 1:1 communication. 
  • She speaks for the importance of automation, and drives it actively and incrementally forward avoiding the risks she used to be concerned for. She still finds time to spend hands-on exploratory testing, finding things that would otherwise be missed. 
  • She considers fixing and delivering so important that she'll interrupt a developer if she sees anything worth reporting. She isn't that different from the developers, especially on the goals that are all common and shared.
  • She drives incremental delivery in short timeframes that removes the need for plans and estimates, and creates no test metrics.
Opportunity cost is the idea that when you're doing something, you are not doing something else. Your choices as an individual employee matter. What value you choose to focus on matters. You can choose to invest in meetings or in 1:1 communication. You can choose to invest in warning about risks or in making sure the risks don't materialize. You can choose to test manually or to create automation scripts.

Are you in control of your choices, or are someone else's choices controlling you? You're building your future today; are you investing in a better future, or just surviving today?



Wednesday, October 3, 2018

Chartering for Exploratory Testing

As exploratory testing is framed around learning and discovery, done by a person, it is unnatural to split it into test cases; instead we use time, often referred to as a session. Some folks suggest that a session (time-box) should be uninterrupted and focused, and that is quite natural given the learning nature of exploratory testing. If you find yourself distracted and interrupted, the likelihood of doing the same starting work many times and not making much progress is high. There are different ideas of what the uninterrupted time can be, and also of what types of interruptions matter so much that you need to break out of your reporting unit.

Some talk about doing a pomodoro - 25 minutes, referring to research on how we people can focus. Some talk about at most 2 hours. My personal preference is to deal with a unit of "days of work", or at most "before lunch" and "after lunch" half days, and mind a little less about the interruptions.

With the session as the unit, before going into that unit of time it makes sense to stop and think about what you would be doing. Since test cases make little sense, in exploratory testing we've come to talk about charters. A charter is an idea guiding you as you go into exploration. What would you try to do? What would you focus on? How would you tell if you're done as in task completed, or done as in time ran out?

Elisabeth Hendrickson proposed in her book Explore It! a template that is helpful in an agile whole-team-explores kind of context for sharing the ideas of what needs to be tested with charters. The template to help thinking is:
Explore . . .
With . . .
To discover . . .
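A made-up example of filling in the template, just to make it concrete:
Explore the application's deep-linked subpages
With expired and invalid login sessions
To discover how it behaves when authentication is no longer valid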
I’ve not cared much for the charter template, and rather than looking for a particular form of a charter, I rather think of the timeframe and the goal setting for myself. I have no issues with using a user story as my charter, or even using the same user story with the idea of paying attention to a particular perspective on consecutive sessions. A lot of times I cannot even say I have a charter for a specific session other than “get started with testing, figure out what you got done”.

Today, my team’s tester brought in a list of features and perspectives. They were not organized as charters, but it was clear that they could have been. But that would have meant fixing their ideas of how to combine them prematurely. Sometimes the need to charter (in writing) in agile teams creates this idea of “check this, done”, whereas each charter is an open-ended quest for information, and can and should both create new charters and transform older charters into something better using the learning the testing gives.

If I write charters, I write one for each who is testing, and debrief to create the next ones after the first ones are completed.

A lot of times I don’t need to share charters when exploring with others. I need to share questions, ideas of documentation (automation), and bugs.

There is a problem before chartering where a lot of testers stumble, as per my observation: having the skills to generate versatile ideas. I was watching a candidate for a job today test in front of my eyes, and I was slightly surprised at the low number of ideas they would consider given an application, expecting a specification to prompt them for all things relevant. At the best of times, a spec exists and is useful, but it is never complete. Charters are only as good as the ideas we have to put into them. 

Deep Testing and Test Levels

Back in 2002, I wrote an article for an academic conference that basically centered around the idea that test levels (as they were mostly taught then, without a "test automation pyramid"), while not time-based, are useful in agile. These days I rarely speak of this idea any more, but it is a foundation I speak from.

I came back to think about this after my Deep Testing post a few days ago, as Lisa shared:

Since I have written about the very same levels, I felt like expressing how I model test levels as a very different idea than the depth of testing. Depth works as a synonym for quality of testing - "bad quality" = shallow, "good quality" = deep - and for multi-dimensional coverage. Levels as a concept is, for me, both more shallow and serving a different purpose.

Levels of testing tell me that as an observer of testing, there is a helpful set of glasses I can wear to notice information about the system. Looking at the details of a leaf in a tree, it may be hard for me to appreciate what makes up the tree and why it matters, or how trees make up a forest, or how forests belong to the world as its lungs. Looking at things on different levels leads me to generate slightly different ideas. I may or may not act on those ideas. I may or may not recognize that those ideas even exist.

That is where depth comes in. If I don't have the skill to use the heuristic of levels to see things, my testing, even if it happens on all of the different levels, is shallow. It finds easy-to-spot bugs that I'm ready to spot with the learning of the system I have done so far.

Depth speaks about my perception of the trustworthiness of the testing performed. Shallow is testing that you perform with your mind's eye more closed, with a single heuristic applied and without doing complex modeling on multiple dimensions. Deep is testing that finds more of the important things: things that are not straightforward, things that are not just stuff users find when left alone, but that users trip on when you watch them using the system, not even understanding they could be asking for more and better. Deep testing is for the problems where your system is down for 5 minutes and everyone just accepts that, because no one can reproduce how you get there and no one even knows to do anything to recover from the problem. Users just know to go for coffee when that happens.


Tuesday, October 2, 2018

Finding bugs serendipitously

Serendipity means 'lucky accident'. As I spoke of doing shallow exploratory testing, a colleague expressed their fear about finding all their bugs serendipitously.
"I feel like most of my bugs are serendipitous, and that concerns me."
I wanted to share a story, and a perspective.

As I joined a new job, the one before this, I was determined to do hands-on, good-quality testing in my first week at the new job. I've had the experience of joining companies before, where I find myself being trained into the company without actually doing any of the work I was hired for in the first weeks. And I wanted things to be different. I wanted the old saying that "people take months in a new job before they are productive" not to be true, and I set that out as my goal.

As I arrived at the office, they gave me access to the system I was to test. I had barely gotten my computer open, logged into the system with my credentials and bookmarked the page to remember where the system was, when I was dragged into a four-something-hour meeting spree where they poured information into my head that I have absolutely no recollection of.

In the afternoon, I returned to my computer with the original determination, and I opened the application only to see a big visible crash.


I had done NOTHING. No use of my brilliant testing skills. Very very shallow testing at best. Anyone would see this problem. Except they did not.

I had serendipitously found a bug in linking to one particular subpage of the application (I had managed to click ONE thing after logging in before bookmarking it) that crashed when the login was no longer valid. When we investigated the bug with the developers, we learned it was also the ONLY subpage of that type. I honestly got lucky that day, but over time I would have increased my likelihood of running into this, with the ideas to do exactly this in my control because of Elisabeth Hendrickson's cheat sheet. 
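That lucky find is also the kind that can be turned into a repeatable check afterwards. A minimal sketch, assuming an HTTP application; the URL and cookie name here are made up for illustration:

    import requests

    SUBPAGE = "https://app.example.com/reports/weekly"  # the bookmarked deep link

    def test_expired_session_fails_gracefully():
        # Simulate a login that is no longer valid.
        stale_cookies = {"sessionid": "expired-or-garbage-token"}
        response = requests.get(SUBPAGE, cookies=stale_cookies, allow_redirects=False)
        # Expect a redirect to login (or 401/403), never a server crash.
        assert response.status_code in (302, 303, 401, 403), (
            "stale sessions should redirect, not crash: got %s" % response.status_code
        )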

A lot of the depth in testing comes with skill, and knowing how to exercise a variety of ideas. But much of it also comes from serendipity, combined with recognizing problems when you see them (a skill!) and just sticking with the applications longer. 

Serendipity sounds like mere luck, but it is a particular kind of luck combined with skill and perseverance. 

Monday, October 1, 2018

Deep Exploratory Testing

There's a famous saying by Linus Torvalds:
Given enough eyeballs, all bugs are shallow. 
Crowdsourcing references often like to quote this, pointing out that of the bugs we could find in testing, the users in production, over the masses, end up finding all the relevant ones, even if they do not report them. A crowd could do well in hitting a bunch of bugs.

For the purposes of me doing and guiding exploratory testing, I find it really beneficial to think in terms of shallow vs. deep testing. Shallow testing can be done with fewer skills and less time. Deep testing requires more skills, more insights, and a foundation of learning that is built in layers and requires time.

Many people find that agile somehow guides them to only doing shallow testing. They feel their testing is always squeezed to the end of the sprints, so that the development schedule is flexible while the testing schedule is fixed. However, they may fail to see the opportunity of testing continuing after the release, focusing on going deeper.

Shallow testing finds shallow bugs. Shallow bugs are easy to find; they are obvious and would become a problem in production immediately. Deep testing finds deep bugs. It may lead us to shallow bugs that just take a bit more effort to see, combinations and conditions that take time to set up. But it may also lead us to bugs some don't consider bugs: things that threaten the value of the product, things that should be different to be better.

Going deep happens in layers. You don't repeat the same, but you go further, deeper. You start before implementing. You continue while implementing. You don't stop for releasing. You don't have to, because you are not on a project. Agile made it a continuous process where there is no end.


Sum it up, and it totals to deep testing. Miss the skills, and all you get is shallow. The additive way of doing testing is not regression testing. It is finding new perspectives, and exploratory testing is the core practice in doing that. 

Saturday, September 29, 2018

Outing a Skeleton to Reclaim Control

When I told a friend I had allowed a boyfriend to take nude images of me, they were visibly worried. They recounted stories of how it had turned bad, and I brushed it off without paying attention. I was an adult, in a committed consensual relationship that involved him traveling a lot. I took the chance to make someone I really cared for happy. And he was happy.

He reinforced more images and texts by being happy, by making me feel he appreciated me writing sex stories of our time together. But he also reinforced this with messages of how hard being monogamous was for him, and how important it was that he would be getting what he needed. And I did my share, writing over 50 stories.

As years passed, we grew apart, and I could no longer cope with the way he left me feeling: blackmailed, on the edge of receiving a call in the middle of the night about "following his heart". When I finally got the call I had been afraid of all these years, I realized my fear had become a self-fulfilling prophecy. His heart had found someone new, and I was free of the nagging feeling that this would happen one day. Like so many others, I realized we were never right for each other - our values were different. 

With the change of status, the materials I had created for in-relationship entertainment became an issue. I asked him to delete them. He refused. I asked again, over and over. He refused. 

For a few hours, I asked Twitter, until a friend contacted me to take the tweet down. She said it was hurting my professional image. It wasn't helping either; my ex was not budging on the deletion. Like he said of his worst quality: "I'm right, a lot". He wasn't, but he would remain stubborn. 


For a month and a half, I have been trying to convince him, without success. Instead, I've successfully turned the feeling of powerlessness into nightmares. After long consideration, I'm outing this skeleton in an attempt to reclaim control. 

In any sexual relationship between two adults, consent plays a role. I have expressed in so many words that he does not have my consent, now or ever again, to itemize me for his sexual pleasure. There's plenty of porn out there where consent is available; this material needs to be deleted. 

I'm not comfortable with the risk of revenge porn even if I believe he would not do it intentionally. I'm not comfortable with him ever using me as an item for his pleasure, and he should know that from the fights that lead to us not being a couple. 
My consent is mine to give, and mine to take away. He does not have it for use of this material.
I'm not comfortable with this material being a skeleton in my past, haunting my future. It exists. I've said it. But it only exists with him, and only because he refuses to delete it.

I would hope people who know Llewellyn Falco would manage to talk sense into him. I clearly don't manage to do so. His parents don't manage to do so now that he is an adult who is "right" even when he isn't.

UPDATE Oct-10th: Llewellyn confirmed he has deleted the materials. Thanks for support everyone. 

I've investigated my options legally, but unfortunately I'm not living in a country that has yet understood the idea of pre-emptive decisions on revenge porn. The existence of this material threatens me professionally and privately much more than reclaiming my control over this information does. 

When I talk to someone like me, like my friends talked to me years ago, I have a personal experience to share on why you should *never* create material like this. Love fades. And that kind of material, intended for the relationship, may not be part of the contract to clear when the relationship ends. 

This material hurts women disproportionately. 




Experiment away: Daily Standup Meetings

In agile, many of us speak of experiments. There's a lot of power in the idea of deferring judgement and trying things out, seeing what is true through experience rather than letting our mind fool us with past experiences we mirror onto what could work.

Experimenting has been my go-to way of approaching things. The best thing to come out of it for me has been mob programming: a talk four years ago by Woody Zuill introduced something I intellectually knew I would hate to do, and it ended up transforming not just my future but the way I see my past. Through cognitive dissonance - the discomfort in the brain when your beliefs and actions are not in sync - my beliefs got rewritten with mobbing. If I was ever asked for my top three pieces of advice on what to do, experimentation would be on my list, as well as not asking for permission and stopping list-making to actually get stuff done.

Experiments with practices and people are not really very pure; they are more like interventions. The idea of trying before judging has power. But we keep thinking that for the same group, we could try different options without options expiring. The reality, however, is that when we do one thing, it changes the state of our system of humans. They will never be as they were before that experience, and it closes doors. And as one door closes, another one opens. An experimentation mindset moves us into states that enable even radical changes, when the state is right for that transition to happen.

My internal state: Dislike of meetings

Four weeks ago, our team had just received three new members all of a sudden. Our summer trainee had changed into a part-time appearance as school required their attention. My colleague appeared to feel responsible for helping people succeed, so they turned to their default behavior: managing tasks in Jira, passing their information on in writing, and turning half of my colleagues into non-thinking zombies working in the comfort of "no one wrote that in the Jira ticket". We talked about our challenges, and they suggested we bring back daily meetings. I recognized my immediate strong negative response and caught it to say: "Yes, we should experiment with that".

I left the discussion feeling down. I felt like no one understood me. I felt like the work I loved was now doomed. I would have to show up in a standup meeting every day; after all, I was the manager of the team. I would see myself stopping work half an hour before the meeting so as not to be late (being late is a big personal source of anxiety) and see again how a regular meeting destroys my prioritization schemes and productivity. With it in the middle of the day, I would again start working on a schedule that is inconvenient to me, just to make space for uninterrupted time.

I knew I had worked hard to remove meetings from my job (unlike most managers around me, where meetings are their go-to mechanism for doing their work) and now the new joiners were forcing me to go back to a time I was more anxious and unhappy.

It's Just Two Weeks

Framing it as experiment helped me tell myself: "It is just two weeks, you can survive two weeks."

I sent the invites to the agreed time, showed up every day, tried hard to pay attention to sharing stuff of value and focused my energies in seeing how others were doing with the daily meetings.

I was seeking evidence against my strongly held stance.

I learned that there are many different negative reactions daily meetings can bring forth.

  • Some people share every bathroom break they took and every trouble they ran into, and focus on explaining why it is so hard for them to make progress. 
  • Some people come in with the idea of time-boxing and always mention two things. You have to say something, so they choose something they believe they need to say.
  • Some people report to others, some people invite others into learning what they've learned, others pass work forward. 
  • Some people have a low idea of their contributions and frame it by saying things like "I tested since yesterday, I will test more by tomorrow" - day after day. 
  • Some people collect the best of the last day to share in the meeting, and hold on to information for that meeting instead of doing the right thing (sharing, working with others) when they come upon the information. 
  • Some people are happy with the meeting because they have NEVER tried any other options: pairing, mobbing, talking freely throughout the day, having a culture where pulling information is encouraged to a level where your questions always have the highest priority. 
Only one of us expressed that they liked the meetings after two weeks. We did not make them work really well. We did not find a recipe that would bring out the best of our collaboration in those meetings. I could not shake the feeling that I was drowning in my "soul-sucking place": agile with rituals, without the heart. So we agreed to stop. 

Things already changed

We could say the experiment failed. FAIL as in First Attempt In Learning. It did not stick. But it changed us. It made visible the problem some people were feeling: that we did not talk about the right stuff and help our newbies. It changed the way people now walk over to one another, giving us a way of talking about the alternatives to a daily meeting. It changed how the team shares their insights on our Teams channel. 

Things were better, because the experiment opened us up for other possibilities. 
Experiments in agile teams are not really experiments. They are interventions that force us to change state.
So, Experiment Away! We need to be interrupted.

Friday, September 28, 2018

Managing Testing based on Threads

Session-Based Test Management is a form of managing testing based on sessions: time-boxes in which we could ideally maintain focus, have a clear charter, and generate some metrics by counting things around the time-boxes and their results. I believe it is not one practice but many; way too many ways of managing testing based on sessions get bundled with the original description of a technique that is very specific.

Later on emerged the idea that assuming you could maintain focus for a time-box may not always hold. You might be interrupted, for various reasons. You may be jumping between this thing and that thing, and that way of managing testing was dubbed Thread-Based Test Management.

Here, I wanted to describe one very recent experience from my work, that utilized managing testing based on threads, and some observations around how we ended up organizing and what unfolded in the activity.

Yesterday, we were wrapping up a release. Making a release is in general still a bit of an effort in my team, even if we have moved from it taking a week to it taking a few hours. But yesterday's release was special. It was a release we were shaping together to introduce two major product upgrade paths, each with its own logic, its own risk of dependencies in a complex environment, and its own need for many pairs of eyes in verifying the functionality, each configuration and environment, and the problems we might be experiencing and resolving on the go.

We had strategized on a whiteboard the day before, in a session I considered magical. The three people around the whiteboard built on each other, added knowledge, clarified priorities and made an action plan we were ready to execute. The overall work would be done by eight people, which means a bit of coordination is required unless we somehow magically dance just right together.

In the morning, we got the build we thought could be the one. We knew there were many things to do, and started a discussion thread - one single thread - in our Teams chat. It did not take long for it to grow to hundreds of messages, and for confusion to emerge on who was doing what, who was talking about what, what was really done and what was just a miscommunication. A colleague jumping into the discussion at some point exclaimed about the awfulness of mixing it all in there, and while I and the two others from around the whiteboard could keep track with the model inside our heads, the other five people and everyone watching were in pain.

Seeing it did not work, we came to two immediate solutions.

First, a Jira ticket got created by one of us, who announced they would now track the work there because the discussion was awful. They wrote down 13 steps to making the release, mentioning the two upgrade paths as one step each.

At almost the same time, I pulled the intertwined tasks apart and introduced them as threads: things we'd try to drive through, each named with some idea of its priority and status.

  • Thread 1 was about "release as usual" - all the moves we had done and practiced for all the previous releases and it was just business as usual. 
  • Thread 2 was about the most business critical of the two upgrade paths, and we had not set up the environment to be able to test that at all.
  • Thread 3 was about the second upgrade path, where we had just identified a blocking bug that needed addressing before it would make sense to finalize it.
  • Thread 4 was a surprise path of fast forwarding thread 3 in a specific way
  • Thread 6 was doing thread 2 in production environments
  • Thread 7 was doing thread 3 in production environments
Half a day later, we added thread 5 (because, just for the fun of it, of course someone needed to make a joke of an off-by-one error) on adding test automation for threads 2 and 3, and not accepting that we would ever again have to do this stuff without the help of some tooling. 
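A minimal sketch of that thread board as data, keeping goal and status visible per thread; the statuses here are illustrative, not what we actually recorded:

    from dataclasses import dataclass

    @dataclass
    class Thread:
        number: int
        goal: str
        status: str  # e.g. "in progress", "blocked", "done"

    board = [
        Thread(1, "Release as usual", "in progress"),
        Thread(2, "Business-critical upgrade path", "blocked: no test environment"),
        Thread(3, "Second upgrade path", "blocked: bug to fix first"),
        Thread(4, "Fast-forward variant of thread 3", "in progress"),
        Thread(5, "Test automation for threads 2 and 3", "not started"),
        Thread(6, "Thread 2 in production environments", "not started"),
        Thread(7, "Thread 3 in production environments", "not started"),
    ]

    for t in board:
        print("Thread %d: %s [%s]" % (t.number, t.goal, t.status))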

Teams wasn't making the thread management that easy, jumping focus wherever someone typed in something, and with our inability to use Teams in anything but one long thread, we did not always trust that a comment hit the right thread. But they did, and we clarified and worked through each thread. The visibility per thread enabled people to call out for help in a more specific way, identify problems with each of the separate goals we were working towards, and make priority calls on what we'd do first and later, as this was changing heavily while we identified and investigated problems.

The Jira ticket looks clean, as if one person did it all. The threads in the discussion enabled us to start with a task that we were discovering as it was happening, and they coordinated many people pitching in information from test results, investigations, availability of fixes, plans for next steps, and the definition of being done with each of the threads. 

I wanted to share this experience as a reminder that threads may be a thing you want to visualize to share the load. Uninterrupted time is not something to use on all tasks. Sometimes threads are the best thing you can do. And they may enable discovering the work that really needs doing, whereas a Jira ticket gets people to deliver what was asked and forget to discover the many things that are implicit. 


Wednesday, September 26, 2018

Navigating in the wrong level of abstraction

It was a big agile conference in the US, and I participated in a session with a demo of mob programming. Demo meant the facilitator of the session called for volunteers, and of the 5 people that ended up in front, I was one.

The group of 5 was two women and three men. It wasn't my first time mobbing, it wasn't my first time mobbing on the particular exercise and I was looking forward to seeing how the dynamics with this team of random people would turn out.

The exercise we were doing followed a routine of example to English, English to test code, test code to implementation.

I remember that session as awful, but I have not talked about how awful it was, because I was told back then that I just did not understand it right. So today, I wanted to write about the mild gaslighting that happens when someone is seeking acknowledgement for feelings you think they should not have felt.

The other woman in the group went before me, as both navigator and then driver, and made it very clear to everyone that it was her first time programming. Mob programming is great in the sense that this is possible, and in addition to remembering her emphasis on not knowing a thing, I remember the joy she felt and expressed on having contributed to creating something. As a total newbie, the group navigated her at the lowest possible level of abstraction. We'd tell her letter by letter what to write, not using conceptual language at all.

So when the next two in the rotation after her were men, the difference between how people navigated her and how they navigated the men was very obvious.

My discomfort came from being bundled together with the other woman. As far as I can see, I did none of the "help me, I know nothing" mannerisms. I sat with the crowd, navigated with the crowd, knew what I was doing. And yet the moment I sat in the driver's seat, I was navigated on keystrokes, no concepts. I felt humiliated and paralyzed into a role I hated. The mob navigated me on keystrokes, and I was unable to correct them. I was holding myself together so as not to ruin things for everyone else.

As soon as the session was over, I expressed my emotions to the facilitator, who told me it was all my fault. I would have needed to assert myself by saying "just tell me what the thing is you want me to do". I was supposed not to feel like I had just been treated not as a programmer but as a woman trying programming with a mob, bundled together based on my gender with the other "representative". I missed out on the joy she was feeling, so we were clearly individuals regardless of the bundling.

My lesson from this is that to avoid offending people in ways they remember years later, you may want to always err on the side of navigating at an abstraction level that is too high rather than too low. Tell the intent, and if you see no movement, then tell location and details. If you jump directly to the details, there's a chance you're explaining things your driver won't need, introducing inefficiency as well as making the other feel like they've been weighed on their looks and found not worthy.

It would have felt a lot better if the facilitator had at least acknowledged my feelings. But as usual, it was a problem with me: my feelings were wrong, as a result of my (lack of) actions.

I rarely volunteered for mobs after this, unless they would run a long time, unless I was the only woman, or unless the other woman was even more of a programmer in her manners than I was.

Thursday, September 20, 2018

Pairing and Mobbing are not the only ways to collaborate

A friend I enjoy connecting with on testing themes sent me a message today. They expressed that they had enjoyed my pair testing article on Medium, and had an idea they wanted to contribute about a type of testing I had not addressed.

This was a form of collaboration where two people (a pair, you may say) worked on two computers, side by side, on the same task. The two might be intertwined in the activity so that one looks at one end, the other at another end. And it is very much collaborative. They suggested this would be a model of pairing with two drivers, both with their hands on a keyboard - a separate one, though.

In my article, I introduced traditional-style pair testing, where often the navigator would be making notes, deciphering what was going on with the driver who had control. Nothing says the second person couldn't be taking those notes on a computer, but what typically happens is that the notes, in comparison to strong-style pair testing, are of a private nature and don't always reflect what was going on in a manner that is shared between the pair.

Similarly, here with two computers and testing side by side, we can figure out many ways to still work in a very collaborative manner.

I find myself often testing in ways where there are two or more testers in the same space, at the same time, sharing a task while we are still not mobbing or pairing. It's more about sharing energy and enthusiasm, giving hints of things others can pick up and run with, and continuously coming back together to understand where we are.

I tested that way today with a developer, over a Teams chat. We would call out our progress and observations, pondering things that did not work out quite as we thought. The chat version works when people are active in sharing. I absolutely adore this dev for the way he is transforming the team from his position of seniority, driving collaboration without pairing forward.

In the past, some of my most fun days have been when we work side by side with other people, testers and developers, growing ideas and solving problems. While formal pairing and mobbing are great and give a structure, the other ways of working together also beat "alone in a corner" any day. Some tests require many computers, like the test at the Finnish Parliament where they had to bring in 200 people serving with the Finnish military forces to control 200 devices for one test to be completed.

No matter what, we should be learning and contributing. Work is so much fun, and it makes no sense to dumb or numb it down. 

Wednesday, September 19, 2018

Forgetting what Normal Looks Like

Today I reached two years at my current job, which I still very much love. There have been mild changes to what I do, with my title changing from "Lead Quality Engineer" to "Senior Manager", which shows up mostly in the whole team volunteering more easily to do testing tasks and enable testability, without me asking any differently than before. There are a few reasons why I love my job that I can easily point to:
  • We have team ownership of building a better product. No product owner thinks for us, but some business people, sales people and customers think with us. From an idea to implementation and release, we've seen timeframes in days which is very unusual. 
  • It's never too early or too late to test. Choosing a theme to give feedback on, we can work on that theme. With the power in us as a team, the tester in me can make a difference.
  • We can do "crazy" stuff. Like kick out a product owner. Like not use Jira when everyone else worships it. Like pair and mob. Like take time for learning. Like have a manager who closes eyes to push the stupid approve button that is supposed to mean something other than "I acknowledge you as a responsible smart individual who can think for themselves". Like get the business managers to not book a meeting but walk in the room to say hi and do magic with us. Like pull work, instead of being pushed anything. 
  • We are not alone, not stuck and generally people are just lovely even if there is always room for making us flow better together.
Living a life on the edge of what some call "crazy" is fascinating when people don't all have the same past experiences. In particular, in recent months we have had new people join who have brought in their past experiences from very different ways of working: with daily meetings, detailed Jira tickets, thinking for the developer, etc. 

I've been experimenting with finding a better, more fun way for so long that I am starting to forget what normal looks like. Today I found some appreciation for it. 

At almost two years in the job, I finally started a "Jira cleanup" about two weeks ago. I edited our team query to define what belonged to us: anything marked for the team, and anything marked for any of us that belonged in the team. All of a sudden, the few who had cared for the list based on their past experiences realized that the list was much more significant than they had thought. About 120 items. We called on an old rule: all other work will wait until we are below 50. We didn't get to it, though. Some people cleaned some things up. Others were busy working on other things. 
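For the curious, a sketch of what such a team query can look like in JQL; the project key, group name and team field are hypothetical placeholders, not our actual setup:

    project = CLIENT
      AND (assignee in membersOf("our-team-group") OR "Team" = "Our Team")
      AND resolution = Unresolved
    ORDER BY updated DESC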

Instead of clearing the personal lists one-on-one, we called a workshop to share the pain of cleaning the lists. I had no idea what I might learn from the workshop. 

I learned that half of the people had not used Jira beyond the "look at this individual ticket" use case. Seeing what was on the whole team's list - a new experience. Seeing that you can drag and drop items like post-its - a new experience. Moving a case into a state, or assigning it to a person - a new experience. 

Even with the Jira avoidance I advocate (fix and forget immediately, track themes on a physical whiteboard), I had not come to understand that I might have missed sharing a basic set of skills with a group of people coming to us from elsewhere, with other expectations of what it would mean.

A healthy lesson in remembering that what is obvious to me may not be such to others. And that building complex things on lessons that are not shared might make less sense to those with less experimentation experience under their belt. 

Saturday, September 15, 2018

Attribution for Work Performed in a Mob

In 2017, I delivered a keynote at StarWest, talking about Making Teams Awesome. One of the main themes I talked about back then was a theme I had put a lot of effort into thinking about: attribution. There are many things I have problems with around this:

  • Women's contributions get disproportionately dismissed - not just ideas, but also the work they do - and attributed in memories to other people. 
  • Group work is different from individual work, and my needs of attribution can get in the way. 
  • Ideas are not worth more than execution, especially in software development. 
  • Attribution should not be a blocker for communication of ideas, a new form of apology language required disproportionately from women. 

In June, I used the phrase "invitation to explore" in a conference talk as one of *my points*. In speaking, I mentioned a colleague I respect as someone who uses that phrase. Meanwhile, he took offense at me using those words without writing his name down. My conclusion: fuck the language police, I will never again even mention him with this phrase. The words are free, and I live that idea on a daily basis.

Caring for Credit

We all care for credit, I would claim. All of us. And some of us, like myself, care deeply for the credit of others too. Yet, credit is a messy and complicated theme.

We all know the name Albert Einstein. The fellow who pretty much created the foundation of modern physics. The guy whose picture on a slide everyone recognizes, even though the pictures are from the very early days of photography. The man has been dead for a while, and yet we know him. Probably for generations to come.

But we don't know who Mileva Marić Einstein was. We can deduce a lot from the last name and recognize that there is a relationship there. She was a physicist who, due to her gender, had trouble getting an education in physics but went through it anyway, and contributed significantly to Albert Einstein's groundbreaking science. History forgot her, partially because of the rules of a society that required removing her name from the scientific articles she worked on as they were prepared for publication.

Then again, we know Marie Curie. Another brilliant scientist of that time. Someone who had a partner who refused to let her credit stay out of the limelight. So credit is something to pay attention to. It defines history. It matters.

It matters to people so much that for a long time, we had to wonder why there were no baby dinosaurs. They had been misidentified as other species, for reasons of credit: 5 out of 12 dinosaur species identified were not new discoveries but juvenile versions of ones found earlier.

Credit in a mob

With a personal history of protecting my credit heavily as a necessary self-preservation mechanism, this was a difficult one for me when I started working with mobs. I would see my contribution, and see it be forgotten in a retro less than an hour later! With a feeling of safety, I could bring out my own contributions in the retros, have them discussed, and get back to being recognized. I could write about them in my blog while everyone else dismissed them. But in many relevant ways, I had to learn that
The Best Ideas Win When You Care about the Work over the Credit. 
I needed to let go. But so did everyone else.

When a group of people works together, it becomes difficult to identify what was the contribution of each of the members. Like with baking a cake, you cannot identify if it is the flour, the sugar, the eggs or the vanilla sugar that makes the recipe just right and perfect. People inspire one another, build on one another and create together things they would not create alone.

Also, mobs had visitors. A mob was a living organism where people would come and go. It wasn't as simple as that. There would be credit for being there when it all started. There would be credit for visiting, short- or long-term. How do we credit things that are done collaboratively?

Visiting shouldn't infect other people's contributions into "they are now all someone else's". We need to care and be fair.

The Mob Programming Guidebook

In 2016, I registered a book name on LeanPub: Mob Programming Guidebook. I set up a GitHub project to write it. The first rough version was scheduled to be available for a conference session I was invited to do.

For all of this, I paired with Llewellyn Falco. We wrote in strong-style. We argued over words, and ideas. Our collaboration worked while it worked, and then the book was in hibernation for a while, because we couldn't work together. We couldn't work together to a point where it became evident we would never again work together.

With the book 30% written, we needed to go our separate ways. In the split, both of us get to keep the results of the collaboration, but neither of us can block the other from taking it forward. Claiming equal authorship for work we are not doing equally makes no sense. So I added 25% more text to match my vision for a still unfinished book, initiated all the work I had been postponing on making it the full book it was intended to be, and moved Llewellyn from second author to contributor to reflect his role in the book. The book is still not done. There's significant work ahead of me, including refactoring a lot of the paired text toward the vision I aspire to.

Books can have many authors, even if one wrote most of the text. But that would mean I believed the book wouldn't exist without their support and contributions, and looking into how I feel, I honestly cannot say that. The book wouldn't exist without meeting Woody Zuill. The book wouldn't exist without me. But the book would exist without Llewellyn Falco, and does exist without him.

His role in the Mob Programming Guidebook is only a little more than his role in all mob programming books. Mob programming grew out of his ideas around strong-style pairing. He has the choice of writing a full book that describes his ideas, but I suspect he will not.

I want to acknowledge that for the 30% of the book, we inspired one another to create what is there. The foundation is co-created. But the book I work on going forward, alone, is not the foundation. It is more. It's founded on all the experiences of leading and participating in mob programming sessions, and my discussions with people who care for that activity just as much as I do.

The book is more than the text - because he has the text too, should he choose to fork it as suggested. It's the site. It's the promotion. And it is the text that is still to be written by its authors, the text that I am writing on top of the text I had been writing. I acknowledge the collaboration, but refuse to minimize my role to the past text, leaving a project I started in an unfinished state.

So when Llewellyn calls for help with a tweet, he is missing a big chunk of the picture. He is missing the fact that he is asking to be an equal author of a book that isn't yet written, because he was once part of a collaboration writing some parts of early versions. He is asking that his visit infect the rest of the contributions. And that is not how attribution in a mob should work.

When someone sympathetic to Llewellyn then responds, I disagree. I did not make this discussion more difficult. I made it clearer. Visitors don't get to claim rights to future work. Attribution of realistic contribution is necessary. Forking the text we wrote together is acceptable. Asking others in the mob to start over afresh because you visited isn't fair.

My sense of justice says I do the right thing.

The book has 1,008 readers, of which 281 have paid for it and 1 has claimed their money back (the book was not finished). We have received $1,240.24 in royalties. Since Aug 18th, I have received all the royalties, because that was when the book forked. The money does not come from writing the book but from promoting the book. From this "huge" financial gain, 80% previously went to Llewellyn, first to pay back an investment we chose to make in printing early versions of the book.

I have explored my options. I chose not to remove him completely. I mention him, as is fair. But he is not a second author of the book. I sympathize with his need to be one, but he isn't.

Attribution for work performed in a mob is hard, because we want to be remembered and appreciated. I appreciate Llewellyn's contributions while sticking to the fact that my contributions are far greater, and handing my future contributions to him as well, just because he wants to be an author, is not a fair thing to ask.

Authorship is not born when you start a project. Ideas without implementation are not where attribution should be born. Authorship is born when you work on a project and finish it. If you want to say that you created X, be more than a visitor in that mob. Make space to stick around to completion. And even when you don't, they'll appreciate your visit, because the end result is better through that visiting collaboration.

Looking forward to other visitors in finishing the book.

Wednesday, September 12, 2018

Devs Leading the Testing I Do

Dear lovely developer colleague,

I can see what you are trying to do. You try to care about testing. And I appreciate it, I really do. But when you try to care about something you don't really understand well yet, it creates these weird discussions we had today.

First I see you creating a pull request, casually mentioning how you fixed randomly failing unit tests. That is lovely. But the second half of the sentence made no sense to me. While you were fixing the randomly failing unit tests, you came to the conclusion that rewriting (not refactoring, but rewriting) a piece of logic that interfaces with the system in a way unit tests would never cover was a good idea. I appreciate how you think ahead and care for technical quality. But did you at least test the feature in the system context, or did you just rely on your design working after the rewrite, based on the unit tests?

Then I initiate a constructive discussion on how our system test automation does not cover the functionality you rewrote. I'm delighted by another developer colleague pitching in, volunteering to add that to the test automation right away. For a few hours, I'm walking on happy clouds over the initiative. And then I talk to you again: you tell me that while the colleague had already started, you felt the need to write them a Jira ticket for it. And because there was a Jira ticket, I should no longer pay attention to the fact that your changes are in the release candidate we are thinking of giving to customers any moment now, as soon as testing of them is done. I appreciate that you think tracking it will help make it happen. But have you paid attention to how often tracking with Jira means an excuse for not doing it right now in our team? And to how, with you assigning work to others, fewer and fewer of us volunteer to do things, because you keep filling our task lists, identifying all sorts of things everyone else could be doing and assigning them around?

The day continues, and I find two features I want to personally spend some time testing. The first one I pick up was created by yet another lovely developer. They didn't tell me it was done either, but my superpowers in spying on folks' pull requests reveal the right timing for the test. It was created by someone new, so I do more communication than I normally would. And I do it visibly. Only to have you, dear lovely developer colleague, again telling me how I should test it. Or rather, not test it. How there is someone else somewhere that will test it. Obviously I follow up to check what that someone else has promised to do, and they haven't. Or they have promised to do something different. Like 1/10 of what needs doing. So I walk over your great plans of leading the testing I do, only to find out that the feature I tested doesn't work at all. So we talk about it, and you seem to agree with me that this could have been the level of testing that takes place in our team. It would, in fact, be testing that takes place not only in our team, but by the developer introducing the feature. Missing it once is fine. But not making it a habit, through feedback, is better. I introduce some ideas of exploratory testing on how I will make it fail, once I first get it to pass once. You show me thumbs up, as if I needed you to approve the testing I do. I love how you care. But I wonder, do you realize I care too?

So I move on to the second feature I found, having confirmed that it will not be fully tested elsewhere. And you keep coming back to correct me on what I should be doing.

Could you please stop assigning me Jira tickets that isolate testing from the pull request or the piece of implementation you've worked on? Could you let me design my own work and trust that I know what I'm doing, without trying to micromanage me?

I appreciate you care, but we all care. We all have our say in this. Stop leading the testing I do, and start sharing information you have, not assigning me tasks.

I already think you're lovely. Imagine how awesome we'd be if you started believing I can do this? That we all can do this, and that you are not alone in this. That you don't need to track us, but work with us.

The Right Time for Testing

There is one significant challenge working as a tester with my current team: self-allocating work to fit the right priorities. With 12 people in the team, significant responsibilities of owning our components' test automation and other testing, as well as accepting work into the system we are responsible for releasing in our internal open source community, there are a lot of moving parts.

In times before agile, a common complaint from testers was that we were not allowed to join the efforts early. But now we are always there. Yet yesterday someone mentioned not getting to be in a new feature effort early enough. With many things going on, being on time is a choice one has to make personally: to let go of some of the other work and trust that other people in the team (developers) can test too.

With incremental delivery, you can't really say what is "early" and what is "late". Joining today means you have a chance of influencing anything and everything over the course of improving it. There is no "it was all decided before the efforts started". That's an illusion. We know of some of the requirements. We're discovering more of them. And testers play a key role in discovering requirements.

We've been working on a major improvement theme since June. Personally, I was there to start the effort. Testing was there first. We're discussing timing, as I'm handing the "my kind of testing" responsibility to another senior, who is still practicing some of the related skills. They join later than I joined. But the right time for all of testing is always today: never too early, never too late.

Yesterday I was looking into helping them get into the two new areas I work on regularly that they are getting started with: requirements in incremental delivery, and unit tests within the current delivery. I made a choice by the end of the day. I will choose unit tests and fixing a new developer. The other will choose requirements and fixing future iterations.

To succeed in either, we cannot be manual testing robots doing things automation could do for us. We're in this for a greater discovery that includes identifying things worth repeating. Exploratory testing feeds all other ways of testing and developing.

The right time for testing is today. Make good choices on what you use that time on. There's plenty of choices available, and you're the one making them even when choosing to do what others suggested or commanded.

Tuesday, September 11, 2018

Tests Surviving Maintenance

As we create tests for our automation suites, we put some care and effort into the stuff we are creating. As part of the care and effort, we create a visualization on a radiator of how each test is doing. The blue (sometimes yellow/red) boxes provide a structure around which we discuss making a release, with the meaning of basic things still working.

When something turns not blue and the person who created the piece of code is around, they usually go tweak it, exercising some more care and effort on top of the already sunk cost. But a repeating pattern seems to be that if the person who created the piece of code is not around, the secondary maintainer fixes things through the method of deletion.

I had a favorite test that I did not create. But I originated the need for it, and it was relevant enough that I convinced one of my favorite developers (they're all my favorites, to be honest) to create it for me. It was the first step toward having a script do something a colleague had been doing long term: just opening the application and seeing if it was still alive.

I cared for that test. So I tested what would happen if the machine died, by letting the machine die. The test, like so many other tests, did not survive maintenance. Instead of bringing back the machine it was watching, the watcher vanished.
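
To make it concrete, here is a minimal sketch of what such a liveness watcher could look like in Python. The application path and the 30-second window are hypothetical placeholders, not the actual test:

    import subprocess
    import time

    APP_PATH = r"C:\path\to\client.exe"  # hypothetical placeholder path

    def test_application_stays_alive():
        """Open the application and check it is still alive a moment later."""
        process = subprocess.Popen([APP_PATH])
        try:
            time.sleep(30)  # give it a chance to crash on startup
            # poll() returns None while the process is still running
            assert process.poll() is None, "application died within 30 seconds"
        finally:
            process.terminate()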

As I mentioned this on Twitter, someone suggested that perhaps the test was missing clarity of intent. However, I think the intent was clear enough, but debugging a vanished machine was harder than deleting the test. Lazy won. The problem might have been lack of shared intent, where people have the tendency of maintaining other people's stuff through deletion. The only mechanism I've seen really work on shared intent is mobbing, and in previous cases it has significantly improved the chances of other people's tests surviving maintenance.

Lazy wins so easily. Failing tests that could prevent deploys - for the very reason of showing the version should not be deployed - get deleted in maintenance. It's a people issue, not a code issue.

We need blue to deploy. Are we true to ourselves in getting our tests to keep us honest? Or do we let tests die in maintenance more often than not?

Friday, September 7, 2018

Three Kinds Of Testing

This week brought me a couple of reminders of a past I wish I had left behind, but one that is still very much day to day for some other testers. This is a past of writing test cases. And when I say writing test cases, I mean the non-automated kind. The documents that help drive testing. The idea that if only we wrote them well enough, anyone could pitch in to any of the testing.

There are organizations that put careful thought into their test case documentation. I'm lucky to be in an organization that puts careful thought into their test execution with emphasis on learning.

Some weeks ago I tweeted that I don't think we need to speak of using both automated and exploratory testing, because these are not the same kind of thing. With this week's realizations, I think there are three kinds of testing.

There's the kind of testing I work with. I would call that exploratory testing. It engulfs smart use of tools, programming and even regression test automation in a frame of learning.

There's the kind of testing that test case folks work with. I would call that manual testing. It includes creation of manual procedures for testing, with emphasis on planning ahead of time, not so much on learning.

And then there's the kind of testing that all too many test automation folks do. They take a manual test idea and turn it into automation so that whatever is hard is left out. They take their "designs for tests" from somewhere outside their own work.

The first kind is really the only kind. And people doing that kind of testing may identify as testers, test automation specialists, or software developers. It's not about the role, but about the mindset of learning through empirical evidence that seeks to disprove, in order to build a stronger case for the idea that things might work after all.

Saturday, September 1, 2018

Think Testing as Fixing

"Wait, no! We have not promised THAT!", I exclaimed as I saw a familiar pattern forming.

There was an email saying we would be delivering feature Gigglemagickles for the October release. We had not really even started working on it. We had just been through our priorities and it wasn't on them. And yet someone felt certain enough to announce it.

I started tracking what had happened.

  • In June, someone asked a tester if Gigglemagickles might work. They reported that they had ONE TEST in test automation that wasn't failing. It wasn't testing the whole thing either, but it wasn't failing. We could be carefully positive.
  • In early August, someone asked me if I thought it should be done, and I confirmed it should, if we could consider not building a software-based licensing model that would enable forcing a different pricing. We did not agree we would do it; we just talked about it making sense.
  • In late August, it had emerged in emails and turned into micromanagerial Jira tickets assigned to individuals in my team, telling them to test Gigglemagickles for compatibility.
  • Since it was now around, another micromanagerial command was passed from a developer to a tester to start testing it.
  • As I brought the testers back to what needed to be done (not Gigglemagickles), we simply added a combination test to the automation suite to know if there was a simple way of seeing Gigglemagickles did not work, beyond the one test.
What a mess. People were dropping things they really needed to do. They were taking mistakes as the new high-priority order, without much critique.

Half an hour later, with the testers back on track, we found big problems in other themes we had promised. The diversion was postponed.

We failed again at communication. Gigglemagickles alone was probably OK as a client feature. But it depended on a backend system no one asked about or looked at. And it depended on co-existence with other features someone was now adding as a testing task for us, at a time when there was no time available. We had fallen for developer optimism again: we don't know of implementation tasks, so let's promise it is ready. The bugs we have not yet found are not our concern. The testing undone isn't my problem.

Let's all learn that we need to think of testing as FIXING. If we don't have things to fix after testing, we could just hand it out now. But we do, and we need to know what to fix. That's why we test. 

Thursday, August 30, 2018

The Rewrite Culture and Programming Ecosystems

With 25 years in testing and close collaboration with many software teams working with different languages, there is a particular aspect to software development that is close to my polyglot programmer heart. I love observing programming ecosystems.

The programming ecosystem is a construct around a language that shows in many ways. We see some of the aspects in jokes of how different programming languages approach things. I see it for now as a combination of at least the following:
  • programming language
  • preferred IDE
  • 3rd party tooling and use of libraries
  • community values
  • culture of what you can do (without getting in too much trouble)
Moving between ecosystems makes you pay attention to how things have shifted and moved. The bumble bees moving between ecosystems look to be advancing things within the ecosystems, trying to bring better parts from one to another.

I'm thinking of this today because someone proclaimed yet another rewrite. They will scrap all there was, because figuring it out is too difficult and it is ugly code (tm), and write it again, beautiful. I just think the definition of beautiful here is "I understand it", not "We all understand it". And my tester spidey sense is tingling, because every single rewrite I've ever seen results in losing half of the features: the ones that were not evident from the code, but held in the tester memory bank of the purposes why the software exists and the flows for the users it is supposed to be created for.

I realized that rewrite culture is part of the C++ ecosystem a little more than some of the other language ecosystems. In the Crafter community we have discussions of generally favoring refactoring over rewriting (gradually cleaning up); suggesting that in the C++ ecosystem feels like I'm suggesting something unspoken. Knowing how much the tooling differs, for example from the C# ecosystem, this all makes more sense. Refactoring C++ is a different pain. And it makes sense that in this ecosystem the pain moves more towards those who end up testing it with a lot of contextual information.

And please, don't tell me that "not all C++ developers". Enough of them to consider it a common view. And not just in this particular organization.

We become what we hang out with unless we actively exert energy into learning new patterns that contradict what comes easy.

I love programming. And in particular, I love the programming ecosystems and how they talk to me about the problems I end up finding (through paying attention) as a tester.


Seeing Negative Space

Have you ever heard the saying: "There is no I in the TEAM"? And the proper response to it: "Yes there is, it is hiding in the A-holes". This is an illustration of the idea of negative space having a meaning, and that with the right font, you can very clearly see "i" inside the A.

I'm thinking of this because of something I just tested that made me realize I see negative space a lot as a tester.

The feature of the day was a new setting that I wanted to get my hands on. Instead of doing what a regular user would do and looking at only the settings that were revealed, I went under the hood to look at them all. I found the one I was looking for, set it to False as I had intended, and watched the application behavior not change. I felt disappointed. I was missing something.

I opened a project in Stash that is the master of all things settings. I was part of pushing for a central repo with documentation more than a year ago, and had the expectation that I might find my answers there on what I was missing. I found the setting in question, with a vague hint saying it would depend on a mode of some sort, which I deduced to mean it must be another setting. I asked, and got the name of the setting I needed, with the obvious name of "feature_enabled". I wasn't happy with just knowing what I needed to set, but kept trying to find this in the master of all things settings, only to hear that since the way we are using this one is way 1 out of 4, I could not expect to find it there. I just need to "know it". And that the backend system encodes this knowledge, and I would just be better off if I used the system end to end.

Instead of obeying, I worked on my model of the settings system. There are two things that are visible, and two things that are invisible. All four are different in how we use them.
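
To show the kind of model this turns into, here is a minimal sketch of a combination test over a visible setting and its invisible gate. The test hooks (client_test_hooks, set_setting, restart_client, behavior_is_on) and the visible setting's name are hypothetical stand-ins; only "feature_enabled" comes from the story above:

    import itertools

    import pytest

    # Hypothetical helpers standing in for real system test hooks.
    from client_test_hooks import set_setting, restart_client, behavior_is_on

    @pytest.mark.parametrize(
        "feature_enabled,new_setting",
        list(itertools.product([True, False], repeat=2)),
    )
    def test_setting_combinations(feature_enabled, new_setting):
        """The visible setting should only matter when the invisible gate is on."""
        set_setting("feature_enabled", feature_enabled)  # the invisible gate
        set_setting("new_setting", new_setting)          # the visible setting
        restart_client()
        assert behavior_is_on() == (feature_enabled and new_setting)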

Finding the invisible and modeling it, I found something relevant.

It's not only what is there that you need to track; you also need to track what is there but isn't visible. The lack of something is just as relevant as the presence of something when you're testing.


Tuesday, August 14, 2018

Options that Expire

When you're exploratory testing, the world is an open book. You get to write it. You consider the audience. You consider their requirements. But whatever those are, what matters is your actions and choices.

I've been thinking about this a lot recently, observing some problems that people have with exploratory testing.

There is a group of people who don't know at all what to do with an open book. If not given some constraints, they feel paralyzed. They need a support system, like a recipe, to get started. The recipe could be to start with specifications. It could be to start with using the system like an end user. It could be using the system by inserting test values of presumed impact into all the fields you can see. It could be a feature tour. It could really be anything, but this group of people wants a limited set of options.

As we start working towards more full-form exploratory testing, I find we are often debriefing to discuss what one could do to start. What are my options? What is possible, necessary, even right? There is no absolute answer to that question, but a seemingly endless list of ways to approach testing, and intertwining them creates a dynamic that is hard, if not impossible, to describe.

I find myself talking about the concept of options that expire. Like when you're testing, there is only one time when you know nothing about the software: before you started any work on it. The only time to truly test with those eyes is then. That option expires as soon as you work with the application; it is no longer available. What do you then do with that rare moment?

My rule is: try different things at different times. Sometimes start with the specification. Sometimes start with talking to the dev. Sometimes start with just using it. Sometimes pay attention to how it could work. Sometimes pay attention to how it could fail. Then stop to think about what made this time around different. If nothing else, the intentional change makes you more alert in your exploration.

I've observed other people's rules to be different. Some people always start with the dev, to cut through the unnecessary into the developer intent. That is not a worse way or a better way, but a different way. Eventually what matters, for better or worse, is when you stop. Do the options expiring fool you into believing things are different than they are? Make you blind to *relevant* feedback?


Saturday, August 4, 2018

Test-Driven Documentation

I remember a day at work from over a decade ago. I was working with many lovely colleagues at F-Secure (where I returned two years ago after being away for ten years). The place was full of such excitement over all kinds of ideas, and not only leaving things on the idea level but trying them out.

The person I remember is Marko Komssi, a bit of a researcher type. We were both figuring out stuff back then in Quality Engineering roles, and he was super excited about and sharing a piece he had looked into long before I joined. He had come to realize that, in the time before agile, if you created rules to support your document review process, the top 3 rules would find a significant majority of the issues.

The excitement of ideas was catchy, and I applied this in many of my projects. I created tailored rules for requirements documents in different projects, based on a deeper understanding of the quality criteria of that particular project, and the rule creation alone helped us understand what would be important. I created tailored rules for test plan documents, and they were helpful in writing project-specific plans instead of filling in templates.

Over time, it evolved into a concept of Test-Driven Documentation. I would always start the writing of a relevant document by creating rules I could check it against.

The reason I started writing about this today is that I realized it has, in a decade, become an integral part of how I write documentation at work: rules first. I stop to think about what would show me that I succeeded with what I intended, and instead of writing long lists of rules, I find the top 3.
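
To make "rules first" concrete, here is a minimal sketch of how top-3 rules could even be encoded as checks to run against a document. The rules themselves and the file name are hypothetical illustrations, since real review rules are project-specific and often checked by humans rather than code:

    # Minimal sketch: each review rule is a named predicate over the document text.
    RULES = {
        "Names its intended reader": lambda text: "audience" in text.lower(),
        "States what success looks like": lambda text: "success" in text.lower(),
        "Fits on one page": lambda text: len(text.splitlines()) <= 50,
    }

    def review(text):
        """Check the document against the top-3 rules and report each result."""
        for name, rule in RULES.items():
            print(("PASS" if rule(text) else "FAIL") + ": " + name)

    with open("test-plan.md") as doc:  # hypothetical document
        review(doc.read())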

Many of the different versions I've written are somewhere on my messy hard drive, and I need to find them to share them. 

Meanwhile, you could just try it for your own documentation needs. What are the 3 main rules you review a user story with? If you find your rules to be:
  • Grammar: Is the language without typos?
  • Format: Does it tell the "As a... I want... so that..." parts?
  • Completeness: Does it say everything you are aware of that is in and out?
you might want to reconsider what rules make sense to use. Those were a quick documentation of discussion starters I find wasteful in day-to-day work.

Tests first, documentation then. Clarify your intent. 


Friday, August 3, 2018

Framing an experiment for recruiting

I was having a chat with my team folks, and we talked about how hard it is to feel connected with people you've never met. We had new colleagues in a remote location, and we were just about to head out to hang out and work together to make sure we feel connected. My colleagues jumped into expressing the faceless nature of calls, triggering me to share a story of what I did this summer.

During my month of vacation, I talked to people over video, each call 15 minutes, about stuff they're into in testing. I learned about 130 topics, each call starting with the assumption that everyone is worthwhile, yet we need some way of choosing which 12 end up on stage. The majority of them I talked with face to face, and I know from past years that for me it feels as if we really met when we finally have a chance to physically be in the same space.

This is the European Testing Conference Call for Collaboration process, and I cherish every single discussion the people have volunteered to have with me. These are people, and everyone is valuable. I seek stories, good delivery, and delivery I can help become good because the stories are worth it for the audience I design for.

This story led me to say that I want to try doing our recruiting differently. I want to change from reading a CV and deciding if that person is worth talking to, into deciding that if they found us worth their time, they deserve 15 minutes of my life.

I've automated the scheduling part, so with the experience of 2 years and 300 calls, I know how to do this without it eating away my time. All I need is that 15 minutes. I can also automate the rules of when I'm available for having a discussion, and leave the scheduling to the person I will talk to.

So with the 15 minutes of face time, what will we do? With European Testing Conference, we talk about talk ideas and the contents of the talks. With the recruitment process, we test for testers and we code for programmers. And my teams seem to be up for the experiment!


My working theory is that we may be losing access to great testers by prematurely leaving them out. Simultaneously, we may be wasting a lot of effort in discussing whether they should make it further in our processes. We can turn this around and see what happens.

Looks like there is a QE position I could start applying this to next week! And a dev position we can apply it to in about a month.

Meeting great people for 15 minutes is always worth the time. And we are all great - in our own different unique ways.

Wednesday, August 1, 2018

Seeing what is Right when Everyone Seems Happy with Wrong

I'm uncomfortable. I've been postponing taking this topic up in a blog post, convinced that there must be something I'm missing. But it may be that there is something I'm seeing that many others are not seeing.

A few months ago, I volunteered at work to try out crowdtesting on my product. It wasn't the first try of crowdsourcing; quite the contrary, as I learned we were already a happy user of one of those services for some other products.

The concern I had surprised everyone, providing a perspective no one else seemed to think about. I was concerned whether using a service like this would fit my ideas of what is ethical. I was not focused on the technicalities and the value and usefulness of the service, but on whether using the service was right. It isn't obviously wrong, because it is legal. Let's look a little more at where I come from.

I live in Finland, where we have this very strongly held idea that work needs to be paid for. Simultaneously, we are a promised land of non-profit organizations that run on volunteer work, with huge numbers of organizations like that in relation to the number of people. But the overall structure of how things are set up is that you don't have free labor in companies.

The unemployed and the trainees get paid for the work they are doing. And this forces companies to be on good behavior in the social structures, and not to leech off the less fortunate.

So if my company in general wants someone to do testing, they have three options.

1) Customers Test
They mask it as "no need to test" and force their end users to test, and pay for structures that enable them to figure out what a normal user is actually complaining about, and structures for fixing things fast when the problems hit someone who not only raises their voice but walks out with a relevant amount of money.

2) Someone in their employee list tests
They pay someone for some of the testing, and some companies pay testers to get more of the whole of testing done. If you tested half of what you needed, you still tested. And this is the narrative that makes the current "programmers test it all" trend discussions so difficult in practice. There's testing and there's testing. It takes a bit of understanding to tell those two apart.

3) Someone in a service provider organization tests
They pay a vendor for doing some testing. The vendor is an abstraction, a bubble where you conveniently allocate some of your responsibilities, and in many cases you can choose to close your eyes at the interface level.

Crowdtesting belongs to the third option, and relies, in my view, on closing your eyes at the interface level. Crowdtesting is the idea of paying a company for the service of finding you a crowd. They find the crowd by paying those testers, but I'm not convinced that the models of paying represent what I would consider fair, right and ethical.

So, I pay a legitimate company 5000 euros for them to do some testing we agree on under the label "crowdtesting". Yay! Is that really enough thinking on my part? You get 30 testers with that, so cheap! (The numbers are in scale, but not actuals.) That would be under 170 euros per tester, before the vendor takes its share. They even promise most of them will be in Finland and other Nordic countries. If your alarm bells aren't ringing, you are not thinking.

The more traditional companies producing things like coffee or clothes know painfully well it isn't. You really don't want your fashion brand to be associated with child labor, inhumane working conditions or anything else nasty like that. Surely you can save loads of money using companies whose delivery chain hides the unethical nature of how the service is created, but you risk a reputation hit. Because someone, hopefully, cares.

Crowdtesting companies are not in the business of child labor, at least to my knowledge. But they are in the business of forcing my people (testers) away from being paid for their work, to being paid for their results. And with results being bugs and checkmarks in test cases, it's actively generating a place where folks in these programs are not necessarily paid for the work they do.

The way the monthly fee is set up for the customer makes this worse. There's a limit on how many of these paid checkmarks you can allocate a month, but you're told you're allowed to ask for unlimited exploratory testing on top of that. The financial downside for the people doing testing at the low end of this power dynamic is that they are only paid for the bugs the customer accepts.

Many people seem really happy with doing testing in these schemes.

The ones promoted to the middle management layer get paid for their hours, leaving even less of the financial cake for the low end of the power dynamic. But why would they care? They get paid.

The ones privileged enough to not really need a job that pays a salary get paid whatever, but since the money never mattered, why should they care? This is a hobby on the side anyway.

The ones living in cheaper countries getting paid the same amount as the testers from the Nordics may actually make a decent amount of money in finding problems, and might not even have a choice.

The really good ones who always find problems can also get paid, as long as the effort of finding something relevant isn't getting too high.

But the ethical aspect that I look at is local. At the low end of the power dynamic are the testers kicked out by projects that claim to no longer need them, but actually do. They just no longer want to pay for all of their work, and hide this fact in the supply chain.

Maybe I should just think more globally. I don't know. But I am uncomfortable enough to step away. I don't want to do this.

With the same investment, I can get someone who works close to the team and provides more value. Without being exploited. With possibilities of learning and growing. Being appreciated.

And I say this looking from a higher end of that dynamic. I'm not losing my job in this shift.

I don't like crowdsourcing. Do you?