Thursday, May 9, 2019

Personal Busy Lists and Skills to Manage Them

I'm a recovering compulsive list maker. I used to worry about forgetting a great idea for a blog post unless I immediately wrote it down. After all, it felt important at the time, and letting a great idea slip away because of my forgetful nature would have been such a waste. So I wrote it down. I wrote hundreds of them down. And whenever I was thinking of writing a blog post, instead of writing that post I would prioritize my list of ideas until I was out of time.

We humans can remember only a handful of things at a time. Early research suggested that our "human envelope", our ability to remember things, was 7 ± 2 items. Later the envelope shrank, with repeated studies identifying that we could remember 5 ± 2 things. The latest replication I have gotten my hands on is from 2001 and reports our envelope as 4 ± 1 items. Sometimes I wonder if we today can even hold one thing in our mind.

So we write lists. In particular, we write lists of things we need to remember to do at work. Sometimes those lists are private and structured in various ways. And more often than not, we create a place for our shared lists in issue trackers.

For my work-related lists of testing work, I've grown aware that there are two kinds of work:

  •  Things you do until they're done (open ended)
  •  Things you do for a particular time to declare them done (time-boxed)

The best way to declare testing done is to time-box it. Give me 5 minutes, and I'm done. Give me five days, and I will be done too. Whatever you do, work as if your time is running short and the work you are doing now may be all the work you get to do: prioritize the important things first so that when you're out of time, your most important work has been done.

My lists of testing work change so much that I prefer not to split them into a shared structure that pretends to reflect my real understanding. Learning changes my plan, and I want to be able to throw away the old when it is time to bring in the new. The list-making habit won't help me with that.

Recently I've been trying to pay attention to the different ways people create and manage their personal busy lists, and how that reflects on what they ask of others. I try to remind myself to treat others as I would like to be treated, extended with caring about how they actually want to be treated over pushing my definition. I very often find I fail with the corollary principle, but I still feel a self-righteous joy when I believe others are not yet there even with the first principle, asking of others what they would not ask of themselves.

I believe I cannot ask others to work only on issues from Jira since I never do that.

I believe I cannot ask others to put all the work they do in Jira as I would never do that.

I believe I should not think that having a ticket in Jira assigned to a person actually means they have accepted the work, can do the work, and that the work is progressing as I wish, without speaking about it.

I believe that when people make wrong choices on priorities and do less important things first, missing the things they really should be paying attention to, there is always a reason. Accepting human reasons, like the need to sprinkle joyous tasks around to make the boring bearable, should be the default.

Sometimes our personal busy lists grow to lengths we can no longer manage. When everyone has a personal busy list, trying to share the important work from our own lists becomes overwhelming.

Practicing the skills

1. Share the top

When you are working from a list and you have people around you, instead of sharing the whole list, share the topmost items. Share what you're working on now. Implicitly, you're sharing what you are not working on now, because if it isn't on the top and you are working on the top, the rest of it waits.

2. Seek to understand priorities

When you choose what bubbles up for you, you need to understand the environment you're working in. What defines urgent and important? If you have seven things someone has pushed onto your personal busy list with their wishes and hopes, you'll need to make choices about what happens now and what waits.

With continuous releasing, being away from the office for a day can mean that whatever was on your list has moved onto someone else's. Learn a way to track priorities.

3. Keep the list short

When you let lists grow, even if you don't share them to a common standard, organizing the list can take significant effort. Reflect on how that is serving you. Find a way of making part of your long list your short list. And check back on whether the long list was really something you needed.

4. Share to learn

When you tell others what is on your list, they will share things inspired by it. It might be about priorities. It might be about how they would like to help you. It might be about what more they'd appreciate as your contribution. Take sharing as a learning experience.

What other rules to manage your busy lists do you have? Think about it, and let me know. 

Wednesday, May 8, 2019

The Scout Rule on Tickets

I've never been a scout, yet all of us, regardless of level of involvement, know the scout rule: always leave places you visit cleaner than they were when you found them.

The rule works on code, reminding us of continuous refactoring and allowing our accrued knowledge to drive us to change things that were good enough with yesterday's knowledge.

While the code we spend our time on is what ends up shipped, changing the lives of real users, code is far from the only thing we end up creating.

For a long time, I have been particularly fascinated with the tickets of bugs, issues, and work that we log in tools like Jira. I've been looking into them in more detail, realizing that they can sometimes be helpful, but more often they are a liability: an excuse not to deal with things, a way to externalize ideas at a significant cost, and something any tester is so routinely trained to pay attention to that changing this requires a whole rewiring of how we deal with things in teams.

I've spent significant personal energy on not writing tickets and instead having face-to-face discussions. I find myself still a recovering ticket addict, who all too easily writes a ticket rather than risk forgetting things, even if the risk of forgetting is what changes the fixing behavior.

Today, I looked at all the tickets I have written while working at F-Secure. I learned that I have logged 748 tickets of all sorts since I started there. Overall, I now have six years of F-Secure service behind me: three years a decade ago, and now three years after my absence. That is roughly 125 tickets per year of service. Assuming each ticket took me 10 minutes to create, that is about 20 hours of my life a year.
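
The back-of-the-envelope arithmetic is easy to check (the 10-minute estimate is a guess, not a measured number):

```python
tickets = 748
years = 6
minutes_per_ticket = 10  # a rough guess at the cost of writing one ticket

tickets_per_year = tickets / years
hours_per_year = tickets_per_year * minutes_per_ticket / 60
print(round(tickets_per_year), round(hours_per_year))  # 125 21
```

That is, roughly 125 tickets and around 20 hours a year, as stated above.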

However, the interesting bit today was not how much of my life I have invested in writing those, or even whether that time could have been spent in a better way. Nor was it how the profile has changed since the days when I did not avoid writing Jira tickets. The interesting bit was to look at the possible leftovers I had left behind.

The scout rule applied to tickets would say that

  •  If I took the time to report, I should make sure there is a decision or a fix on the issue I carefully crafted.
  •  I should clean up the trail I leave, and not leave tickets lying around.

Out of the 748 tickets, 53 were open today. Only a few of them are things I actively follow through on right now. Most were things I had written down, assigned somewhere, and forgotten. The oldest issue still open and lying around, unused, was dated April 8th, 2005.

Getting a realistic picture of what work is there and what has been dealt with relies on someone cleaning up. A good tester isn't the one who reports the most issues, but the one who gets the relevant issues found and fixed. Leaving things messy and tickets lying around isn't serving the information goals of testing; it is a symptom of me not carrying through my purpose, which I now understand more clearly than in the past.

Remembering the scout rule on tickets, my own in particular but also those of the people around me, would serve my purpose better.

Comment from Ali Hill at Patreon:

This is really interesting to me too. The 'recovering ticket addict' comment is especially true for me. I started off testing in a place where you were expected to log 5 bug tickets a day. It took me a while to move away from that mentality, but like you, I tried to have more face to face discussions about issues I'd found. I found that I had more success in getting bugs fixed there and then. I actually set myself a rule that if a bug ticket wasn't going to be fixed this sprint, or next sprint, then I wasn't going to bother raising it anymore.

I see the value in tools like Jira for teams who aren't sitting in the same office, but I've had more success using physical Kanban boards on whiteboards with my teams in the past. People are more engaged with the 'tickets' as they're right there in front of them. The problem is that management need reports, so we were essentially duplicating our task tracking.

Untidy Jira boards really annoy me, so your post has struck a chord, thanks for writing!



Thursday, May 2, 2019

Trap of Skipping Releases

It was the third day of making a release, and it felt like everything was going wrong. More like everything was harder than it should be. We had covered plenty of scenarios in our testing efforts, extended the test automation that we can run against the production environment and yet everything felt like an uphill battle.

To top off the frustration, just as we were completing the last of the last steps, one of us found a showstopper issue we all immediately agreed we could not release with.

What happened? The team that had been making releases fluently, with moderate effort and no pain, had regressed into a frustrated pile of individuals failing at making a release, while succeeding in doing the right thing and caring for the millions our release would impact.

Looking back, we fell into a common trap we should have known to avoid: the trap of not releasing often, and all the time. Instead, we collected 600 pull requests into one release. There was no way of making a release of that size low risk, no matter what we did.

The previous release included a design mistake we learned about as we made the release. Due to that design, which was very much intentional even if misinformed, many of the users could not get to the latest version. They weren't particularly bothered; we were. As an end result, we stopped releasing until we could figure out a new design for that particular problem.

Making a big release brought out the worst of us:

  • We had trouble communicating about who does what; simple things were done many times over, while other things were not done on time.
  • We ran test automation late, as we were struggling to get it to catch up with all the changes the product included.
  • Finding a problem, we asked why the others had missed it, and turned into a blaming crowd.
  • We started passing tasks from our own lists to others, and created assumptions of completion just by assigning them.
  • We started growing the scope as making it available was taking time. Since it will be tomorrow anyway, perhaps we could include this one more thing - all contributing to the frustration.

Doing our worst in some years is, in many ways, a healthy reminder. It reminds us of what we had with continuous releases, and how important that is. It aligns us on the goal of maintaining continuous releases.

And it could be worse. It could be that this wasn't an internal frustration and a reminder, but something that would reflect poorly on our users. Hoping for the best, fearing the worst, and ready to take on whatever project life brings us.

Looping Learning to Exploratory Testing

Testing is exploratory to the degree we are learning. If we do something, learn from it, and our lessons learned have an impact on what we end up doing next, we're observing exploratory testing. Looking at the system under test as our external imagination, we learn not only about it but also about ourselves.

One system that teaches me different things every time I bring a group together to test it is a little electronic game, Boggle Cubes. You probably know the big sibling of the game, Scrabble. You build words from letters and get points for whatever words you manage to put together.


For a session of exploratory testing, I often split the work into three pieces we debrief, 20 minutes of testing each. I have three sets of the game, and as I pass a set to the people testing, it has three pieces: the black box it all comes in, the cubes you can play with, and the documentation, namely Finnish/Swedish instructions.

With a small group, I hand all sets to them and allow them to self-organize into how they test. With a larger group, I divide them into subgroups, each working with one set.



With small groups, sometimes people choose to work solo, sometimes they pair. For this particular session, the three people, my colleagues at work, ended up with a nice mix. What did we learn together?

Lesson I. Testing is not using the system. You need to be more driven toward information and understanding. More intentional.

A user can spend hours with this system we were testing and come out with "It was fun" and "I finally got full points, success!". Testing a game, they end up playing the game, missing their information objective: does it work? Does it have something that could be of concern in how it's built and what it does? A tester goes in and builds a model: what features are there? What perspectives can I choose from, and what will I focus on first?

Lesson II. Intent is not enough. Serendipity happens through elements of play and making unplanned connections. Awareness of what you know and how you know it becomes essential.

At first, you'll have an idea of how you learn and what you look at. Then something surprising happens, like someone taking away one of your five cubes, and you learn you can still play the game. You say something, and the others realize they had not thought of the things you were seeing. Using gives luck a chance to reveal information we did not plan to know but welcome as it emerges. Serendipity is a lucky accident that happens to those who dedicate time to keeping their senses awake.

Lesson III. You are in control. When the system gives you a hard constraint, there are still ways you can work through that constraint. If you can simplify or isolate, do that. The complex can happen later.

With five letters, there is quite a number of possible permutations that could form words. And every time the game starts, it assigns a seemingly random collection of five you have no control over. Not the easiest thing to test, always under a time constraint and needing to start anew. Realizing you could test scoring with four letters, making it simpler, is relevant. It does not save you from testing five, but it already gives you a more contained idea of how that might work.
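
To put a number on "quite a number": with five distinct letters and candidate words of two to five letters, there are 320 ordered arrangements to consider. A quick sketch (the letters here are just an example draw, not from the actual game):

```python
from itertools import permutations

letters = "BOGLE"  # an illustrative hand of five distinct letters
# All ordered arrangements of 2..5 letters drawn without repetition
candidates = ["".join(p) for k in range(2, len(letters) + 1)
              for p in permutations(letters, k)]
print(len(candidates))  # 320 = 20 + 60 + 120 + 120
```

Each new random hand of letters means a fresh 320 candidates, which is why testing with four letters first contains the problem so nicely.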

Lesson IV. Testability. When something is hard to do, there is a developer somewhere who needs to share the pain to build smarter next time

The random collection of starting letters does not make it easy to test the correctness of scores. When testing something is hard and you have ideas on how it could be simpler, you are recognizing a testability feature. Control over the starting position would help testing these. Control over identifiers helps test web user interfaces. The best way to get the stuff that helps you is to add it yourself. The second best is to share the pain with someone who can help, and who often has less patience for testing the hard way than you might: there is a developer somewhere who needs to experience the difficulties, knowing they could change things.

Lesson V. You've got tools. Find them. Pull in what you need; don't rely on just what you're given.

With four or five letters, you can create permutations. You don't need to do this in your head only; writing your inputs down on paper seems like a smart thing to do. As soon as you write them down, having all the permutations available seems like a good thing. A Google search can help you, even if no one mentioned you were allowed to use it. Any tools you find useful are at your disposal when you test, not just whatever you were given.

Lesson VI. Yes, that automation of permutations integrated with a vocabulary would come in handy. What you can't find but want, you can build.

Some tools you want won't exist. But you can always create the tools you want. Come up with an idea, and take the time to build tools that help you test. They could be reusable or disposable. They may do setup for you, or testing for you. In this case, they could help you recognize the real words that you should get points for.
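
A disposable version of that tool can be a few lines of Python. This is a sketch only: the function name, the example letters, and the tiny stand-in vocabulary are all my own illustration; a real run would load a full word list for the language you play in.

```python
from itertools import permutations

def real_words(letters, vocabulary):
    """Return every ordering of 2..len(letters) letters found in the vocabulary."""
    found = set()
    for k in range(2, len(letters) + 1):
        for p in permutations(letters, k):
            word = "".join(p)
            if word in vocabulary:
                found.add(word)
    return sorted(found)

# A tiny stand-in vocabulary for illustration
vocabulary = {"BOG", "LOG", "OGLE", "GLOBE", "LOBE", "BOLE"}
print(real_words("BOGLE", vocabulary))
```

With the set of words in hand, checking the game's scoring against them becomes a mechanical comparison instead of a guessing game.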

Lesson VII. It really does not calculate scores right. But does that matter - what is quality for this game? What creates emotions relevant enough to impact user’s behavior?

With all that testing, if you're successful, you realize your testing was limited in scope. If you focused on scores, you should know this: it does not work. You can sometimes get a full score for a specific set of letters. Most of the time that is not possible. There are legitimate words it does not recognize. But does that matter? For this game, we always have fun just playing it. What problems would be relevant enough for you to take the game back to the store and claim your money back?

This round of gamified testing helped us learn about the system we were testing, but also about testing and how we could approach it. Do you still learn every day as you test?

Wednesday, May 1, 2019

Never Be Bored

There are some personal guidelines that have served me well in my career in software testing and creation. While I teach Be Lazy to new programmers, to avoid mistakes by allowing IDEs to do some of the basic lifting for them, my main thing to teach new testers is the idea of Never Be Bored.


Imagine a day at the office, testing the same software you've tested before. To get into the system, you need to log in. To get started with testing the functionality, you need some actions to set the state and data just right. You keep repeating the moves, if not every day, quite often. How could you not be bored?

Let's face it: repetition is boring. If you feel bored, that should be a trigger telling you that you're not doing what you could be doing. Never repeat exactly. Keep yourself awake, thinking of what you could vary.

You need to log in. But why would you need to log in with the same user account every time? What if, just to spice things up, you'd create a new one? You need the state and data right for your test. What if, just today, you created a different set of data? What if, just today, you created a script to do this, and varied that script whenever you feel the need to do something different? You need to get to the right functionality, and it requires these moves. But what if you did something before, something while getting there, and something after?
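
That kind of variation script can be tiny. Here is a minimal sketch; the field names and value choices are made up for illustration, and a real system would have its own signup API or fixture format to feed this into:

```python
import random
import string

def fresh_user():
    """Generate a slightly different test account and starting state each run."""
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=6))
    return {
        "username": f"tester_{suffix}",
        "locale": random.choice(["en_US", "fi_FI", "sv_SE", "de_DE"]),
        "items_in_cart": random.randint(0, 5),  # vary the state, not just the identity
    }

print(fresh_user())
```

Even a throwaway script like this makes each day's walk through the same functionality a slightly different walk.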

Variables are everywhere, and the idea of Never Be Bored helps you notice when you've gotten stuck in your ways, in the steps through a minefield that minimize your chance of the lucky accident of finding something new.

In the intersection of automation and exploratory testing, we approach the things that make us bored a little differently. Looking at things primarily as an exploratory tester, not being bored means taking different paths, as exact repetition narrows the coverage we can achieve overall. Looking at things primarily as a test automation specialist, not being bored means that since there could and should be repetition, at least we don't personally have to be around for all of it. For great testing, we intertwine these perspectives into a great balance.



Having both perspectives intertwined makes the Never Be Bored heuristic work at its best. It makes our work more versatile, and our results more relevant and sustainable.