Friday, January 16, 2026

The Results Gap

Imagine you are given an application to test, with no particular instructions. Your task, implicitly, is to find some of what others have missed. If quality is great, you have nothing to find. If the testing done before is great, none of the things you find surprise anyone. Your work, given that application to test, is to figure out that results gap, and whether it exists in the first place.

You can think of the assignment as being given a paper with text written in invisible ink. The text is there, but it takes special skill to make it visible. If no one cares what is written on the paper, the intellectual challenge alone makes little sense. Finding some of what others have missed, of relevance to the audience asking you for information, is key. Anything extra is noise.

Back in the day on some projects, the results gap that we testers got to work with was very significant, and we learned to believe developers are unable to deliver quality and test their own work. That was a self-fulfilling prophecy. The developers "saving time" by "using your time" did not actually save time; it was akin to a group of friends eating pizza and leaving the boxes around if someone did not walk around pointing at the boxes and reminding them. We know we can do better on basic hygiene, and anyone can point out pizza boxes. There may be other information that not everyone will notice, but one reminder turned into a rule works nicely for making those agreements in our social groups. With that, the results gap became the surprises.

The results gap is the space between two groups having roughly the same assignment but providing different results. Use of time leads to the gap, because 5-minute unit testing and 50-minute unit testing tend to allow for different activity. Availability of knowledge leads to the gap, because even with time you might not notice problems without a specific context of knowledge. Access to production-like environments and experiences leads to the gap, both by not recognizing what is relevant for the business domain and by not even being able to see it due to missing integrations or data.

Working with the results gap can be difficult. We don't want to spend our time on testing that was already someone else's responsibility. Yet we don't want to leak the problems to production, and we expect the last group assigned responsibility for testing to filter out as much of what the others missed as possible. We do this best by sizing the results gap and making it smaller, usually through coaching and team agreements.

For example, realizing that by testing and reporting bugs, our group was feeding the existence of the results gap led to a systemic change. Reporting bugs by pairing to fix them helped fix the root cause of the bugs. It may have been extra testing effort for our group, but it saved significant time by avoiding rework.

The results gap is a framing for multiple groups' agreed responsibilities towards quality and testing. If no new information surprises you in production, your layered feedback mechanisms bring you good enough quality (scoping and fixing enough) with good enough testing (testing enough). Meanwhile, my assignments as a testing professional are framed in contemporary exploratory testing, where I combine testing, programming and collaboration to create a system of people and responsibilities where quality and testing leave less of a results gap for us to deal with.

Finally, I want to leave you with this idea: bad testing, without results, is still testing. It just does not give you much of any of the benefits you could get from testing. Exploratory testing and active learning transform bad testing into better testing. Coverage is focused on walking with the potential to see, but for results, you really need to look and see the details that the sightseeing checklist did not spell out.

Tuesday, January 6, 2026

Learning, and why agency matters

Some days Mastodon turns out to be a place of inspiration. Today was one of those. 

It started with me sharing a note from day-to-day work that I was pondering on. We had a 3-hour Basic and Advanced GitHub Copilot training organized at work that I missed, and I turned to my immediate team asking for 1-3 insights of what they learned at the session. I knew they had been at the session because I had approved hours that included being in that session.

I asked as a curious colleague, but I can never help being their manager at the same time. The question was met with silence. So I asked a few of the people one on one, to learn that they had been in the session but zoned out for various reasons. Some of the reasons included having a hard time relating to the content as it was presented, the French-English accent of the presenters, getting inspired by details that came in too slowly and spending time searching for information online on the side, and just that the content / delivery was not particularly good.

I found it fascinating. People take 'training' and end up not being trained on the topic they were trained on, to a degree that they can't share one insight the training brought them.

For years, I have been speaking on the idea of agency, the sense of being in control, and how important that is for learning-intensive work like software testing. Taking hours for training and thinking about what you are learning is a great way of observing agency in practice. You have a budget you control, and a goal of learning. What do you do with that budget? How do you come out, having used that budget, as someone who now has learned? It is up to you.

In job interviews, when people don't know test automation, they always say "but I would want to learn". Yet when looking back at their past learning in the space of test automation, I often find that "I have been learning in the past six months" ends up meaning they have invested time in watching videos, without being able to change anything in their behaviors or attain knowledge. They've learned awareness, not skills or habits. My response to claims of past learning is to ask for something specific they have been learning, and then to ask to see if they now know how to do it in practice. The most recent example in this space was me asking four senior test automator candidates how to run Robot Framework test cases I had in the IDE - 50% did not know how. We should care a bit more about whether our approaches to learning are impactful.

So these people, now including me, had the opportunity of investing 3 hours in learning GitHub Copilot. Their learning approach was heavily biased towards the course made available. But with a strong sense of agency, they could do more.

They could:

  • actively seek the 1-3 things to mention from their memories 
  • say they didn't do the thing but at the same time did Y and learned 1-3 things to mention
  • not report the hours as training if the video was merely playing while they did something completely unrelated
  • stop watching the online session and wait for the recording, to have control over speed and fast-forwarding to the relevant pieces
  • ...

In the conversations on Mastodon, I learned a few things myself. I was reminded that information intake is a variable I can control with a high sense of agency in my learning process. And I learned there is a concept of 'knowledge exposure grazing', where you snack on information, and it is a deliberate strategy for a particular style of learning.

Like with testing, being able to name our strategies and techniques gives us control over and explainability of what we are doing. And while I ask as a curious colleague / manager, what I really seek is more value for the time investment. If your learning teaches others in a nutshell, you are more valuable. If your learning does not even teach you, you are making poor choices.

Because it's not your company giving you the right trainings; it's you choosing to take the kinds of trainings, in the style, that you know work for you. Through experimentation you learn which variables you should tweak. And that makes you a better learner, and a better tester.

Saturday, January 3, 2026

The Words are a Trap

Someone important to me was having a bad day at work, and sent me a text message to explain their troubles. Being in a completely different mindspace, working on some silly infographic where the loop to their troubles may exist but comes with a longer leash than necessary, instead of responding to what I had every chance of understanding, I sent them the infographic. They were upset, I apologized. We are good.

No matter how well we know each other, our words sometimes come off differently than our intentions. Because communication is as much saying and meaning as it is hearing and understanding.

Observing the text of people like those who are Taking testing! Seriously?!? and noting the emphasis they put on words leaves me thinking that no matter how carefully they choose their words, I will always read every sentence with three different intentions, because I can control that, and they can't. Words aren't protected by definitions; they are open to the audience's interpretation.

I am thinking of this because today, online, I was again corrected. I should not say "manual testing"; the kind of poor quality testing work that I describe is not testing, it's checking. And I wonder, again, why smart people in testing end up believing that correcting the words of the majority leads to them getting the difference between poor quality and good quality testing, and the factors leading up to it.

A lot of client representatives I meet also correct me. They tell me the thing I do isn't testing, it's quality assurance. Arguing over words does not matter; the meaning that drives the actions matters.

Over my career I have been worried about my choice of words. I have had managers I needed to warn ahead of time that someone, someday, will take offense and escalate, even out of proportion. I have relied on remembering things like 'nothing changes if no one gets mad' (in Finnish: mikään ei muutu jos kukaan ei suutu - a lovely wordplay). Speaking your mind can stir a reaction that silence avoids. But the things I care for are too important to me to avoid the risk of conflict.

I have come to learn this: words are a trap. You can think about them so much that you are paralyzed from taking action. You can correct them in others so much that the others don't want to work with you. Pay attention to behaviors, results, and impacts. Sometimes the same words don't work from you, but from your colleague they do.

We should pay attention more to listening, maybe listening deeper than the words, for that connection. And telling people that testing is QA or that testing is checking really doesn't move the world to a place where people get testing or are Taking testing... Seriously.