Friday, July 23, 2021

Ensemble Programming as Idea Integration

This week at the Agile 2021 conference, I took in some ideas that I am still processing. Perhaps the most pressing of those came from a post-talk question-and-answer session with Chris Lucian, where I picked up the following details about ensemble (mob) programming at Hunter Industries:

  • Ideas of what to implement generally arrive at their team more ready-made than would be my happy place - I like to do a mix of discovery and delivery work, and would find myself unhappy with discovery being someone else's work. 
  • Optimizing for flow through a repeatable pattern is a focus: from scenario example to TDD all the way through. The talk's focus is on habits, as skill both gets built into habits and is overemphasized in the industry.
  • A working day for a full-time ensemble (mob) has one hour of learning in groups and 7 hours of other work, split into rotations with pauses to retrospect and breaks. Friday is a special working day with two hours of learning in groups. 
The learning time puzzled me in particular - it is used for pulling in knowledge others have, improving efficiency and looking at new tech. 
A question people seem to ask a lot about Ensemble Programming (and Testing) is whether it is something we do full-time, and that is exactly what it is, as per accounts from Hunter Industries, where the practice originated. Then again, with all the breaks they take, the learning time and the continuous stops for retrospective conversations, is that full-time? Well, it definitely fills the day and sets the agenda for people, together. 

This led me to think about individual contributions and ensembling. I do not come up with my best ideas while in the group. I come up with them when I sit and stare at a wall. Or take a shower. Or talk with my people (often other than the colleagues I work with), explaining my experience of trying to catch a thought. The best work-related ideas I have are individual reflections that, when I feel welcome, I share with the group. They are born in isolation, fueled by time together with people, and implemented, improved and integrated in collaboration. 

Full 8-hour working days with a preset agenda would leave thinking time to my free time - or require a change in how the time is allocated so that it fits. With so many retrospectives and a focus on kindness, consideration and respect, things do sound negotiable, as long as one does not fold under the group's differing opinions. 

I took a moment to rewatch Susan Cain's amazing talk on introverts. She reminds us: "Being the best talker and having the best ideas has zero correlation." However, being the worst talker and having the best ideas also has zero correlation. If you can't communicate your ideas and get others to accept them and even amplify them, your ideas may never see the light of day. This was a particularly important lesson for me on Ensemble Programming. I had great ideas as a tester who did not program, but many - most - of my ideas did not see the light of day.

Here's the thing. In most software development efforts, we are not looking for the absolute best ideas. But it would be great if we didn't miss out on the great ideas of the people we hired just because we don't know how to work together and hear each other out.

And trust me - everyone has ideas worth listening to. Ideas worth evolving. Ideas that deserve to be heard. People matter and are valuable, and I'd love to see collaboration as value over competitiveness.

The best ideas are not created in ensembles; they are implemented and integrated in ensembles. If you can’t effectively bind together the ideas of multiple people, you won’t get big things done. Collaboration is aligning our individual contributions while optimizing learning so that the individuals in the group can contribute their best. 








Tuesday, July 20, 2021

Mapping the Future in Ensemble Testing

Six years ago when I started experimenting with ensemble testing, one of the key dynamics I set a lot of experiments around was *taking notes* and through that, *modeling the system*. 

At first, I used that notetaking/modeling as a role in an ensemble, rotating in the same cycle as the other roles. It was a role in which people were lost. Handing over a document that you had not seen and trying to continue from it was harder than the other activities, and I quickly concluded that for an ensemble to stay on a common problem, the notes/model needed to be shared. 

I also tried a volunteer notetaker who would continuously describe what the ensemble was learning. I noticed a good notetaker became the facilitator, and generally ended up hijacking control from the rest of the group by pointing out, in a nice and unassuming way, what level of conversation we were having. 

So I arrived at the dynamic I now start with in all ensemble testing sessions. Notetaking/modeling is part of testing, and the Hands (driver) does the notetaking at the request of the Brains (designated navigator) or the Voices (other navigators). Other navigators can also keep their own notes of information to feed in later, but I have come to understand that in a new ensemble, they almost never will, and it works well for me as a facilitator to occasionally make space for people to offload the threads they model inside their heads into the shared visible notes/model. 

Recently I have been experimenting with yet another variation of the dynamic. Instead of a notes/model that we share as a group and make visible through the Hands, I've allowed an ensemble to use Mural (a virtual post-it wall) in the background to offload their threads, with a focus on mapping the future they are not allowed to work on right now because of the ongoing focus. It shows early promise of giving the Voices something extra to do - a way of using their voice that isn't shouting their ideas on top of what is ongoing, but improving something that is shared.

Early observations say that some people like this, but it skews the idea of us all being on this task together and can cause people to find themselves unavailable for the work we are doing now, dwelling in the possible future. 

I could use a control group that has ensembled together longer; my groups tend to be formed for a teaching purpose, and the dynamics of an established ensemble are very different from those of a first-time ensemble. 

Experimenting continues. 



Wednesday, July 14, 2021

Ensemble Exploratory Testing and Unshared Model of Test Threads

When I first started working on specific adaptations of Ensemble Programming to become Ensemble Testing, I learned that it felt a lot harder to get a good experience on an exploratory testing activity than on a programming activity, like test automation creation. When the world of options is completely open, and every single person in the ensemble has their own model of how they test, people need constraints that align them. 

An experienced exploratory tester creates constraints - and some even explain their constraints - in the moment to guide the learning that happens. But what about when our testers are not experienced exploratory testers, nor experienced in explaining their thinking? 

When we explore alone, we start somewhere, and we call that the start of a thread. Every test where we learn creates new options and opportunities; sometimes we *name a new thread* yet continue on what we were doing, sometimes we *start a new thread*. We build a tree of these threads, choosing which one is active, and manage the connections that soon start to resemble a network rather than a tree. This is the model that guides our decisions on what we do next, and on when we will say we are done. 
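To make the thread model concrete, here is a toy sketch in Python. Every name in it (`Thread`, `start`, the example threads) is mine for illustration - the real model usually lives only in a tester's head:

```python
# A toy model of exploratory testing threads: each thread remembers where
# it branched from, and cross-links soon turn the tree into a network.
class Thread:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent  # the thread this one branched from
        self.links = []       # cross-connections: tree becomes network
        self.done = False

session = []

def start(name, parent=None):
    thread = Thread(name, parent)
    session.append(thread)
    return thread

# A session: start somewhere, name new threads as tests teach us things.
login = start("log in as a new user")
expiry = start("session expiry", parent=login)       # branched off login
limits = start("input length limits", parent=login)  # named, not yet active
expiry.links.append(limits)  # a discovery connects two threads

# The model guides what to do next, and when we can say we are done.
open_threads = [t.name for t in session if not t.done]
print("open threads:", open_threads)
```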

The model of threads is a personal thing we hold in our heads. And when we explore together in ensemble testing, we have two options:

  1. We accept that we have personal models that aren't shared, that could cause friction (hijacking control) 
  2. We create a visual model of our threads
The more we document - and modeling together is documenting - the slower we go. 

I reviewed various ensemble testing sessions I have facilitated, and noticed an interesting pattern. The ensemble was more comfortable and at ease with their exploratory testing if I first gave them a constraint of producing visible test ideas before exploring. At the same time, they generally found fewer issues to start conversations on, and held more strongly to the premature assumptions they had made of the application under test. 

Over time, it would be good for a group to create constraints that allow for different people to show their natural styles of exploratory testing, to create a style the group shares. 

Wednesday, July 7, 2021

Working with Requirements

A long, long time ago I wrote down a quote that I never manage to remember when I want to write it down. Being in my top 10 of things I go back to, I should remember it by now. Alas, no. 

"If it's your decision to make, it's design. If it's not, it's a requirement." - Alistair Cockburn

Instead, I have no difficulty recalling the numerous times someone - usually one of the developers - says that something *was not a requirement*; their number is overwhelming. With all these years working to deliver software, I think we hide behind requirements a lot. And I feel we need to reconsider what a requirement really is. 

When our customers ask us for things they want in order to buy our solution, there's a lot of interpretation around their requirements. I have discovered we do best with that interpretation when we get to the *why* behind the *what* they are asking, and even then, things are negotiable much more often than not. 

In the last year, requirements have been my source of discontent and concern. In the first project we delivered together, we had four requirements and one test case. And a thousand conversations. It was brilliant, and the conversations still pay back today. 

In the second project we delivered together, we had more carefully isolated requirements for various subsystems, but the conversation was significantly more cumbersome. I call it a success that 90% of the requirements vanished a week before delivery, while the scope of the delivery was better than those requirements led us to believe. 

In another effort in the last year, I have been going through meticulously written-down requirements and finding that the requirements make it harder to understand what we are building and why.

Requirements, for the most part, should truly be about the decisions we can't make. Otherwise, let's focus on designing for value. 


Thursday, July 1, 2021

Learning while Testing

You know how I keep emphasizing that exploratory testing is about learning? Not all testing is, but to really do a good job of exploratory testing, I would expect learning to be centered. Learning to optimize the value of our testing. But what does that mean in practice? Jenna gives us a chance to have a conversation on that with her thought experiment: 

When I first came across Jenna's thought experiment, I was going to pass on it. I hate being on the spot with exercises where the exercise designer holds the secret to what you will trip on. But then someone I admire dared to take the challenge on in a way that did not optimize for *speed of learning*, and this made me wonder what I would really respond.

It Starts with a Model

Reading through the statement, a model starts to form in my head. I have a balance of some sort that limits my ability to withdraw money, a withdrawal of some sort that describes the action I'm about to perform, ATM functionality of some sort, and a day that frames my limitation in time. 

I have no knowledge on what works and what does not, and I don't know *why* it would matter that there is a limit in the first place. 

The First Test

If I first test a positive case - having more than that $300 on my balance, withdrawing $300, expecting I get the cash in hand, and then trying the smallest sum on top of that that the ATM's functionality allows for - I would at least know the limit can work. But that significantly limits anything else I can learn on the first day. 
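To make that first test concrete, here is a minimal sketch in Python. Everything in it - the `Atm` class, its methods, the limit being enforced this way - is a hypothetical stand-in for whatever the real system offers, not Jenna's setup:

```python
# A toy model of the first positive test against a $300 daily limit.
# The Atm interface here is invented for illustration only.
class DailyLimitExceeded(Exception):
    pass

class Atm:
    DAILY_LIMIT = 300

    def __init__(self, balance):
        self.balance = balance
        self.withdrawn_today = 0

    def withdraw(self, amount):
        if self.withdrawn_today + amount > self.DAILY_LIMIT:
            raise DailyLimitExceeded()
        self.balance -= amount
        self.withdrawn_today += amount

def test_first_positive_case():
    atm = Atm(balance=1000)   # more than $300 on the balance
    atm.withdraw(300)         # the full limit comes out as cash
    try:
        atm.withdraw(20)      # the smallest sum on top must be blocked
        raise AssertionError("expected the daily limit to block this")
    except DailyLimitExceeded:
        pass                  # the limit can work - that is all we learned

test_first_positive_case()
```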

I would not have learned anything about the requirement though. Or risks, as the other side of the requirement. 

But I could have learned that even the most basic case does not work. That isn't a lot of learning though. 

Seeing Options

To know if I am learning as much as I could, it helps if I see my options. So I draw a mindmap. 

Marking the choices my first test would be making makes it clear how I limit my learning about the requirements. Every single branch in itself is a question of whether that type of a thing exists within the requirements, and I would know little other than what was easily made available. 

I could easily adjust my first test by at least giving myself a tour of the ATM's functionality before completing my transaction. And covering all the ways I can imagine of going over the limit after that first transaction reaches it would lift some of the limitations I first see as constraining my learning over time. 

Making choices on learning

To learn about the requirement, I would like to test things that would teach me around the concepts of what scope the limit pertains to (one user / one account / one account type / one ATM) and what assumptions are built into the idea of having a daily limit with a secondary limit through balance. 

For all we know, the way the requirement is written, it could be an ATM-specific withdrawal limit! 
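The scope question can be made concrete too. In a sketch reusing the illustrative names from above, the moment we probe scope, the toy model has to decide where the daily counter lives - which is exactly the assumption the requirement leaves open:

```python
# Probing scope: does the daily counter live on the account or on the ATM?
# Illustrative names only; the real system answers this for us.
class DailyLimitExceeded(Exception):
    pass

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.withdrawn_today = 0  # counter on the account: limit per account

class Atm:
    DAILY_LIMIT = 300

    def withdraw(self, account, amount):
        if account.withdrawn_today + amount > self.DAILY_LIMIT:
            raise DailyLimitExceeded()
        account.balance -= amount
        account.withdrawn_today += amount

def test_limit_follows_account_across_atms():
    account = Account(balance=1000)
    atm_a, atm_b = Atm(), Atm()
    atm_a.withdraw(account, 300)
    try:
        atm_b.withdraw(account, 20)  # a different machine, same account
        raise AssertionError("the limit did not follow the account")
    except DailyLimitExceeded:
        pass  # in this toy model the limit is per account, not per ATM

test_limit_follows_account_across_atms()
```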

Hands on with the application, learning, would show us which of these frames its behavior fits in, and without time to first test things out, I would just want to walk away at this point. 


Wednesday, June 30, 2021

Social Programming Approaches

A year ago, I was preparing for a keynote coming up in autumn 2020 where I had promised to talk about social software testing approaches. Organizing my experiences, I wrote an article for bbst.courses and started the work of replacing mob and mobbing with ensemble and ensembling. 

I described four prominent social software testing approaches, two for groups and two for pairs. 

  • Ensemble testing 
  • Bug bash
  • Traditional pair testing
  • Strong-style pair testing
Today, I started thinking through the programming equivalents of these, and how I have come to make sense of the concepts. I have long ago accepted that everyone will use words as they please, and more words explaining how we think around the words is usually better. Today I wanted to think through:
  • Traditional and Strong-Style Pair Programming
  • Ensemble programming
  • Swarming
  • Code retreats
  • Coding dojos
  • Hackathons and Codefest
  • Bug Bust

Bug Bust is a new type of event that resembles a Bug Bash on the testing side, focused on cleaning up easily identifiable code bugs. I have not seen anyone run one yet, but I noticed AWS is setting this up as a thing to do, and time will tell if it turns out to be popular. 

Hackathon is the closest programming equivalent to bug bashes on the testing side. Hackathons come to programming with the idea of creating a program in a limited timeframe, usually in a competitive setting. A hackathon comes with a prize, seeks survival of the fittest solution, and seeks to impress. I generally hate it for its competitive nature. It says there is a team, but how the team works is open. The competition is between teams. Similar time-boxed things without the competitiveness have been dubbed "20% time" and codefest [Hilkka Huotari writes about this here in Finnish], still seeking the idea of bringing emergent small teams together, for a limited amount of time, on something uncertain yet relevant but a bit off the usual path of what we're working on.

Coding dojos are practice-oriented group sessions. Having experienced some, I have come to think of them as pair working and group watching, with rotation. Truth be told, coding dojos are a predecessor to ensemble programming. Coding dojos usually happened in combination with code katas, rehearsal problems we would solve in this format. 

Code retreats are also practice-oriented group sessions, organized around repeatedly solving the same problem in code under various constraints. The usual problem, which I have experienced some tens of times now, is the Game of Life, and there are whole books on constraints we could try and introduce. A constraint could be something like taking steps so small that tests pass and get checked in every 2 minutes, or that the solution needs to seek a simpler, smaller step. The work is done in pairs, pairs are switched between sessions, and the learning experience is framed with regular retrospectives to cross-pollinate ideas and experiences. 

Swarming is a teamwork method that I differentiate from ensemble programming with the idea that it brings together subgroups rather than teams, and is inherently temporary in nature. The driving force for swarming is a problem that needs attention from multiple people. I have come to swarming from the context of kanban boards and work-in-progress limits, where one of the activities we model needs special attention, and swarming is a way of ensuring we can get the work moving from where it has been piling up. 

Ensemble programming is about the entire team working together on one task at a time. Working on the same task usually means sharing one keyboard. A new group ensembling looks an awful lot like a coding dojo, with guardrail rules to enforce a communication baseline, except we may also be working on production-code-related tasks, not just rehearsal problems. A seasoned group communicates on an entirely different level, and the dynamic looks different. 

Strong-style pairing is ensemble programming as a pair. It asks for the same rule the ensemble has: ideas and decisions come from the person off the keyboard.

Traditional pairing is the pairing where the one on the keyboard is doing the programming and the other, off the keyboard, is reviewing and contributing.

We have plenty of options for social programming available to us. Social programming is worthwhile because we don't know what we don't know, and sharing the context of doing something together brings us those serendipitous learnings, in the context of doing. How much and what kind of social programming you sprinkle into your teams is up to you. 






Tuesday, June 1, 2021

Scaling by the Bubble

At work, there are many change programs ongoing, to an extent that it makes me feel overwhelmed. 

We have a change program I walked away from that tries to change half the organization by injecting a process with Jira. I walked away as I just felt so disconnected from the goals, and decided that sleeping better and being true to my values would always win over trying to fix things at some kind of scale. My walking away gives those who continue a better chance of completing it, and we can come back to reflect on the impacts at some appropriate scale.

We have a change program I just volunteered with, seeking the benefits of platforms and product lines, and I still believe it could be a nice forum of like-minded people to figure out alignment. 

And we have a change program where we audit and assess teams on their implementations of whatever the process is, giving me a chance to also consider the relationship of process and results. I volunteer with that one too.

But in general, I have come to understand that I make major changes in organizations in a very different style than what we usually see. And as I just listened to Woody Zuill mention the same style, giving it the name 'the Bubble', I felt I needed to write about scaling by the Bubble.

The basic idea, as I see it, with 'the Bubble' is that instead of starting where it is hard - at scale - we start where it is possible. Injecting someone like me into a team that needs to change towards continuous delivery, modern agile and an impact/value-oriented way of working, with streams of value that can be reused over products, is an intervention introducing the start of the Bubble. 

My bubble may start with "system testing", but it soon grows to "releases", "customer experience", "requirements", "conversations" and through "technical excellence" to "developer productivity". Instead of planning the structure we seek, we discover the structure by making small shaping changes continuously. We protect the change by 'the Bubble', creating interfaces that simplify and focus things in the bubble. And we grow the bubble by sharing results that are real, recent and local to the organization. 

Having been around in organizations, I see too many top-down improvements (failing) and not enough bubble-based improvements. 

My bubble is now trying to change, over the next two years, culture and expectations at scale. Every day, every small improvement, makes tomorrow a little better than today. Continuous streams make up great changes. 


Saturday, May 29, 2021

Scale - Teaching Exploratory Testing

At work, I hold three roles within the position of a principal test engineer: a tester, a test project facilitator and a test competence facilitator. I have 37.5 hours a week to invest in all three, and while my productivity has soared compared to the early days of my career, the days feel just as short now as they ever did. 

Some of my hobbies resemble my work, but they reach outside the organization: speaking, writing, organizing, teaching, reading. I deliver an occasional paid training and paid keynote. I write my book. I try to record my podcast with limited success. I structure my thoughts into talks and articles.

At this point of my life and career, I chose my theme of the year to be scale. Scale of the impacts I induce at the work I care for. Scale of teaching forward what I have learned. Enabling others at scale. 

With close to 450 sessions delivered for audiences outside my work, more people know me than I recognize. Things like someone telling me when we first met, while I can place them only from the pairing we did in the last month, are all too common. It's not that I forget people; it's that I never came to remember everyone in the first place. 

As I reflect on my goal of scale, I come to an idea of what scale would look like for my teaching. It would look like me teaching a network of teachers who teach forward. I already took some steps towards this by launching https://www.exploratorytestingacademy.com, where all course material is Creative Commons Attribution, allowing you all to use it for your business, even to make a business of your own. 

I make time to teach free courses every now and then, like during the Exploratory Testing Week. I make time to teach commercial courses every now and then as my side job, but my availability is limited, as I love my day job and the assignment that allows me choices in transforming quality and testing. 

I need other people, willing to teach, to whom I would teach my exercises and my materials, and who would adjust them to what they want to take forward. I have a lot of theory and example material, as slides. Like with Exploratory Testing Foundations, I make them available at Exploratory Testing Academy. But I also have a lot of experiential exercises, where facilitating and framing the exercise is where the value for learning is best created. 

Would you want to learn to teach experiential testing exercises? 

I envision a series of sessions where we would first experience the exercise as participants, but then turn the roles around - first looking at what facilitating such an exercise means, and then practicing facilitation while I support, watch and give feedback afterwards. I haven't done this yet, so we could discover what works together. 

You could learn to teach different testing experiences with different applications. I use:

  • a Textbox
  • E-Primer
  • Weather App
  • Gilded Rose
  • Zippopotamus
  • Dark Function Editor
  • Freemind
  • Xmind
  • Conduit
  • ApprovalTests
  • RobotFramework
I also have exercises on understanding your tester personality, agile incremental test planning, test strategy, test retrospective for release and feature, business value and many many more I would be happy to pass on. 

Interested? Let me know. You might also tell me what you'd like to start with, because all the exercises I have created since 2001, when I started teaching on the side, would fill a few years to go through on the side of a regular job. For prioritizing my time, I ask you to consider my goal - scale. Could you help me with that? If your answer is yes, I'm going to trust you and dedicate some time to help you learn this. Send me an email: maaret(at)iki.fi.

In case it isn't already clear, I am not looking to invoice anyone to teach them. I will volunteer my time for free within constraints of what I can make available. I want more people in the world to experience experiential learning and for myself to make an impact. 

Friday, May 28, 2021

A New Experiential Exploratory Testing Exercise on Focusing Feedback on Our Own Application

Over the years of first learning testing myself and then teaching testing to strengthen what I have learned, I have come to appreciate an experiential style of learning. Instead of me lecturing on how testing is like detective work, where we are supposed to find all the information others around us are missing about quality, I prefer us doing testing to learn this.

For this purpose, I have created many starting points for us to experience testing together. I find that the selection of the application we will test guides the lessons we are able to pull out. Every single application has something in common with the others, but each also forces thinking that is different in some relevant way. Today I wanted to talk about one of these experiential exercises, to give you an example of a favorite. 


Then again, what makes a favorite? Some of the exercises I have been adding most recently get my attention. This week I added an exercise on targeting our testing on our application when there’s our part and a 3rd party API. 


I tested solo on Saturday. I paired with Irja Straus from Croatia on Monday. And I paired with Hamid Riad from Sweden on Wednesday. And as always with new targets, while I had a rough idea of what I would want to teach, hands-on exploratory testing taught me things that surprised me. 


When I tested solo, I set up the application. Like with most open source applications, the setup wasn’t straightforward, and before I could run it on my machine with a good conscience, I had to update many javascript dependencies to get to a comfortable state without critical security warnings. Similarly, I set up the mocks I needed for the exercise, and solved issues around CORS. CORS is a browser security function that is supposed to protect us a bit from bad implementations, and it had apparently been introduced after the application was created. 
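As a hedged illustration of what "solving issues around CORS" often looks like during local testing - this is a generic Flask sketch with an invented origin URL and route, not the Realworld app's actual stack:

```python
# A minimal sketch of one common local-testing fix for CORS: the backend
# explicitly opts in to requests from the frontend's origin. The origin
# and route below are illustrative assumptions.
from flask import Flask

app = Flask(__name__)

@app.after_request
def allow_local_frontend(response):
    # Browsers block cross-origin responses unless the server opts in.
    response.headers["Access-Control-Allow-Origin"] = "http://localhost:4100"
    response.headers["Access-Control-Allow-Headers"] = "Content-Type, Authorization"
    response.headers["Access-Control-Allow-Methods"] = "GET, POST, PUT, DELETE, OPTIONS"
    return response

@app.route("/api/ping")
def ping():
    return {"pong": True}
```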


What I thought would be a solo exploratory testing session, I soon discovered, would become a session of updating documentation, fixing dependencies and fixing the project in general to get it running. I finished my tests by playing barely enough to know the API had some intelligence and was not just a REST frontend to a database. An interesting 3rd party API promises well for my exercise on steering the exploratory tester's focus *away* from the interesting API, into our application. 


When I paired with Irja, we got to the focus on our application very easily. From a simple demo, we got to using the mocks to test, and really left the API alone for most of the session, except for some comparison results and a deep focus on the promises of functionality the API specification had to offer. We learned about value dependencies, what worked and what didn’t. We ended up reading code to know what ‘works as implemented’ means for this application - including the application calling the API with a typo and reimplementing logic the API would encapsulate. Starting from knowing nothing about the application, I learned about the data we would need to take control over as we explored, forcing our front-end to show us functionality that matched the data. We found domain concepts that weren’t implemented and problems with what was implemented, and reported our discoveries. 


When I paired with Hamid, we first fell into the lure of the interesting API, playing with values from the user interface and testing the 3rd party API more than our own application. Even though I first thought that was not what I wanted out of this exploratory testing exercise, I soon learned it was exactly what I wanted. With each of us in the pair building on the ideas of the other, we learned how the API deals with mistaken inputs and what input validation our application should have. The time we spent on the 3rd party API paid back when we then turned our focus to our application. While the previous session gave me experience with the API specification and set my expectations based on that, this session found interesting but different problems. We also fixed some bugs while at it. 


I’m planning to test the exercise as I now envision it at the Socrates UK Digital open space conference next week. This time I would run it as an ensemble testing exercise, see where the group goes with the constraint and facilitation, and, like at the end of every experiential session, pull out the lessons the group can share. 


I have other exercises that teach other things. I used two full training days this week facilitating people through various exercises as I was teaching my exploratory testing work course. I’m a big fan of experiential learning, and of the idea that in general we learn testing by doing testing. We can inspire and motivate, make sense and clarify models with the material around the exercises. Yet discovering your lessons - my intended core lessons and some surprises - is immensely valuable. 


It’s the kind of teaching I want to figure out how to scale. I can scale talks of theory and experiences. But I am yet to find out how to scale experiential teaching of the lessons I have created.

Sunday, May 16, 2021

Testers Need to Read Code

Moving from organization to organization, and team to team, I find that there is always a communication gap between testers and developers. The gap does not mean a lack of respect, or not getting along. It means that there is information one party holds that the other could use but doesn't get. 

Year after year, I have watched organizations advise *developers* to write guidance for testers into Jira tickets, to link documentation and to provide demos as part of the tiny handoffs. I have watched the community of testers advise *testers* to ask for that documentation and those demos, and to ask questions. 

Yet, I find myself looking at a gaping hole in a lot of teams and organizations around the role of *code* in these handoffs. 

Cutting to the chase: I believe all testers need to start reading code.

And when I say all testers, I don't say this lightly. I have worked extensively with business testers doing acceptance testing from the point of view of domain expertise. I have worked extensively with career testers working side by side with developers. I have worked extensively with testers specializing in creation of test systems. And I've heard this from them all: time is limited, and we need to make choices where we spend our time. 

First, let's talk about code: reading and writing of it. 

Learning Programming Through Osmosis -talk, 2016

We keep forgetting that none of the programmers we have in teams know everything about programming. They usually know enough to make progress by googling the things they need to complete the task at hand. And they know more tomorrow than they did today - every task we complete grows us a little more. 

Thus when I call for *all testers* to *read code*, I don't assume fluency from day 1. I don't assume we all see the same things. But starting on that journey of reading creates skills we need to do the tester work better. 

Think of it as if you were peeling layers:

  • Start with CHANGE in mind - change matters for prioritizing the testing you do
    • Read commit messages and pay attention to the component that was changed. 
    • Read contents for size to start building your own idea of the relationship of size, content and impact on testing
    • Read the names of the things that change, instead of the details that change, to target your testing at concepts
    • Ask some questions about what you see changing
    • Combine this new source of information with your hands-on experience of testing the versions with the changes, and build a model that supports your prioritization
  • Deepen with IMPORTANCE in mind - know which parts of the implementation matter
    • Read folder structures to understand the models built in code
    • Read to notice frequent change over stable function
    • Read to notice many contributors over one (a small sketch of both signals follows this list)
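Here is a minimal sketch of what counting those last two signals could look like, assuming Python and a local git checkout; the `@` marker for author lines is my own parsing convention, not anything git provides:

```python
# Count, per file, how often it changed and how many distinct authors
# touched it - "frequent change" and "many contributors" as raw numbers
# to start conversations with, not as verdicts.
import subprocess
from collections import defaultdict

log = subprocess.run(
    ["git", "log", "--name-only", "--pretty=format:@%an"],
    capture_output=True, text=True, check=True,
).stdout

changes = defaultdict(int)
authors = defaultdict(set)
current_author = None
for line in log.splitlines():
    if line.startswith("@"):
        current_author = line[1:]  # an author line from the format string
    elif line.strip():
        changes[line] += 1         # a file path changed in that commit
        authors[line].add(current_author)

# The 20 most frequently changed files, with contributor counts.
for path in sorted(changes, key=changes.get, reverse=True)[:20]:
    print(f"{changes[path]:4d} changes, {len(authors[path])} authors: {path}")
```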
Reading code does not mean reviewing code. You may eventually want to review code and leave your "approved" or "needs work" notes, but it is by no means a starting requirement. 

The starting requirement is that instead of trusting secondary, agreed-upon sources for information about what you have at hand for testing, you use code as a source of information. Mostly, things don't change without the code changing *somewhere in the ecosystem*, and seeing the change helps you target testing.

It is not sufficient to test by discovering all the promises the software fulfills and mechanically going through them again and again, inspired by what the Jira ticket requests as a change. Reading code gives you an added layer. 

It's a three-layer prioritization problem:
  • understand the width of what it does for all types of stakeholders
  • understand the business to narrow it down to what is important 
  • understand the tech to localize the search
Like with learning to read and write, we start with reading first. Some of us never learn to write that best-selling book. 

What I ask for is not a big thing; you could try 15 minutes a day, starting with asking where to go to find this information. 



Friday, May 7, 2021

Let down by the Realworld app

This week, I finally got to making space in my calendar to pair up with Hamid Riaz on starting to create a new exploratory testing course centered around yet another test target: the Realworld App. Drawn in by the promise of the "mother of all demo apps" and the lovely appearance of a programmer community who had contributed example frontends and backends for the demo app in so many sets of technology, I had high expectations. I wanted to create a course that would teach the parts of exploratory testing that are hard and that we only get to in good agile teams that have their quality game in shape.

Instead, I was reminded again that demo apps without the active agile team, no matter how good the original developer may have been, never get to the point of having their quality game in shape. 

The first example implementation I picked up from the list was an end-to-end testing demo setup, only to learn it had been years since it was last updated (sigh), and the most basic of the instructions on how to get it running relied on half docker (good), half local OS - and the local OS was expected to be anything but what I had: Windows. While I have worked my way through setting up all too many projects that expect mac/linux on my work Windows machine, I did not feel up for the task today. So, next option. 

Next I picked up a nodejs implementation, no dockerizing involved but I could add that to make it more isolated for anyone testing after me on the course. At this point Hamid joined me. 

Without too much effort, we got through installing Mongo and all the dependencies needed. We installed Postman, imported the tests and eventually also the environment variables provided, and ran the tests, only to note that some of the functionality that was supposed to be there was no longer there - the years between the last changes and the latest MongoDB seemed to do the trick of making the application fail, and we were not told which version the code expected. 

After the pairing, I summed up on twitter: 

I should have known then. Software that is not maintained is not a worthwhile target for realworld testing. 

When office work the next day added to the inspiration - a reminder that results of testing don't stay valid for long even when you don't change anything, let alone when you do - I concluded: 

My search for a worthwhile test target to teach testing - one that does not succeed merely through the sloppiness of the org that created it - still continues. 

 

Thursday, May 6, 2021

Pink Tax on Access to Agile Heroes

It is a year of celebration, 20 years since the Agile Manifesto. We see people coming together to discuss and present time and events leading up to it, reflect on the time and events after it, and aspire for futures that allow for better than we ever had. 

Today, one of those events popped up on my timeline on Twitter. 

I was cautiously excited about the idea of hearing from some of the early agile heroes who were around but not at Snowbird. Until I clicked on the link to realize that access to my heroes, so rarely available, is a paid event, while I have heard the perspective of those at Snowbird amplified in large-scale free online events almost a little too much this year. 

I have two problems with this, with *Agile Alliance* in particular. First of all, by paywalling this early group they limit the public's access to it - and that access was already limited by the group not being at Snowbird. 

Second, asking this money from people like myself, who really want to hear from my agile heroes, is a form of pink tax. The pink tax refers to the broad tendency for products marketed specifically toward women to be more expensive than those marketed toward men.
You know, that idea of how things that are for women make good business by being more expensive. Because women are not a minority - we are half of the world - and we want things that are not made for the default of people: men. And I do deeply crave to hear that people like me, my heroes, were around when I was around, even if they are missing from the visible history. 

Being a woman, you get to pay more for physical items with the excuse of production costs. You get to pay more for virtual items in games.

Agile Alliance could do better at promoting access to the early heroes who did not make it to Snowbird. 


Please note: I don't say that the people speaking are *women*. I say I am a woman. I did not check how the people on that list identify. I know only some of them. 

Thursday, April 29, 2021

I don't trust your testing and how to change that

Over the years, I have been senior to a lot of testers. I've been in the position to refuse to give them work. I have been in the position to select what their primary focus in teams becomes. I've been in the position to influence whether they continue working on the same thing. 

Being in a position of this kind of influence does not mean you have to be their manager. It means you need to have the ear of the managers. So from that position, inspired by the conversations we had in the Exploratory Testing Academy Ask Anyone Anything session, I am going to give you advice on how to work to be appreciated by someone like me. (Note: it means NOTHING really, but it's a thought exercise.)

Let's face it: not all testers are that great. We all like to think we are, yet we don't spend enough time thinking about what makes us great. 

So let's assume I don't trust your testing and you want to change that - a tongue-in-cheek guide to how. 

1. Talk about your results

I know I can search for your bugs in Jira, and no matter how much folks like me explain that not everything is there, in reality a lot of it is there in organizations that believe work does not exist without a card in Jira. Don't make me go and search. 

Talk about your results:

  • the information you have found and shared with the developers ("bugs")
  • the information you have found and could teach others ("lesson learned", "problems solved")
  • the changes in others you see your questions are creating
  • the automation you wrote to test, the automation you rewrote to test, the automation you left behind for regression purposes
  • the coverage you created without finding problems
Small streams, continuously, consistently. 

2. Create documentation in a light-weight manner

You can lose my trust with both too much and too little. Create enough, and especially, create the right kind. Some of it is for you, some of it is for your team. 

If you like tracking your work with detailed test cases, I don't mind. But if creating test cases does not intertwine with testing, and if it keeps your hands off the application more than on it, I will grow suspicious. Results matter - make your results tell the story that what you create for yourself makes sense for you. 

If others like using your detailed test cases, I will also make you responsible for their results. I have watched good testers go bad with other people's test cases. So if you need detail and make detail available for others, I expect you to care that their results don't suffer. Most likely I'd be looking out for how you promote your documentation for others and be inclined towards suspicion. 

If you don't create documentation, it is just as bad. The scout rule says you need to leave things easier for those who come after you and I expect you to provide enough for that. 

3. Don't get caught being forgetful

A good tester needs to track information and tasks on a lot of different levels. If you keep forgetting things and you don't have your own system for recalling them, this raises a concern. 

I don't want to be the one reminding you to do your work. Once is fine; if I'm repeatedly reminding you, you should recognize you have something to work on. Like prioritizing your work to manageable amounts. Or offloading things into notes that help *you*. You are responsible for your system, and if you are not, trust will soon fade away. 

If you create a list of things to test, test them. If you don't test them, tell me about it. And never ever tell me that you did not know how, did not ask for help, and let bugs slip just because you didn't work your way through a rough patch. 

4. Do your own QA of testing

After you tested, follow through what others find. Don't beat yourself up, but show you make an effort to learn and reflect. Have someone else test with you or after you. Watch later what the customers will report. Make an effort to know.

5. Co-create with me or pretty much anyone

Like most people, I have a self-inflated sense of self and I like the work I have done. Ask me to do the work with you - co-create idea lists, plans, priorities, whatnot - and by inserting a bit of me into what really is yours, you most likely make me like what you do a lot. 

Pro tip: this works on actual managers too. And probably on people who are not me. You also might gain something by tapping into the people around you, and just seeing you use others makes you appear active - because you are! 

6. Mind the cost

Your days at work cost money. Make sure what you cost and what your results are stay balanced. Manage the cost - you make choices on how much time you give to a particular thing. Be thoughtful and focus on risks. 

Time on something isn't just the direct cost of the time. It is also time away from something else. In the time I spent writing this article, I could have:
  • Had a fruitless argument over terminology with someone on the internet
  • Set up a weather API and tried mocking it
  • Read the BDD Books' Formulation that I have been planning on getting to all week
  • Played a fun game of Don't Starve Together
  • ....
We make choices of excluding things from our lives all the time. And I will have my perspective on *your* choices of cost vs. results. 


All in all, I think we are all on a learning journey, and not alone. Make it show and allow yourself to improve and grow. 



Sunday, April 25, 2021

The English Native Speakers Are Missing Out

I read an interview this week. Already a few years old, the interview focused on asking Mervi Hyvönen, then about to retire from Kela, the Finnish Social Insurance Institution, about her career. 

Mervi started working with testing in 1973. The first software she tested used punch cards as the input mechanism for the program and the data. Before retiring, she had tested software in the same organization across five different decades.

The really interesting piece in the article was her description of their approach to testing: Kela has long been known as a place holding space for great exploratory testing. She describes how they were doing exploratory testing before the first messages from across the seas reached Finland and gave it a new name. 

Those of us aware of the history of exploratory testing may remember that the term was coined by Cem Kaner in 1984, and mentioned in the 1st edition of his Testing Computer Software in 1988. He did not invent it, he coined it - he saw others in addition to his organization doing it, and he gave it a name. It went down in testing history as the way product companies in Silicon Valley tested. It was also the way the pioneering companies in Finland tested. And I am pretty sure Finland is not the only corner of the world erased from the history in this case.

When conferences seek great speakers, non-native English speakers are often secondary. 

When we read research, the shared research language is English. And we know that a lot of scientific work of the past was written in German, and that the Russian scientific community still publishes heavily in their native language. 

When I chose to start speaking and later blogging, I started first in Finnish. Finns learn better when the material is in their own language. But two-language authorship was taking a toll on my content, and I later shifted. 

When the Agile Manifesto came about, I already had a few years of researching "lightweight software development methods" behind me. 

When Exploratory Testing as a term started to show up in Finland, I had already been doing it. 

The availability of information in the world is very English-language centric, and it creates a significant bias. 

I don't want Mervi, the pioneer of Exploratory Testing in Finland, to be forgotten. Her organization grew to hundreds of testers, and those testers took testing as they knew it forward to other organizations. Local companies creating professionals who moved around made testing here what it is today - not by reading a few articles in English, but by applying the ideas in Finnish. 

You all just don't know about it as long as the only language you read is English. As long as the only conversations you have are in English. And even when we don't know something exists, it very well might. 




Friday, April 23, 2021

Learning without Impact on Results

As I talk about exploratory testing, I find myself intertwining three words: design, execution, learning. Recently I have come to realize that I am often missing a fourth one: results. I take results for granted.

When I frame testing, I often talk about the idea that learning in a way that gives us a 1% improvement would mean we are able to do what we did before while investing 4 minutes less for the same results. There is no learning without impact on results. 

Except there is. I recently talked to a fellow tester who asked me a question that revealed more about my thoughts than I had realized:

Aren't we all learning all the time?

This made me think of a past colleague who told me that the reason I find it hard to explain what exploratory testing is, is that the way I learn as an exploratory tester happens in a different dimension than our usual words entail. 

So let's try an example - watching a tester at work. 

They were testing a feature for many weeks. The task in Jira was open, collecting the little dots tasks collect when they don't move forward in the process. The daily reports included mentions of obstacles, testing, preparing for testing, figuring things out. The feature wasn't a little one, and it had relevant work ongoing on the developer side, showing up as messages on a small conversation channel asking about unblocking dependent tasks and figuring out troubles. The tester was on the channel, but never visible. 

Eventually, the work came to a conclusion, with the task still open for finalizing test automation. Finally a pull request emerged. 20 lines of code. Produced by producing 2000 lines of code - or rather, 200 lines of code produced 10 times. Distilled learning, tightly packaged. 

Reflecting on the investment, the company got four results for quite a number of hours of work: 

  • One fixed bug
  • One regression test
  • A tested feature
  • A tester with 'learning'
Yet, I would find myself dissatisfied with the result. The work had the appearance of design, execution and learning intertwined; it used automation to explore and to document, and could be a model example of what I mean by contemporary exploratory testing. What I missed out on was the balance between learning making the tester better, and learning making the product and organization better. 

The visible external results were one fixed bug and one regression test, in an environment that appears to force communication to be visible. No tester-initiated early discussions leading to aha effects improving the product before it was coded. Limited reports on suspicious behaviors. Suspicious coverage of a tested feature no one could analyze or add to. And eventually, late discoveries of issues in the area by people coming after, making the point that remaining alone was a wrong choice.

Sometimes when we think of 'learning', we overemphasize knowledge acquisition over actionable results of the learning. It is as if our work as testers were to know it all, but to do nothing with it other than know it. We collect information in ourselves, and lock it up. And that is not sufficient. 

Instead, the actionable results matter. The learning in exploratory testing is supposed to improve the information, and our colleagues, who also learn through our distilled learning. 


Saturday, April 17, 2021

Start with the Requirements

There's a whole lot of personal frustration in watching a group of testers take requirements, take designs of how the software is to be built, design test cases, automate some of them, execute others manually, take 10x more time than I think they should, AND leak relevant problems to the next stages in a way that makes you question if they were ever there in the process. 

Your test cases don't give me comfort when your results say that the work that really matters - information on the quality of the product - is lacking. Your results include the neatly written steps that describe exactly what you did, so that someone else with basic information about the product can take them up next time for purposes of regression. Yet that does not comfort me. And since the existence of those tests makes the next rounds of testing even less effective for results, you are only passing forward things that make us worse, not better.

Over the years, I have joined organizations like this. I've cried over the lack of understanding in the organization of what good testing looks like, and wondered if I would be able to rescue the lovely colleagues doing bad work just because they think that is what is asked of them.

Exploratory testing is not a technique you apply on top of this to fix it by doing "sessions". It is the mindset you intertwine in, to turn the mechanistic chain of requirements to designs to test cases to executions, manual and automated, into a process of thinking, learning and RESULTS. 

Sometimes the organization I land in is overstructured and underperforming. If the company is successful, usually it is only testing that is overstructured and underperforming, and we just don't notice bad testing when great development exists. 

Sometimes the organization I land in has really bad quality and testing is "important" as patching it. Exploratory testing may be a cheaper way of patching. 

Sometimes the organization I land in has excellent quality and exploratory testing is what it should be - finding things that require thinking, connections and insight. 

It still comes down to costs and results, not labels and right use of words. And it always starts with requirements. 



Friday, April 16, 2021

The dynamics of quadrants models

If you sat through a set of business management classes, chances are you've heard your teacher joke about quadrants being *the* model. I didn't get the joke back then, but it is kind of hilarious now how humankind puts everything in two dimensions and works on everything based on that. 

The quadrants popularized by Gartner in market research are famous enough to hold the title "magic" and have their own Wikipedia page. 

Some of my favorite ways to talk about management, like Kim Scott's Radical Candor, are founded on a quadrants model. 

The field of testing - Agile Testing in particular - has created its own popular quadrant model, the Agile Testing Quadrants.  

Here's the thing: I believe the agile testing quadrants are a particularly bad model. They were created by great people, but they give me grief:

  • They do not fit into either of the two canonical quadrant model types of "move up and right" or "balance". 
  • They misplace exploratory testing in a significant way
Actually, it is that simple. 

The Canonical Quadrant Models

It's like DIY for quadrants. All you need is two dimensions that you want to illustrate. For each dimension, you need some descriptive labels for the opposite ends, like hot - cold and technology - business. See what I did there? I just placed technology and business at opposing ends in a world where they are merging for those who are successful. Selecting the dimensions is a work of art - how can we drive a division into two that would be helpful, and, for quadrants, in two dimensions? 


The rest is history. Now we can categorize everything into a neat box. Which takes us to the second level of canonical quadrant models - what they communicate.

You get to choose between two things your model communicates. It's either Move Up and Right or Balance.

Move Up and Right is the canonical model of the Magic Quadrants, as well as of Kim Scott's Radical Candor. Who would not want to be high on Completeness of Vision and Ability to Execute when it comes to positioning your product in a market, or move towards Challenging Directly while Caring Personally to apply Radical Candor? Move Up and Right is the canonical format that sets a path on the two most important dimensions. 

Move Up and Right says you want to be in Q3. It communicates that you move forward and you move up - a proper aspirational message. 

The second canonical model for quadrants is Balance. This format communicates a balanced classification. For these quadrants, each area is of the same size and importance. Forgetting one, or focusing too much on another, would be BAD(tm).    


Each area holds things that differ along the two chosen dimensions. But even when they are different, the Balance is what matters. 

Fixing Agile Testing Quadrants

We discussed earlier that I have two problems with the agile testing quadrants. They aren't a real model of balance, and they misrepresent exploratory testing. What would fixing them look like?

For support of your imagination, I made the corrections on top of the model itself. 

First, the corner clouds must go. Calling Q1 things like unit tests "automated" when they are handcrafted pieces of art is an abomination. They document the developer's intent, and there is no way a computer can pull out the developer's intent. Calling Q3 things "manual" is equally off in a world where we look at what exactly our users are doing with automated data collection in production. Calling Q4 things "tools" is equally off, as automated performance benchmarking and security monitoring are everyday activities. That leaves Q2, which was muddled with a mix already. Let's just drop the clouds and get our feet on the ground. 

Second, let's place exploratory testing where it always has been. Either it's not in the picture (like in most organizations calling themselves Agile these days) or if it is in the picture, it is right in the middle of it all. It's an approach that drives how we design and execute tests in a learning loop. 

That still leaves the problem of Balance, which comes down to choosing the dimensions. Do these dimensions create a balance to guide our strategies? I would suggest not. 

I leave the better dimensions as a challenge for you, dear reader. What two dimensions would bring "magic" to transforming your organization's testing efforts?  

Monday, April 5, 2021

Learning from Failures

As I opened Twitter this morning and saw the announcement of a new conference called FailQonf, I started realizing I have strong feelings about failure. 

Earlier conversations with a past colleague could have tipped me off, as he keeps pointing out that big success comes out of many small successes, not failures, while I keep insisting that learning requires both success and failure. 

Regardless, failure is fascinating.

When we play a win-lose game, one must lose for the other to win. 

A failure for one is a small bump in the road for others.

In telling failure stories, we often imagine we had a choice of doing things differently when we didn't. 

A major failure for some is a small consequence for others. 

Power dynamics play a role in failure and its future impacts, perhaps even more than success. 

Presenting a failure in a useful way is difficult. 

Missing an obvious bug. Taking the wrong risk. Trusting the wrong people. 

Failures and successes are concepts we can only talk about in hindsight, and I have a feeling that isn't helping us look forward. 

We need to talk about experiments and learning, not success and failure. 

And since I always fail to say what I should be saying, let's add a quote:



Saturday, April 3, 2021

Exploratory Testing the Verb and the Noun

I find myself repeating a conversation that starts with one phrase:

"All testing is exploratory" 

With all the practice, I have become better at explaining how I make sense of that claim and how it can be true and false for me at the same time. 

If testing is a verb - 'Exploratory testing the Verb' - all testing is exploratory. In doing testing in the moment, we are learning. If we had a test case with exact steps, we would probably still look around to see a thing that wasn't mentioned explicitly. 

If testing is a noun - 'Exploratory testing the Noun' - not all testing is exploratory. Organizations do a great job at removing agency with roles and responsibilities, expectations, and processes. The end result is that we get less value for our time invested. 

I realized this way of thinking was helping me make sense of things from a conversation with Curtis, known as the CowboyTester on Twitter. Curtis is wonderful and brilliant, and I feel privileged to have met him in person at a conference somewhere in the USA. 

Exploratory Testing the Noun matters a lot these days. For you to understand exactly how it matters, we need to go back to discussing the roots of exploratory testing. 

Back in the days of the first mentions of exploratory testing by Cem Kaner in 1984, the separation from all other testing came from the observation that some companies heavily separated, by their process, the design and execution of tests. Exploratory testing was coined to describe a skilled, multidisciplinary style of testing where design and execution are intertwined, or as they said in the early days, "simultaneous". The way testing was intertwined in the hands of rehearsed practitioners of exploratory testing made it appear simultaneous: it is typical to watch someone doing exploratory testing work at the same time on things in the moment and the long term, on details and the big picture, and on design and execution. The emphasis was on agency - the doer of the activity being in control of the pace to enable learning. 

Exploratory testing was the Noun, not the Verb. It was the framing of testing so that agency leading to learning and more impactful results was in place. 

For me, exploratory testing is about organizing the work of testing so that agency remains, and encouraging learning that changes the results. 

When we do Specification by Example (or Behavior Driven Development, which seems to be winning out in phrase popularity), I see that we most often do it in a way I don't consider exploratory testing. We stick to our agreed examples and focus on execution (implementation) over enabling the link between design and execution where every execution changes the designs. We give up on the relentlessness of learning, and live within the box. And we do that by accident, not by intention. 

When we split work in backlog refinement sessions, we set up our examples and tasks. And the tasks often separate design and execution, because it looks better to an untrained eye. But in closing those tasks to fit the popular notion of a continuous stream, we create the separation of design and execution that removes exploratory testing. 

When we ask for test cases for compliance, linked to the requirements, we create the separation of design and execution that removes exploratory testing. 

When supporting a new tester, we don't support the growth of their agency by pairing with them and ensuring they have both design and execution responsibility; we hand them tasks that someone else designed for them. 

Exploratory testing the noun creates various degrees of impactful results. My call is for turning up the degrees of impact, and it requires us to recognize the ways of setting up work for the agency we need in learning.




Friday, April 2, 2021

Learning for more impactful testing

I wrote a description of exploratory testing and showed it to a new colleague for feedback. In addition to fixing my English grammar (appreciated!), she pointed out that when I wrote on learning while testing, I did not emphasize enough that the learning is really supposed to change the results to be more impactful.

We all learn, all the time, with pretty much all we do. I have seen so many people take a go at exploratory testing the very same application, and what my colleague pointed out really resonated: it's not just learning, it's learning that changes how you test.

Combining many observations into one, I get to watch learning that does not change how you test. People look at the application, and when asked for ideas on what they would do to test it, the baseline is essentially the same, and the question after each test on what they learned produces reports of observations, not actions on those observations. "I learned that this works", "I learned that this does not work", "I learned I can do that", "I learned I should not do that". These pieces include a seed that needs to grow significantly before the learning of the previous test shows up in the next tests. 

It's not that we do design and execution simultaneously, but that we weave them together into something unique to the tester doing the testing. The tester sets the pace, and years and years of learning speed up the pace so that it appears as if we are thinking on multiple dimensions all at the same time.  

The testing I did a year ago still helps me with the testing I do today. I build patterns over long times, over various applications, and over multiple organizations offering a space in which my work happens. I learn to be more impactful already during the very first tests, but continue growing from the intellectual push testing gives. 

We don't have the answer key to the results we should provide. We need to generate our answer keys, and our skills to assess completeness of those answer keys we create in the moment. Test by test. Interaction by interaction. Release by release. Tuning in, listening, paying attention. Always weaving the rubric that helps us do better, one test at a time. 

The Conflated Exploratory Testing

Last night I spoke at the TSQA meetup and received ample compensation in the inspiration people in that session enabled through conversations. Showing up for a talk can feel like broadcasting, and that gives me the chance of sorting out my thoughts on a different topic every single time, but when people show up for conversation after, that leaves me buzzing. 

Multiple conversations we ended up having were on the theme of how conflated exploratory testing is, and how we so easily end up with conversations that lead nowhere when we try to figure it out. The honest response from me is that it has meant different things in different communities, and it must be confusing for people who haven't traversed through the stages of how we talk about it. 

So, half-seriously, tongue in cheek, I set out to put the stages of it down in this note. Thinking of the good old divisive yet formative "schools of testing" work, I'm pretty sure we can find schools of exploratory testing. What I would hope to find, though, is a group of people who would join in describing the reasons why things are the way they are with admiration and appreciation of others, instead of ending up with the one school to rule them all with everything positive attached to it. 

Here are the stages, each still existing in the world I think I am seeing:

  • Contemporary Exploratory Testing
  • Agile Technique Exploratory Testing
  • Deprecated Exploratory Testing
  • Session-based Exploratory Testing
  • ISTQB Technique Exploratory Testing
  • The Product Company Exploratory Testing

As I am writing this post, I realize I want to sort this thinking out better and I start working on slides of comparison. So with this one, I will leave it as an early listing, making a note of the inspirations yesterday:

  • The recruiting story, where people show up telling how they schedule an exploratory testing session at the end, only to find it cancelled with other activities taking over. Session framing as a low-priority task is unfortunately common with the Agile Technique Exploratory Testing. 
  • The middle era domination by managing with sessions story, where session-based test management hijacked the concept, becoming the defining criterion for exploratory testing to survive in organizational frames not founded on trust. 
  • The common course providers forcing it into a technique story, where people learned to splash some exploratory testing on top to fix humanity instead of seeking power from it.
  • The unilateral deprecation story, where the terrible twins' marketing gimmicks shifted conversations to testing in a particular namespace, to create the possibility of coherent terms in a bubble. 
I believe we are far from done on understanding how to talk about exploratory testing amongst doers, towards enablers like managers, or how to create organizational frames that enable this style of learning.  



Wednesday, March 24, 2021

Reality Check: What is Possible in Tester Recruitment?

I delivered a talk at Breakpoint 2021 on Contemporary Exploratory Testing. The industry at large still keeps pitting manual testing against automated testing as if those were our options, when we should reframe all work as exploratory testing in which manual work creates the automation to do the testing work we find relevant to do. 

You can't explore well without automating. You can't automate well without exploring. 

Intertwining discovery and documenting with automation - at the level of minutes passing as we explore - we enable discovering with a wider net using that automation. 
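
To make the intertwining concrete, here is a minimal sketch of what documenting a discovery as automation can look like mid-exploration. Everything in it - the endpoint, the inputs, the response shape - is an invented placeholder rather than anything from my projects:

    # Exploring a hypothetical /search endpoint, I notice odd behavior
    # around unusual queries. Instead of a notebook entry, the
    # observation becomes an executable probe I can rerun and extend.
    import requests

    BASE_URL = "https://example.test/api"  # placeholder system under test

    def probe_search(query):
        """One exploratory observation, kept as executable documentation."""
        response = requests.get(f"{BASE_URL}/search", params={"q": query})
        return response.status_code, response.json()

    # Widening the net: the same probe sweeps inputs I would not have
    # the patience to type by hand, while I watch for surprises.
    # (Assumes the endpoint returns JSON with a "results" list.)
    for query in ["", " ", "ä", "a" * 1000, "<script>"]:
        status, body = probe_search(query)
        print(repr(query[:10]), status, len(body.get("results", [])))

The value is not the script itself, but that the next round of exploring starts from a wider net.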

A little later in the same day, Erika Chestnut delivered her session Manual Testing Is Not Dead... Just the Definition. A brilliant, high-energy message about a Director of QA socializing the value of testing by centering the non-automated parts, making space for all the important work that needs doing to make testing an equal perspective at all tables - for the software and the services around the software, a systems perspective. Erika was talking about my work and hitting all the right notes except for one: why would doing this require being manual, or as Erika reframed it, "humanual"? 

When I talk about contemporary exploratory testing including automation as a way of documenting, extending, alerting, and focusing us on particular types of details, I frame it in a "humanual" way. Great automation requires thinking humans, and not all code written for testing purposes needs to end up "automated", running in a CI pipeline. 


The little difference in our framings is in what types of people we say we are about to recruit, as leaders of the craft, to fulfill our visions of what testing ends up as in our organizations. My framing asks for people who want to learn programming and collaborate tightly also on code, in addition to doing all the things people find to do when they can't program. Erika's framing makes it clear there will be roles where programming isn't a part of the required skills we recruit for. 

This leads me to a reality check.

I have already trained great contemporary exploratory testers, who center the "humanual" thinking but are capable of contributing on code and finding a balance between all necessary activities we need to get done in teams. 

I have already failed to recruit experienced testers who would be great contemporary exploratory testers, generally for the reason that they come with either a "humanual" or an "automated" bias, designing testing to fit their own skillsets over the entire team's skillset.

If you struggle with automation, it hogs all your time. If you struggle with ideas that generate information without automation, automation is a tempting escape. But if you can do both and balance both, wouldn't that be what we want to build? But is that realistically possible or is the availability of testers really still driving us all to fill both boxes separately? 

Or maybe, just maybe, we have a lost generation of testers who are hard to train out of the habits they already have, even if I was able to make that move at around 20 years of experience. The core of my move was not needing the time for the "humanual" part first; it was getting over my fear of activating a new skill of programming that I worried would hog all my time - only to learn that the fear did not have to become reality. 





Saturday, March 13, 2021

Two Little Ideas for Facilitation of Online Ensembles

Today was a proud moment for me: I got to watch Irja Straus deliver a remote Ensemble Testing experience at Test Craft Camp 2021. We had paired on testing the application together, worked on facilitation ideas, and, while at all of that, developed what I want to think of as a solid friendship. 

The setup of Irja's ensemble today was 12 participants fitting into a 45-minute session. Only a subset of participants got to try their personal action as the main navigator and the driver, but there were a lot of voices making suggestions for the main navigator. The group did a great job under Irja's guidance, definitely delivering on the idea that a group of people working on the same task in the same (virtual) space, at the same time, on the same computer is a great way of testing well. 

Watching the session unfold, I learned two things I would add to my palette of facilitation ways.

1. Renaming the roles

Before the session, I had proposed to Irja that we try the kid-focused terminology for ensembling. Instead of saying driver (the person on the keyboard), we would say 'hands'. Instead of saying navigator (the person making decisions), we would say 'brains'. 

Irja had adapted this further, emphasizing that we had a 'major brain' and 'little brains', incorporating the whole ensemble into the work of navigation while still explaining that the call, when there were multiple choices of where the group would go, is on one person at a time. 

Watching the dynamic, I would now use three roles:

  • hands
  • brain
  • voice
You hear voices, but you don't do what the voices say unless it is aligned in the brain. 

2. Popcorn role assignment

Before the session, I had proposed to Irja a very traditional rotation model where the brain becomes the hands and then rotates off to the voices in the ensemble.

Irja added an element of volunteering from the group into the roles. When the first people were volunteers, the session started with a nice energy and modeled volunteering for the rest of them. A time limit ensured that the same people did not repeat volunteering into the same roles. One person first volunteering as hands and later volunteering as brain threw the preplanned rotation off a bit, but also generated a new way of facilitating it.

Have people volunteer, with the idea that we want to go through everyone as both brains and hands, but the order in which you play each part is up to the ensemble. Since physically moving people to the keyboard was not required, a popcorn assignment works well. 
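
If a facilitator wants bookkeeping for this, it fits in a few lines. This is a hypothetical sketch of the idea, not a tool Irja used; the names are invented:

    # Hypothetical bookkeeping for popcorn role assignment: anyone may
    # volunteer for hands or brain, but everyone should play both
    # roles once before anyone repeats one.
    pending = {name: {"hands", "brain"} for name in ["Alice", "Bo", "Cleo"]}

    def volunteer(person, role):
        """Accept a volunteer only for a role they have not yet played."""
        if role in pending.get(person, set()):
            pending[person].discard(role)
            print(f"{person} takes {role}")
        else:
            print(f"{person} already played {role} - popcorn to someone else")

    volunteer("Alice", "hands")   # accepted
    volunteer("Alice", "hands")   # rejected: Alice already played hands
    volunteer("Alice", "brain")   # accepted: a different role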



Tuesday, February 23, 2021

End to End Testing as a Test Strategist

For the last 10 months, I've worked on a project that assigned me end-to-end testing responsibility. As I joined the project, I had little clue what the work would end up being. I trusted that, like with all other work, things would sort themselves out as I actively learned about it. 

Armed with the idea of job crafting ('make the job you have the job you want') I knew what I wanted: 

  • principal engineer position with space for hands-on work
  • making an impact, being responsible
  • working with people, not alone
With a uniquely open mandate, I figured out my role would be a mix of three things:
  1. tester within a team
  2. test manager within a project delivering a system
  3. test coach within the organization 
The learnings have been fascinating. 

My past experience of growing teams into being responsible for the system rather than their own components was helpful, and resulted in seeing both the confusion, and the importance of that confusion, when one team member chooses a scope of work bigger than the team's. 

I had collected more answers to why than many people with a longer history in the company, by focusing on connections. The connections enabled me to prioritize the testing I did, and to ask others for the testing I wasn't going to do myself. 

The way I chose to do end-to-end testing differed from what my predecessor had chosen it to be for them, as I executed a vision of getting the system end-to-end tested, but in small pieces. 

I focused on enabling continuous delivery where I could, taking pride in the work where my team was able to release their part of the system 26 times in 2020. 

I focused on enabling layering of testing: having someone else test things that were already tested. It often proved valuable, and enabled an approach to testing where we are stronger together. The layered approach allowed me to experience true smoke testing (finding hardware problems in incremental system testing).

The layered approach also helped me understand where my presence was needed most for the success of the project, and move my tester presence from one team in the project to another half-way through. I believe this mobility made a difference in scale as I could help critical parts from within. 

I came to appreciate conversations as part of figuring out who would test things. I saw amazing exploratory testing emerge, done by developers following hunches about critical areas of risk from those conversations. 

Instead of taking my end to end testing responsibility as a call to test, I took it as a call to facilitate. While I have used every single interface the system includes (and that is many!), every interface has someone who knows it deeper and has local executable documentation - test automation - that captures their concerns as well as many of mine. 

Looking back, it has been an effort to do end to end testing as a test strategist. You can't strategize about testing without testing. My whole approach to finding the layers is founded on the idea that I test alongside the others. Not just the whole team, but all of the teams, and in a system of size, that is some ground to cover. 


Sunday, February 14, 2021

Three routes to fixing problems test cases hide

I'm a big fan of exploratory testing, which often means I have reservations about test cases - or at least ideas on how to interpret test cases in a way that does not require such an intensive investment in writing things down, that helps us write down the things that are needed, and that stops us from thinking of test cases - or any documentation, for that matter - as representing all the testing we are doing. 

Today I wanted to share three experiences from my career from three different organizations on how we tweaked our test cases to fix a problem all three organizations shared: using a lot of time for testing, but leaking significant bugs we believed we should be able to find. 

Organization 1: Business Acceptance Testing

The first example is from an organization where I managed business acceptance testing. I was working with two different projects, moving the business acceptance testing phase from a months-long endeavor to something that would fit in 30 days. One of my projects had a history of writing detailed test cases; the other had a history of not really using test cases. In getting the timeframe condensed, understanding what we had in mind to do and being able to reprioritize was essential. 

For both projects, we used Quality Center (HP's ALM solution was called that back then). Both projects started with test data in mind, and that is what we used as a starting point for our tests. We selected our test data according to a set of criteria, and wrote the criteria down in the test case title, summarizing the business need for that particular data type. And as test steps, we used Quality Center's concept of test templates - a reusable set of steps that described, on a high level, the processes the two teams were running, the same for every single test case. 

Thus our test cases were titles, with template test checklists to help us analyze and reprioritize our work. The same-looking test could take a day in the first week and 15 minutes later in the cycle. The test case looked the same, but we used it differently, to explore. 

One of the two projects had a history of writing test cases where the steps also described the detail, and was concerned that giving those up might mean forgetting to cover something, as the information about changes isn't easy to pass around a whole group doing acceptance testing. So we split our weeks into the first two, where we used the "old style" detailed tests, and the latter two, where we used the new style. We found all problems during the latter two weeks - but in general, the software contractor had done a really great job with their testing, and the numbers of bugs we had to deal with were record low. 

Organization 2: Product testing with Reluctant Developers

The second example is from an organization I joined as their first and only test specialist. Under their project manager's leadership, they had figured out writing test cases into Word documents, one for each major area of the product. Tracking that the test cases were completed was central to the way they tested amongst the group of developers. Automation, on unit or system level, was not a thing for them yet. 

As I joined, the project manager wanted me to start creating test case documents like the ones they had, improving them, and had ideas of how many test cases they would expect me to complete every day. 

Sampling one of the existing test specifications: it had 39 pages, 46 test cases, and 3 pieces of relevant information I could not have figured out from commonly available knowledge without reading the text. 

I made a deal with the project manager to write down structured notes while I tested, and we got to a place where I was trusted with testing, reluctant developers were trusted to test with me, and the test cases went away. Instead we used checklists of features to remind us what could be checked, designing tests in the moment with regard to what the changes to the system were. 

Organization 3: Product testing with certification requirements

The third example is from an organization with a history of writing test cases that are traced back to requirements. Test cases are stepwise instructions. 

The change I introduced was to have two kinds of test cases: [Scenario] and [Feature]. Scenarios are what we test with, and they leave a lot of room for what exactly needs to be verified; the same test could take a week or an hour. For Scenarios, the steps are features as a checklist - what features are part of that user journey. When we feel we need a reminder of how to see a basic, sunny-day scenario of a feature, to remember where testing starts from, that is where Feature tests come in. The guideline was to write down only what wasn't obvious and to keep instructions concise. There can be a feature test without any steps at all. Steps are optional. 
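
To show the shape rather than just describe it, here is an invented example of both kinds - the domain content is made up, only the structure mirrors what we did:

    [Scenario] New customer orders with a discount code
      Steps as feature checklist: registration, product search,
      discount codes, checkout, order confirmation email

    [Feature] Discount codes
      Codes are managed under the admin's campaign settings; a sunny-day
      check is seeing the discount on the checkout total. No further
      steps - everything else was obvious.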

Clearly, the test cases don't describe all the testing that takes place. But they describe seeing that what we promised would be there is there, and they help us remember and pass on the information of how to see a feature in action. 

The Problems Test Cases Hide

Test cases can lead people into thinking that when they've done what they designed to do - the test cases - they are done testing. But testing does not work that way. The ways software can fail are versatile and surprising. And we care about results - information and bugs - over the documentation. 

Too much openness does not suit most of us. But too much prescription suits us even worse. And if prescription is something we want to invest in, automation is a great way of documenting in a prescribed manner.
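
As a closing sketch of what that prescribed documentation can look like - a minimal, invented Python example, not code from any of the three organizations:

    # A prescribed check as executable documentation (invented example).
    def apply_discount(price, percent):
        """Hypothetical stand-in for a feature under test."""
        return round(price * (1 - percent / 100), 2)

    def test_discount_rounds_to_the_cent():
        # The promise we made, written down in a form that runs.
        assert apply_discount(19.99, 10) == 17.99

    if __name__ == "__main__":
        test_discount_rounds_to_the_cent()
        print("prescribed check passed")

The prescription lives in code that runs on every change, instead of in steps a human has to re-read and re-type.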