Wednesday, September 29, 2021

The Six Things a Facilitator Can Do to Improve an Ensembling Session

Today as I shared on Ensemble Testing at EuroSTAR around lunch, one of the questions led us to discuss specifically what the facilitator can do to make ensembling better. Watching people fail from the back row would make me squirm uncomfortably, so what can I do? 

These things have been very useful to me. 

1. Support Moving Forward with Questions

You see the driver on the keyboard not moving, and the navigator off the keyboard unable to decide what to do. Ask questions! For example, you could ask the navigator, "What would you like to do next?" A more general rule is to try to talk in questions as a facilitator. Think about what you want the ensemble to do so that they work well and have a great experience, and frame your comment as a question that guides them towards that great experience. 

2. Call out a Thing They Do 

You see them doing something that they don't realize they do, but you realize it because you've seen others do the same and struggle. Call out the pattern. For example, when an ensemble is making good notes about their testing with the kind of feature-variable-data-structure modeling you have learned to appreciate, name it in the moment. Giving something a label makes it a little easier to retain. 

Don't overdo this during the session; you can also save examples to point out in the retro. Call it out if it helps in the moment to retain the thing and adds to the vocabulary they use to communicate successfully. 

3. Step in to navigate

I use this one a lot in teaching ensembles, but I have also found it useful on difficult work. For example, I might say: "Let's pause the rotation and I'll navigate a bit." This frees the current navigator, who steps back in to continue with the timer as soon as I step out. 

I use the same pattern with a single expert - asking them to navigate when the struggle of the others in the ensemble is no longer a benefit for their learning. That is, I only step in to show if I know, but I can also ask someone else, or even a volunteer, to step in for a while. 

4. Stop the Ensemble for a Mini-retro

Make space for them to fix themselves. A lot of new ensembles need that. Well, a lot of older ensembles still need someone pointing out they should have a conversation. I once watched Woody Zuill do just that - point out a dynamic that the team needed to have a conversation on. 

Some of my best facilitation tricks involve calling a retro after just a few rotations. People somehow need a moment where they agree on how to fix their work style before they actually fix it. The facilitator can create those spaces. 

5. Set a Constraint

In one of the first ensembles I ever facilitated, I saw my co-facilitator use this when making me, the expert of the group, step in to navigate. With a twist though - I had narrow rules on the type of work I was supposed to do. The work was exploratory testing, and the new group struggled with note taking. The constraint applied to me was to only improve notes - the structure and content of what we had already learned. 

I have used this technique since, and it works great, but different groups need different constraints. 

Helping the ensemble figure out the scope of the task they are on right now is also a way of setting these constraints. Thinking in terms of what is included, adding to what is included only with a "yes, and..." rule, and parking ideas for the future all help an ensemble work. 

6. Visual Parking Lot

Create a space - in the documentation or on a whiteboard - to make notes of things you leave for later. People generate great ideas while the work is ongoing, and they may have forgotten them by the time we look for the next piece of work to do. Give them a space and mechanism to park those ideas as they emerge, and occasionally call for a moment of reflection on structuring the parking lot. 



Saturday, September 25, 2021

Hiring manual and automation testers

 In a meeting about hiring a new tester, a manager asks me: 

Are we looking for a manual or automation tester? 

In my head, time stops and I have all the time in the world to wonder about the question. I'm neither. Nor are any of my immediate team colleagues. Yet look into the next team and they have one manual and one automation tester. No wonder the manager asks this. We've moved from this to the next level. We're neither. We're both. Preferably, we're something different: contemporary exploratory testers. 

In the actual conversation, I don't remember the exact words I used to explain this, but I remember the manager's reaction: "that makes sense". 

We are looking for someone who is *learning*, who does not box testing into *manual* and *automation*, but who builds on a strong foundation of understanding testing and does not stop at the sight of code. 

We want a tester who, when changing the hardware setups and network configurations, also changes the setups in the test automation repos and verifies that whatever tests we have automated will still run, instead of handing the information and task to someone else. 

We want a tester who reviews other testers' test automation pull requests and proposes improvements both in what gets tested and how it gets tested, and understands what the automation now covers. 

We want a tester who reviews application developers' pull requests for the scope and risk of change, and targets their activities using this information as one source of understanding what they might want the team to test. 

We want a tester who documents their lessons from spending days deeply analyzing features for problems by leaving behind some tests that run in automation. 

We want a tester who talks with documentation specialists, product management, project delivery organization and support, and turns the lessons into investigations that could leave something behind in either unit tests or system level test automation. 

We want a tester who pays attention to the state of the automated tests and volunteers to investigate and fix them in the team. 

We want a tester who creates automation to do repeatable driving and monitoring of a feature the team is changing now, analyzes the insights from the repetitions, and also considers throwing away the automation when it makes no sense to keep it around continuously. 

We want a tester who will spend four weeks building the most complex analog measurement test setup with resistors and power sources, and understands which parts of it are relevant to include in our test automation setups. 

We want our testers to work without having to hand off everything within testing to someone else just because that is the only way they can imagine good testing and good test automation co-existing. 

I have these testers and I grow these testers. The 14-year-old intern who joined us this week has already been a tester like this while working in pairs and ensembles, and picking up tasks they can do. They've written tests in Python for APIs and in Robot Framework for GUIs, and found critical bugs in ongoing features. 
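
To give a flavor of what those first API tests can look like, here is a minimal sketch in Python with pytest and requests - the endpoint, payload and field names are invented for illustration, not taken from our product:

    import requests

    BASE_URL = "http://localhost:8080/api"  # hypothetical service under test

    def test_created_item_can_be_fetched():
        # Create an item through the API and expect a "created" response
        created = requests.post(f"{BASE_URL}/items", json={"name": "sample"}, timeout=5)
        assert created.status_code == 201

        # The same item should be retrievable afterwards with the same data
        item_id = created.json()["id"]
        fetched = requests.get(f"{BASE_URL}/items/{item_id}", timeout=5)
        assert fetched.status_code == 200
        assert fetched.json()["name"] == "sample"

Small, readable tests like this are exactly the kind of tasks that are easy to pick up while pairing or ensembling.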

Hire for potential. Hire for growth. Hire for learning. Hire for attitude. 

If the attitude is missing the power of "yet", as in "I don't automate, yet" or "I don't design versatile tests, yet", it may be a better idea to invest time into someone who already harnesses the power of yet. I require working with code from a tester. But just as much, I require them to be ready to become excellent testers in their own right. 

Sunday, September 19, 2021

There are Plenty of Ways to Talk about Exploratory Testing

Out of all things Ministry of Testing, I am part of a very small corner. I speak at the independent meetups even when they fly under that flag, I speak at their sessions if they invite me (which they don't), and I am a silent lurker on a single channel, exploratory-testing, on their Slack. Lurking provided me a piece of inspiration today: an article from Jamaal Todd. 

Jamaal asked LinkedIn and Reddit about exploratory testing and learned that, out of the roughly 400 people who took the time to click on the polls, less than 10% of his respondents don't do exploratory testing, 50-70% give it a resounding yes, and a fair portion find it at least sometimes worth doing. 

What Jamaal's article taught me is that a lot of people recognize it as something of value, and that surprised me a little bit. After all, as we can see in the responses by one of the terrible twins of testing in the Slack, they are doing a lot of communication around the idea that exploratory testing does not exist.

It exists, it is doing well, and we have plenty of ways to talk about it - which can be really confusing. 

When I talk of exploratory testing, I frame it as Contemporary Exploratory Testing. The main difference in how I talk about it is that it includes test automation. Your automated tests call you to explore when they fail, and they are a platform from which you can add power to your exploration. Some of them even do part of the exploration for you.
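
To sketch what "doing part of the exploration for you" can mean in practice, here is a small property-based test with Hypothesis - the function under test is made up for illustration. The tool generates the inputs, and any failure it finds is an invitation to go explore that corner of the behavior.

    from hypothesis import given, strategies as st

    def normalize_name(name: str) -> str:
        # Made-up function under test: trim surrounding whitespace, title-case the rest
        return name.strip().title()

    @given(st.text())
    def test_normalized_names_have_no_surrounding_whitespace(name):
        # The property should hold for every generated input,
        # not just the few examples we would have thought of by hand.
        result = normalize_name(name)
        assert result == result.strip()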

Not everyone thinks of exploratory testing this way. The testing communities tried labeling different ideas a few decades ago with "schools of testing", and we are still hurting from that effort. When the person assigning labels to others does so to promote their own true way, the other labels appear dismissive. "Context-driven" sounds active, intentional. But "Factory" is offensive. 

One of the many things I have learned in programming is that naming is one of the hard problems. A lot of times we try to nail the name early on, whereas we could go about and rename things as we learn their true essence. Instead of stopping to think about the best name, start with some name, and refactor. 

So, we have plenty of ways to talk about exploratory testing. What are they? 

  1. Contemporary Exploratory Testing. It includes automation. Programming is non-negotiable, but programming is done by smart people. The reason we don't automate (yet) is that we are ramping up skills. 
  2. Exploratory Testing 3.0. It does not exist; there is only testing. The non-exploratory part we want to call 'checking', mostly to manage the concern that people don't think well while they focus on automation. Also known as RST - Rapid Software Testing. It is all exploratory testing.  
  3. Technique Exploratory Testing. We think of all the things we have names for, and yet there is something in testing that we need to do to find the bugs everything else misses. That is what we call exploratory testing. Managing this technique through sessions of freedom is a convenient way to package it. 
  4. Manual Exploratory Testing. It's what testers who don't code do. An essential part of defining it is that automation is not it, usually motivated by testers who already have their hands full of valuable work without automation. 
  5. Session-based Exploratory Testing. Without management through sessions, exploratory testing isn't disciplined and structured. The focus is on frequently planning what we use time on and ensuring there is enough documentation to satisfy the organization's documentation needs we aren't allowed to renegotiate. 

Let's start with these. Every piece of writing there is on exploratory testing falls into one of these beliefs. The thing is, all of that writing is useful and worth reading. It's not about one of these being better, but about you being able to make sense of the idea that NOTHING is one thing when people are involved. 

I invite you all to join the conversation on all kinds of exploratory testing at the Exploratory Testing Slack. The link is available on the main page of Exploratory Testing Academy. 


 

Friday, September 10, 2021

The Power of Three Plans

This week, I have taken inspiration from discussions at FroGSConf last week and worked on my test plans. And today, I am ready to share that instead of creating one plan, I created three - I call this the power of three. Very often different formats serve best for different things you are trying to plan for, and for the things I wanted, I couldn't do with just one. 

The 1A4 Master Test Plan

The first plan I created was a master test plan. Where I work, we have a fairly elaborate template from the times when projects were large and not done in an agile fashion. That plan format overemphasizes thinking of details prematurely, but it has good ideas behind it, like understanding the relationship of the different kinds of testing we will or will not do. 

Analyzing it, I could condense the relevant parts of the plan into one A4, with many things that are specific to the fact that we are building embedded systems. 

While I don't think I wrote anything into the plan so insightful that I could not share the actual plan I created for my project, I opt for the safer side. We did not tick some of these boxes, and with this one-glimpse plan we could see which ones we did not tick, and we had a conversation about one of them that we should be ticking even if we didn't plan to. 

You can see my plan format has five segments:
  • Quality Target describes the general ideas about the product that drive our idea of how we think around quality for this product. 
  • Software Unit & Integration Testing describes the developer-oriented quality practices we choose from generally. 
  • Hardware Testing addresses the fact that there is a significant overlap and dependency between hardware and system testing that we should have conversations on. 
  • System Testing looks at integrated features running on realistic hardware, becoming incrementally more complete in agile projects. 
  • Production Testing addresses the perspective that hardware units are individuals with particular failure modes, and that in assembling a system we have customer-specific perspectives to have conversations on. 
For us, different people do these different tests, but good testing is done through better relationships between the groups and establishing trust across quality practices. The conversations leading up to a plan have taken me months, and the plan serves more as a document of what I facilitate than a guideline for how different people will end up dealing with their interdependent responsibilities. 

We could talk about how I came up with the boxes to tick - through workshops on the concepts people have in the company, and by creating structure out of the many opinions. A visual workshop wins over writing a plan, but we can talk about those in another post later. 

The System Test Strategy

The second plan I created was inspired by the fact that we are struggling with focus. We have a lot of detail, and while I am seeing a structure within the detail, I did not like my earlier attempts at writing it down. For the course I teach at Exploratory Testing Academy, I have created a super straightforward way of doing a strategy by answering three questions, and I posted a sample strategy from the course on Twitter. 
I realized I had not written one like this for the project I work in, so I got to work and answered those questions. This particular one I even shared with my team, and while I only got comments from one person, their perception was that it shone light on important risks and reactions in testing. 

In hindsight, my motivation for writing this was twofold. I was thinking about what I would say to the new tester about the ideas that guide our testing as they start in a week, and I was thinking about what would help me prune out the actions that aren't supporting us in the tight schedule we have ahead of us. 

This plan is actually useful to me in targeting the testing I do, and it might help with some in-team conversations. I accept that no document or written text ever clears it all up, but it can serve as an anchor for some group learning. 

The Test Environments Plan

The third plan I produced is a plan of the hardware and connections in the test environments. If there is one thing that does not move in a very agile fashion, it is getting hardware. I am not worried about the hardware we build ourselves, but about the mere fact that we ordered 12 off-the-shelf mini-PCs in May, and we currently expect to receive them in December. There are many things in this space that, if you don't design them in advance, you won't have when you need them. The hardware and systems are particularly tricky in my team with embedded software, since we each have our own environment, we have many shared environments, and some environments we want to keep with little to no need of rebooting so we can run particular kinds of tests on them. 

So I planned the environments. I drew two conceptual schematics of end-to-end environments with the necessary connections, separated purposes into different environments, and addressed the fact that these environments are for a team of 16 people in my immediate vicinity, and for hundreds of us over the end-to-end scenario. 

It was not the first time I planned environments, and the main things I did this week on this plan were ensuring we have what we need for new hires and new features coming in Fall '21, and that we would be better able to help the project discuss the cost and schedule implications of not having some of what we need. 

The Combination

So I have three plans: 
  • The 1A4 master test plan
  • The system test strategy
  • The test environment plan
For now I think this is what I can work with. And it is sufficient to combine them by just linking to each of the three. Smaller chunks are more digestible, and the audiences differ. The first is for everyone testing for a subsystem. The second is for system testing in the subsystem, in integration with other subsystems. The third is for the subsystem, to be reused by the other teams this subsystem integrates with. 

I don't expect I will need to update any of these plans every agile iteration we do, but the ideas will evolve even while they might stand the test of time for the next six months. We will see. 


Sunday, September 5, 2021

Test Plans and Templates

Imagine being assigned responsibility for helping all projects and products in your organization get started on a good foot in testing. There are enough of them that you just can't imagine being there for all of them to teach the necessary skills. You've seen a good few get lost on testing, miss out on the availability of test environments and data, and get delayed. You want to help, to give some sort of guideline.

The usual answer to this is creating a template of some sort, to guide people through important considerations by documenting their thinking. When they document their thinking, others have a chance of saying what is missing. 

If it sounds so appealing, why is it that these plans often fail? People don't fill in the template, finding little value in it. People fill in the template, but the testing done does not match the plan; people don't read the text. And you really can't create the skill of thoughtful thinking through a template. 

Yesterday at #frogsconf (Friends of Good Software), one of the conversations was on the test plans and templates we create. As I saw others' examples and showed mine, I hated every single document that I had written. The documents are not the conversation that precedes good work. The documents create a shallow bond between the reader and the document, and true co-ownership would require ensemble writing of the document so that it's ours, not mine. And instead of the many words, we'd be better off filtering out the core words that will lead our efforts. 

My strategy for test automation nowadays distills into one sentence: if we don't change anything, nothing changes for the better. 

The fewer words we write, the more likely we are to get people to read them, hear them, internalize them and use them to guide their work. 

To create a plan, a better approach might be a visual whiteboard with as few sections to fill as possible. Allow people to find their own words and concepts to explain how they will do the work. 

I shared an example from the course I have been creating, an example I have seen direct students towards testing the application better. The problem is, I needed to do the work of testing the entire application to be able to write that document, and that is not something we can expect with projects. 

I have a few plans I need to do next week, so now is my time to think about what those plans will look like.